161 Commits
1.6 ... main

Author SHA1 Message Date
Toni Uhlig
d629fda779 bump libnDPI to 75db1a8a66476b3c16cc1a8bf63ca2b0e2fba3ed
* incorporate upstream changes:
    - nDPI supports build directories now
    - set memory wrapper
    - classification states
    - process packet signature change

 * disabled fuzz-* test pcaps
    - they cause timestamp diffs with some libpcap builds

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-11-18 09:54:15 +01:00
Toni Uhlig
643aa49d34 bump libnDPI to e9751cec26d80fe2d88706d4f7521a63ec12b3bb
* incorporate replacement of "TLS Susp ESNI Usage" with "Mismatching Protocol with server IP address"

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-10-29 13:51:07 +01:00
Toni Uhlig
8dfaa7c86c Fix CI
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-10-24 08:20:22 +02:00
Toni Uhlig
59caa5231e Dockerfile: build for ArchLinux as well
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-09-24 09:39:51 +02:00
Toni Uhlig
9c0f5141bc Fix "Potentially Dangerous" breed in c-notifyd
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-09-23 12:05:41 +02:00
Toni Uhlig
e8ef267e0a bump libnDPI to 560a4e4954e2db38d995d3cba2c1dcc4276f92d5
* fix some SonarCloud issues

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-09-17 10:37:51 +02:00
Toni Uhlig
2651833c58 CMake/CI: more robust against deprecations
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-09-16 11:19:02 +02:00
Toni Uhlig
bd7df393fe CI: ENABLE_CRYPTO for some builds
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-09-16 10:34:46 +02:00
Toni Uhlig
88cfecdf95 Remove CMake limitation
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-09-15 16:34:19 +02:00
Toni Uhlig
a91aab493c fixed spelling issue
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-09-15 15:09:18 +02:00
Toni Uhlig
fe42e998d0 fixed SonarCloud issues
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-09-15 12:33:47 +02:00
Toni Uhlig
22e44c1e0b removed crypto example
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-09-15 12:33:47 +02:00
Toni Uhlig
d8cad33a70 restored nio code
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-09-15 12:33:47 +02:00
Toni Uhlig
37989db0bb make TLS handshakes great again
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-09-15 12:33:47 +02:00
Toni Uhlig
19f80ba163 Added TLS ncrypt I/O
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-09-15 12:33:47 +02:00
Toni Uhlig
c8c58e0b16 nDPId crypto handshake done
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-09-15 12:33:47 +02:00
Toni Uhlig
6d3dc99fad Switch to OpenSSL for all crypto stuff
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-09-15 12:33:47 +02:00
Toni Uhlig
b8d3cf9e8f Added send packets with type i.e. keyex / json-data
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-09-15 12:33:47 +02:00
Toni Uhlig
510b03cbcd Added preps for different packet types + AAD (type+size)
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-09-15 12:33:47 +02:00
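The commit above prepares packets that carry a type/size header authenticated as AAD: the header stays in cleartext but is fed to the AEAD cipher, so tampering with type or size breaks authentication. A minimal sketch of such framing, stdlib-only; the field widths and type values here are assumptions for illustration, not nDPId's actual wire format.

```python
import struct

# Hypothetical packet types; nDPId's real values may differ.
PKT_TYPE_KEYEX = 1
PKT_TYPE_JSON_DATA = 2

# 1-byte type + 2-byte payload size, network byte order (assumed layout).
HEADER = struct.Struct("!BH")

def frame(pkt_type: int, payload: bytes) -> tuple:
    """Build a packet and return (aad, packet). The header doubles as the
    AAD that would be passed to an AEAD cipher alongside the payload."""
    aad = HEADER.pack(pkt_type, len(payload))
    return aad, aad + payload

def parse(packet: bytes) -> tuple:
    """Split a packet back into (type, aad, payload), checking the size."""
    pkt_type, size = HEADER.unpack_from(packet)
    payload = packet[HEADER.size:HEADER.size + size]
    if len(payload) != size:
        raise ValueError("truncated packet")
    return pkt_type, packet[:HEADER.size], payload

aad, pkt = frame(PKT_TYPE_JSON_DATA, b'{"flow_event_name":"new"}')
```

In the real daemon the payload would additionally be encrypted; here only the framing that produces the AAD is shown.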
Toni Uhlig
66aca303b6 Added HKDF to uniformly distribute an X25519 shared key
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-09-15 12:33:47 +02:00
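The HKDF step referenced above turns a raw X25519 shared secret (whose bytes are not uniformly distributed) into uniform key material. A stdlib-only HKDF-SHA256 sketch per RFC 5869; the hash choice and the salt/info inputs are illustrative assumptions, not nDPId's actual parameters.

```python
import hashlib
import hmac

def hkdf_sha256(ikm: bytes, salt: bytes, info: bytes, length: int) -> bytes:
    """RFC 5869 extract-and-expand with HMAC-SHA256."""
    # Extract: concentrate possibly non-uniform input keying material
    # (e.g. a raw X25519 shared secret) into a pseudorandom key.
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()
    # Expand: stretch the PRK into `length` bytes of output keying material.
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# RFC 5869 test vector 1 (basic test case with SHA-256)
ikm = bytes.fromhex("0b" * 22)
salt = bytes.fromhex("000102030405060708090a0b0c")
info = bytes.fromhex("f0f1f2f3f4f5f6f7f8f9")
okm = hkdf_sha256(ikm, salt, info, 42)
```

In practice (and presumably in nDPId) this would be done through OpenSSL's EVP KDF interface rather than reimplemented.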
Toni Uhlig
0e7e5216d8 Added preps for AAD/KeyEx
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-09-15 12:33:47 +02:00
Toni Uhlig
7ab7bb3772 Added some stats printing to c-decrypt
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-09-15 12:33:47 +02:00
Toni Uhlig
a47bc9caa3 Modified crypto to support multiple peers (multiple sender / multiple receiver) per ncrypt context
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-09-15 12:33:47 +02:00
Toni Uhlig
7d94632811 nDPId decryption example
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-09-15 12:33:46 +02:00
Toni Uhlig
2c81f116bf nDPId decryption example
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-09-15 12:33:46 +02:00
Toni Uhlig
49b058d2d3 Updated OpenWrt In-Source build patch
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-09-15 12:33:46 +02:00
Toni Uhlig
fea52d98ca Added nDPId decryption example
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-09-15 12:33:46 +02:00
Toni Uhlig
02b686241e initial nDPId UDP crypto [WiP!]
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-09-15 12:33:46 +02:00
Toni Uhlig
2cb0d7941b Improved/Updated Grafana Dashboard
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-09-15 10:22:17 +02:00
Toni Uhlig
97e60ad7ec Add security vuln reporting guide
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-09-15 08:07:48 +02:00
Toni Uhlig
eea5a49638 Fixed some example inconsistencies due to recent libnDPI / nDPId updates
* removed unused, unmaintained and erroneous py-flow-dashboard
 * adjusted Grafana dashboard flow breeds (flow categories will be done separately)
 * (C) update (a bit late)

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-09-10 09:54:40 +02:00
Toni Uhlig
a9934e9c9e Removed nDPI/nDPId version/API serialization for nDPId-test to reduce result diffs
* fixed some SonarCloud complaints

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-09-09 12:52:58 +02:00
Toni Uhlig
644fa2dfb3 bump libnDPI to 1c1894720e3827857cfe1afd19bb7fb4618ee594
* fixes a build error with clang on ubuntu due to missing `static inline`s in header files

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-09-09 12:05:26 +02:00
Toni Uhlig
1a6b1feda9 Print NDPI_(C|LD)FLAGS
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-09-09 12:05:26 +02:00
Toni Uhlig
648dedc7ba bump libnDPI to 70536876f2f97b977ed43474872195bf756de67d
* fixes upstream compilation warning due to string truncation

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-09-09 12:05:26 +02:00
Toni Uhlig
19036951c7 bump libnDPI to 1216ec6a2719408a487f696f5b601bdb9eec727d
* incorporated upstream API changes related to detection protocol bitmasks
 * added missing flow detection categories

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-09-09 12:05:26 +02:00
Toni Uhlig
4e7e361d84 bump libnDPI to f8869cd670adc439cc41bde0bd04960e1befafc5
* fix API issue due to changed name of a public struct

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-05-20 11:05:53 +02:00
Toni Uhlig
9809ae4ea0 rs-simple: improved readability and stability
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-05-20 09:55:21 +02:00
Toni Uhlig
97387d0f1c rs-simple: added argh command line parser and "stable" flow table index
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-05-18 14:58:32 +02:00
Toni Uhlig
46ef266139 rs-simple: added DaemonEventStatus deserialization and statistics mgmt
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-05-16 17:48:51 +02:00
Toni Uhlig
ae6864d4e4 CI: build Rust examples
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-05-15 09:23:35 +02:00
Toni Uhlig
f3c8ffe6c1 rs-simple: added first/last seen and timeout in
* prettify units

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-05-15 08:10:14 +02:00
Toni Uhlig
07d6018109 rs-simple: make primitive flow table work
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-05-14 12:36:38 +02:00
Toni Uhlig
dd909adeb8 rs-simple: add flow mgmt w/ TTL hash maps (moka-future)
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-05-03 15:22:57 +02:00
Toni Uhlig
8848420a72 CI: use FreeBSD vmactions main branch
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-04-30 23:00:53 +02:00
Toni Uhlig
f8181d7f6a Fix CI build with PF_RING (build userspace lib only)
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-04-30 22:33:51 +02:00
Toni Uhlig
b747255a5d Add simple rust example (WiP)
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-04-30 22:05:52 +02:00
Toni Uhlig
a52a37ef78 Fix CI
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-04-17 11:00:27 +02:00
Toni Uhlig
ae95c95617 bump libnDPI to c49d126d3642d5b1f5168d049e3ebf0ee3451edc
* fix API issue with a changed function signature

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-03-06 19:00:23 +01:00
Toni
42c54d3755 Initial tunnel decoding (GRE - Layer4 only atm) (#55)
Initial tunnel decoding (GRE - Layer4 only atm). Fixes #53
 * finally make use of the thread distribution seed
 * Handle GRE/PPP subprotocol the right way
 * Add `-t` command line / config option
 * Removed duplicated and obsolete IP{4,6}_SIZE_SMALLER_THAN_HEADER which is the same as IP{4,6}_PACKET_TOO_SHORT
 * Updated error event schema

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-02-25 15:17:16 +01:00
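The GRE tunnel decoding added above has to walk the outer GRE header, whose length grows with its optional fields, to find the encapsulated protocol. A minimal RFC 2784/2890-style parser sketch of that logic; it is an illustration, not nDPId's actual dissector code.

```python
import struct

def parse_gre(data: bytes):
    """Parse a basic GRE header (RFC 2784/2890) and return
    (protocol_type, payload_offset). Optional fields extend the header."""
    if len(data) < 4:
        raise ValueError("GRE header too short")
    flags_ver, proto = struct.unpack_from("!HH", data)
    offset = 4
    if flags_ver & 0x8000:  # Checksum Present -> checksum + reserved1 (4 bytes)
        offset += 4
    if flags_ver & 0x2000:  # Key Present
        offset += 4
    if flags_ver & 0x1000:  # Sequence Number Present
        offset += 4
    if len(data) < offset:
        raise ValueError("GRE optional fields truncated")
    return proto, offset

# GRE header with the Key bit set, carrying IPv4 (EtherType 0x0800)
hdr = struct.pack("!HH", 0x2000, 0x0800) + struct.pack("!I", 0xDEADBEEF)
```

The protocol type field then tells the dissector whether the payload is plain IP or something like PPP (as in PPTP's enhanced GRE), which the commit handles separately.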
Toni Uhlig
bb870cb98f Add FreeBSD CI build
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-02-01 10:54:27 +01:00
Alex Eganov
e262227d65 Fix missing header file for build on freebsd (macos) (#60) 2025-01-31 23:02:13 +01:00
Toni Uhlig
899e5a80d6 CI: Fixed config tests
* set max dots per line to improve CI output
 * commented `flow_risk.crawler_bot.list.load` out

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-01-26 09:58:22 +01:00
Toni Uhlig
053818b242 CI: Added libnl-genl-3-dev to PF_RING build
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-01-26 07:59:55 +01:00
Toni Uhlig
4048a8c300 Set minimal required nDPI version to 4.14 (tarball) and 4.13 (git)
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-01-26 01:10:30 +01:00
Toni Uhlig
09b246dbfa Temp disable flow_risk.crawler_bot.list.load in default config file
* currently broken in upstream

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-01-26 01:00:18 +01:00
Toni Uhlig
471ea83493 bump libnDPI to e946f49aca13e4447a7d7b2acae6323a4531fb55
* incorporated upstream changes

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2025-01-25 10:07:25 +01:00
Toni Uhlig
064bd3aefa fix config header
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-12-09 11:26:45 +01:00
Toni Uhlig
acd9e871b6 Added --no-blink and --hide-risk-info
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-12-09 11:09:34 +01:00
Toni Uhlig
b9465c09d8 Increased maximum value for max-flows-per-thread to 65k
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-12-03 21:02:24 +01:00
Toni Uhlig
3a4b7b0860 CI: make dist test (extract archive, run CMake)
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-11-20 13:40:14 +01:00
Toni Uhlig
34f01b90e3 Fixed CMake warnings
* `make dist`: improved libnDPI git version naming

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-11-20 12:05:03 +01:00
Toni Uhlig
7b91ad8458 Added script to warn a user about issues regarding wrong umask and CPack
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-11-20 11:01:01 +01:00
Toni Uhlig
442900bc14 Dockerfile update
* gitlab-ci runner fix (single runner / multiple jobs)

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-11-18 14:44:44 +01:00
Toni Uhlig
0a4f3cb0c8 Fix Gitlab CI build for some runners
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-11-18 13:51:06 +01:00
Toni Uhlig
4bed2a791f CMake/RPM integration
* CI integration
 * RPM (un)install scripts

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-11-17 17:12:06 +01:00
Toni Uhlig
1aa7d9bdb6 nDPId daemon status event: serialize nDPI API version + Size/Flow
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-11-17 13:12:33 +01:00
Toni Uhlig
bd269c9ead Added global stats diff test
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-11-14 14:33:27 +01:00
Toni Uhlig
7e4c69635a Use chmod_chown() API from utils
* `chmod_chown()` returns EINVAL if path is NULL

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-11-14 13:47:46 +01:00
Toni Uhlig
9105b393e1 Fixed some SonarCloud issues
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-11-14 10:21:35 +01:00
Toni Uhlig
9efdecf4ef bump libnDPI to 59ee1fe1156be234fed796972a29a31a0589e25a
* set minimum nDPI version to 4.12.0 (incompatible API changes)
 * fixed `ndpi_debug_printf()` function signature
 * JSON schema (flow): added risk `56`: "Obfuscated Traffic"
 * JSON schema (flow): added "domainame"
 * fixed OpenWrt build

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-11-13 17:23:31 +01:00
Toni Uhlig
8c114e4916 cosmetics
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-11-10 13:43:26 +01:00
Toni Uhlig
a733d536ad Added env check NDPID_STARTED_BY_SYSTEMD to prevent logging to stderr in such a case
* removed `nDPId` shutdown on poll/epoll error
 * fixed `chmod_chown()` rv check

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-11-07 11:32:42 +01:00
Toni Uhlig
9fc35e7a7e Add NUL to risks; not needed, but better safe than sorry
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-11-05 14:20:30 +01:00
Toni Uhlig
ce9752af16 Fixed some SonarCloud issues
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-11-05 13:43:23 +01:00
Toni Uhlig
f7933d0fdb Slightly unified C example's logging
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-11-05 12:48:36 +01:00
Toni Uhlig
d5a84ce630 Temporarily disabled some OpenWrt builds
* See: https://github.com/openwrt/gh-action-sdk/issues/43

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-11-05 10:24:17 +01:00
Toni Uhlig
ce5f448d3b Switched OpenWrt GitHub Actions SDK to main branch
* fixed some SonarCloud complaints
 * added more systemd CI tests
 * fixed debian package scripts to obey remove/purge
 * changed `chmod_chown()` error handling

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-11-02 18:36:54 +01:00
Toni Uhlig
2b48eb0514 Added vlan_id dissection of the most outer (first) 802.1Q header. Fixes #50
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-11-02 15:48:45 +01:00
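Dissecting the outermost 802.1Q header as in the commit above means reading the TPID after the Ethernet addresses and masking the VLAN ID out of the TCI. A small stdlib sketch of that extraction, assuming a plain Ethernet II frame; it is illustrative, not nDPId's actual code.

```python
import struct

ETH_P_8021Q = 0x8100  # TPID for 802.1Q

def outer_vlan_id(eth_frame: bytes):
    """Return the VLAN ID of the outermost 802.1Q tag, or None if untagged.
    The TCI's low 12 bits carry the VLAN ID (upper 4 bits are PCP/DEI)."""
    if len(eth_frame) < 18:  # dst MAC + src MAC + TPID + TCI + inner EtherType
        return None
    (ethertype,) = struct.unpack_from("!H", eth_frame, 12)
    if ethertype != ETH_P_8021Q:
        return None
    (tci,) = struct.unpack_from("!H", eth_frame, 14)
    return tci & 0x0FFF

# Ethernet frame with a single 802.1Q tag: VLAN 100, PCP 3, inner IPv4
tagged = (b"\x00" * 12
          + struct.pack("!HH", ETH_P_8021Q, (3 << 13) | 100)
          + b"\x08\x00" + b"\x00" * 4)
```

With QinQ (802.1ad) there can be further tags behind this one; per the commit, only the outermost tag's vlan_id is reported.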
Toni Uhlig
ddc96ba614 Adjusted SonarCloud config and CI
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-11-02 12:05:07 +01:00
Toni Uhlig
7b2cd268bf Updated JSON schema files and a test to make use of the UUID feature.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-10-29 15:25:19 +01:00
Toni Uhlig
817559ffa7 Set an optional UUID used within all events (similar to the "alias").
* added default values to usage
 * UUID can be either read from a file or used directly from option value
 * adjusted example config file

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-10-29 12:12:02 +01:00
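The UUID option described above accepts either a literal UUID or a path to a file containing one. A small sketch of that resolution logic, stdlib-only; the function name and behavior details are assumptions for illustration, not nDPId's implementation.

```python
import os
import uuid

def resolve_uuid(value: str) -> str:
    """Treat `value` as a file path if it exists, otherwise as a UUID
    literal; validate and normalize it either way (hypothetical helper)."""
    if os.path.isfile(value):
        with open(value) as f:
            value = f.read().strip()
    # uuid.UUID() raises ValueError on malformed input and
    # str() normalizes to lowercase hyphenated form.
    return str(uuid.UUID(value))
```

The normalized UUID would then be serialized into every event, analogous to the "alias" field.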
Toni Uhlig
25944e2089 Fixed some SonarCloud issues
* fixed dependabot werkzeug (3.0.3 to 3.0.6)

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-10-26 11:35:30 +02:00
Toni Uhlig
5423797267 Added nDPId ndpi_process_packet() LLVM fuzzer
* replaced dumb `dumb_fuzzer.sh`
 * fixed nDPId NULL pointer deref found by fuzzer
 * nDPI: `--enable-debug-build` and `--enable-debug-messages` for non release builds
 * nDPI: do not force `log.level` to `3` anymore, use config value instead

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-10-24 15:45:04 +02:00
Toni Uhlig
7e126c205e Added additional (libnDPI) config files for test runs.
* redirect `run_tests.sh` stderr to filename which prepends config name

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-10-23 14:27:07 +02:00
Toni Uhlig
7d58703bdb Removed ENABLE_MEMORY_STATUS CMake option as it's now enabled for **all** builds
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-10-22 13:12:59 +02:00
Toni Uhlig
ae36f8df6c Added libnDPI global context init/deinit used for cache mgmt.
* support for adding *.ndpiconf for nDPI config tests
 * all other configs should have the suffix *.conf
 * fixed nDPI malloc/free wrapper setup (it was previously set too late)

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-10-21 16:17:01 +02:00
Toni Uhlig
8c5ee1f7bb Added config testing script.
* nDPId-test may now make use of an optional config file as cmd arg

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-10-21 16:10:09 +02:00
Toni Uhlig
9969f955dc Updated READMEs, TODOs and ChangeLog.
* 1.7-release

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-10-21 15:44:42 +02:00
Toni Uhlig
2c771c54b0 Merge commit 'fb1dcc71de39e6dd5c11b8bc4288ec5e618fa946' 2024-10-17 12:16:40 +02:00
Toni Uhlig
fb1dcc71de Squashed 'dependencies/jsmn/' changes from 1aa2e8f8..25647e69
25647e69 Fix position of a comment in string parsing

git-subtree-dir: dependencies/jsmn
git-subtree-split: 25647e692c7906b96ffd2b05ca54c097948e879c
2024-10-17 12:16:40 +02:00
Toni Uhlig
071a9bcb91 Merge commit '9a14454d3c5589373253571cee7428c593adefd9' 2024-10-17 12:16:20 +02:00
Toni Uhlig
9a14454d3c Squashed 'dependencies/uthash/' changes from bf152630..f69112c0
f69112c0 utarray: Fix typo in docs
619fe95c Fix MSVC warning C4127 in HASH_BLOOM_TEST (#261)
eeba1961 uthash: Improve the docs for HASH_ADD_INORDER
ca98384c HASH_DEL should be able to delete a const-qualified node
095425f7 utlist: Add one more assertion in DL_DELETE2
399bf74b utarray: Stop making `oom` a synonym for `utarray_oom`
85bf75ab utarray_str_cpy: Remove strdup; utarray_oom() if strdup fails.
1a53f304 GitHub CI: Also test building the docs (#248)
4d01591e The MCST Elbrus C Compiler supports __typeof. (#247)
1e0baf06 CI: Add GitHub Actions CI
8844b529 Update test57.c per a suggestion by @mark-summerfield
44a66fe8 Update http:// URLs to https://, and copyright dates to 2022. NFC.

git-subtree-dir: dependencies/uthash
git-subtree-split: f69112c04f1b6e059b8071cb391a1fcc83791a00
2024-10-17 12:16:20 +02:00
Toni Uhlig
f9d9849300 Updated Grafana dashboard to make correct use of gauge max values.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-10-17 12:14:28 +02:00
Toni
efed6f196e Read and parse configuration files. Fixes #41. (#42)
Read and parse configuration files. Fixes #41.

 * supports nDPId / nDPIsrvd via command line parameter `-f`
 * nDPId: read general/tuning and libnDPI settings
 * support for settings risk domains libnDPI option via config file or via `-R` (Fixes #45, thanks to @UnveilTech)
 * added some documentation in the config file
 * adjusted Systemd and Debian packaging to make use of config files

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-10-16 14:13:55 +02:00
Naix
3e2ce661f0 Added Filebeat Configuration (#44)
Added Filebeat Configuration

Co-authored-by: Toni <matzeton@googlemail.com>
2024-10-06 11:09:54 +02:00
Toni Uhlig
76e1ea0598 Updated Grafana dashboard.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-10-02 19:29:14 +02:00
Toni Uhlig
0e792ba301 Generate global stats with microseconds precision.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-10-01 11:58:39 +02:00
Toni Uhlig
9ef17b7bd8 Added some static assertion based sanity checks.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-09-11 13:28:20 +02:00
Toni Uhlig
1c9aa85485 Save hostname after detection finished for later use within analyse/end/idle flow events. Fixes #39.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-09-11 13:01:23 +02:00
Toni Uhlig
aef9d629f0 bump libnDPI to 92507c014626bc542f2ab11c729742802c0bc345
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-09-09 09:29:08 +02:00
Toni Uhlig
f97b3880b6 CI: Set nDPI minimum required version to 4.10
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-09-03 13:58:44 +02:00
Toni Uhlig
c55429c131 Updated flow event schema with risk names/severities.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-09-03 13:56:15 +02:00
Toni Uhlig
7bebd7b2c7 Fix OpenWrt package build.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-09-02 17:51:38 +02:00
Toni Uhlig
335708d3e3 Extend flow JSON schema with more properties from nDPI JSON serializer.
* unfortunately, JSON schema definitions could not be used to make this easier to read and maintain

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-08-22 17:36:59 +02:00
Toni Uhlig
2a0161c1bb Fix CI.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-08-20 12:17:25 +02:00
Toni Uhlig
adb8fe96f5 CMake: add coverage-clean target and fix coverage dependency issue.
* improve/fix README

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-08-20 11:49:38 +02:00
Toni Uhlig
4efe7e43a2 Improved installation instructions. Fixes #40.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-08-19 18:39:35 +02:00
Toni
5e4005162b Add PF_RING support. (#38) 2024-08-19 18:33:18 +02:00
Toni Uhlig
a230eaf061 Improved Keras Autoencoder hyper parameter.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-08-16 13:20:35 +02:00
Toni Uhlig
68e0c1f280 Fix SonarCloud complaint.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-08-16 13:19:13 +02:00
Toni Uhlig
8271f15e25 Fixed build error due to missing nDPI includes.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-08-16 13:14:21 +02:00
Toni Uhlig
f6f3a4daab Extended analyse application to write global stats to a CSV.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-08-16 12:33:46 +02:00
Toni Uhlig
762e6d36bf Some small fixes.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-08-09 11:09:39 +02:00
Toni Uhlig
930aaf9276 Added global (heap) memory stats for daemon status events.
* added new CMake option `ENABLE_MEMORY_STATUS` to restore the old behavior
   (and increase performance)
 * split `ENABLE_MEMORY_PROFILING` into `ENABLE_MEMORY_STATUS` and `ENABLE_MEMORY_PROFILING`

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-06-19 14:25:42 +02:00
Toni Uhlig
165b18c829 Fixed OpenWrt nDPId-testing build.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-06-12 15:07:17 +02:00
dependabot[bot]
1fbfd46fe8 Bump werkzeug from 3.0.1 to 3.0.3 in /examples/py-flow-dashboard (#37)
Bumps [werkzeug](https://github.com/pallets/werkzeug) from 3.0.1 to 3.0.3.
- [Release notes](https://github.com/pallets/werkzeug/releases)
- [Changelog](https://github.com/pallets/werkzeug/blob/main/CHANGES.rst)
- [Commits](https://github.com/pallets/werkzeug/compare/3.0.1...3.0.3)

---
updated-dependencies:
- dependency-name: werkzeug
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-29 14:03:11 +02:00
Toni Uhlig
5290f76b5f flow-info.py: Set min risk severity required to print a risk.
* ReadMe update

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-05-08 00:25:31 +02:00
Toni Uhlig
f4d0f80711 CI: don't run systemd integration test on mac
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-05-07 09:42:30 +02:00
Toni Uhlig
187ebeb4df CI: add DYLD_LIBRARY_PATH to env (mac/unix)
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-05-07 09:27:46 +02:00
Toni Uhlig
71d2fcc491 CMake: set MacOS RPATH
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-05-06 09:57:00 +02:00
Toni Uhlig
86aaf0e808 Workaround for fixing GitHub runners on macOS
* See: https://github.com/ntop/nDPI/pull/2411

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-05-06 09:41:09 +02:00
Toni Uhlig
e822bb6145 Fix OpenWrt builds.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-04-17 09:25:37 +02:00
Toni Uhlig
4c91038274 Removed unmaintained C JSON dumper.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-04-17 01:47:31 +02:00
Toni Uhlig
53126a0af9 bump libnDPI to 142c8f5afb90629762920db6703831826513e00b
* fixed `git format` hash length

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-04-10 16:06:53 +02:00
Toni Uhlig
15608bb571 bump libnDPI to 09bb383437c11ef55e926ed15cdf986c0d426827
* fixed "unused function" warning in `ndpi_bitmap64_fuse.c`

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-04-04 21:13:33 +02:00
Toni Uhlig
e93a4c9a81 bump libnDPI to df29e12f5efbe84306c1ee7c011a197caec6de50
* fixed "unused function" warning in `roaring.h`

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-04-04 19:33:34 +02:00
Toni Uhlig
b46f15de03 bump libnDPI to 6e61368cd609899048560405ad792705fffb1f1a
* fixed "unused function" warning in `gcrypt_light.c`

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-04-04 14:08:34 +02:00
Toni Uhlig
c7eace426c bump libnDPI to 9185c2ccc402d3368fc28ac90ab281b4f951719e
* incorporated API changes from 41eef9246c6a3055e3876e3dd7aeaadecb4b76c0

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-04-04 11:49:48 +02:00
Toni Uhlig
33560d64d2 Fix example build error if memory profiling enabled.
* CI: build against libnDPI with `-DNDPI_NO_PKGCONFIG=ON` and `-DSTATIC_LIBNDPI_INSTALLDIR=/usr`
 * CI: `ENABLE_DBUS=ON` for most builds

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-03-21 07:26:22 +01:00
Toni Uhlig
675640b0e6 Fixed libpcre2 build.
* CI: build against libpcre2 / libmaxminddb

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-03-20 14:55:09 +01:00
Toni Uhlig
5e5f268b3c Build against nDPI dev branch tarball if there is a new release required to build nDPId.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-03-16 18:45:11 +01:00
Toni Uhlig
7ef7667da3 Fix random sanitizer crashes caused by high-entropy ASLR on Ubuntu Github Runner.
* removed arch condition (c&p mistake)

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-03-15 06:57:38 +01:00
Toni Uhlig
d43a3d1436 Fix random sanitizer crashes caused by high-entropy ASLR on Ubuntu Github Runner.
* See: https://github.com/actions/runner-images/issues/9491

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-03-14 18:26:31 +01:00
Toni Uhlig
b6e4162116 Extend CI pipeline build and test.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-03-07 17:46:31 +01:00
Toni Uhlig
717d66b0e7 Fixed missing statistics updating for unknown mapping keys in collectd/influxd.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-02-20 23:16:31 +01:00
Toni Uhlig
791b27219d CI maintenance
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-02-13 11:26:58 +01:00
Toni Uhlig
a487e53015 Added missing influxd test results.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-02-13 10:50:51 +01:00
Toni Uhlig
aeb6e6f536 Enable CURL in the CI.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-02-13 10:44:45 +01:00
Toni Uhlig
8af37b3770 Fix some SonarCloud complaints.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-02-13 07:21:47 +01:00
Toni Uhlig
8949ba39e6 Added test mode for influx push daemon.
* required for regression testing
 * added new confidence value (match by custom rule)
 * updated / tweaked grafana exported dashboard

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-02-08 01:01:35 +01:00
Toni Uhlig
ea968180a2 Read IPv6 address and netmask using getifaddrs() instead of reading /proc/net/if_inet6.
* fixes a compatibility issue with Mac OSX

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-02-07 14:25:14 +01:00
Toni Uhlig
556025b34d Removed API version macro check as it's inconsistent on different platforms.
* set min required nDPI version to 4.9.0

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-02-06 10:49:47 +01:00
Toni Uhlig
feb2583ef6 bump libnDPI to 4543385d107fcc5a7e8632e35d9a60bcc40cb4f4
* incorporated API changes from nDPI

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-02-06 10:34:52 +01:00
Toni Uhlig
7368f222db Fixed broken "not-detected" event/packet capture in captured example.
* aligned it with influxd example

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-01-29 18:11:34 +01:00
Toni Uhlig
a007a907da Fixed invalid flow risk aggregation in collectd/influxd examples.
* CI: build single nDPId executable with `-Wall -Wextra -std=gnu99`
 * fixed missing error events in influxd example
 * added additional test cases for collectd
 * extended grafana dashboard

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-01-06 19:32:47 +01:00
Toni Uhlig
876aef98e1 Improved collectd example.
 * similar behavior to influxd example
 * gauges and counters are now handled properly

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2024-01-05 11:26:53 +01:00
Toni Uhlig
88cf57a16f Added Grafana example dashboard image.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2023-12-20 19:19:08 +01:00
Toni Uhlig
7e81f5b1b7 Added Grafana nDPId dashboard.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2023-12-16 15:20:27 +01:00
Toni Uhlig
8acf2d7273 Improved InfluxDB push daemon.
* added proper gauge handling that enables pushing data without missing
   anything, e.g. short flows whose lifetime falls between two InfluxDB intervals

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2023-12-15 09:16:28 +01:00
Toni Uhlig
71d933b0cd Fixed an event issue.
* a "detection-update" event was thrown even if nothing changed
 * in some cases "not-detected" events were spammed if detection not completed
 * tell `libnDPI` how many packets per flow we want to dissect
 * `nDPId-test` validates total active flows in the right way

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2023-12-14 15:45:08 +01:00
Toni Uhlig
fbe07fd882 Improved InfluxDB push daemon.
* fixed severity parsing and gauge handling
 * added flow state gauges
 * flow related gauges are only increased/decreased if a "new" event was seen (except for bytes xfer)

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2023-12-14 15:38:38 +01:00
Toni Uhlig
5432b06665 Improved InfluxDB push daemon.
* fixed missing flow active gauge
 * fixed invalid flow risk severity gauges
 * fixed missing flow risk gauges

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2023-12-11 23:14:00 +01:00
Toni Uhlig
142a435bf6 Add InfluxDB push daemon.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2023-12-07 10:00:25 +01:00
Toni Uhlig
f5c5bc88a7 Replaced ambiguous naming of "JSON string" to more accurate "JSON message". #2
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2023-12-04 13:13:05 +01:00
Toni Uhlig
53d8a28582 Replaced ambiguous naming of "JSON string" to more accurate "JSON message".
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2023-12-04 13:01:27 +01:00
Toni Uhlig
37f3770e3e Improved zlib compression ratio.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2023-12-01 06:43:39 +01:00
Toni Uhlig
7368d34d8d c-collectd: Fixed missing escape char.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2023-12-01 06:43:39 +01:00
Toni Uhlig
ff77bab398 Warn about unused return values that are quite important.
 * CI: ArchLinux build should now enforce `-Werror`
 * CI: Increased OpenWrt build verbosity

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2023-12-01 06:43:39 +01:00
Toni Uhlig
d274a06176 flow-info.py: Do not print any information if a flow is "empty" meaning no L4 payload seen so far.
* added JsonDecodeError to provide more information if builtin JSON decoder fails

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2023-12-01 06:43:39 +01:00
Paul Donald
a5dcc17396 Update README.md (#32)
Sp/gr. 

Co-authored-by: Toni <matzeton@googlemail.com>
2023-11-27 09:08:25 +01:00
3900 changed files with 237463 additions and 82039 deletions


@@ -15,8 +15,10 @@ on:
jobs:
build:
runs-on: ubuntu-latest
env:
CMAKE_C_FLAGS: -Werror
steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v4
with:
submodules: false
fetch-depth: 1
@@ -30,7 +32,7 @@ jobs:
target: 'pkgbuild'
pkgname: 'packages/ndpid-testing'
- name: Upload PKG
-        uses: actions/upload-artifact@v3
+        uses: actions/upload-artifact@v4
with:
name: nDPId-archlinux-packages
path: packages/ndpid-testing/*.pkg.tar.zst


@@ -17,7 +17,7 @@ jobs:
runs-on: ubuntu-latest
container: 'centos:8'
steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v4
with:
submodules: false
fetch-depth: 1
@@ -45,12 +45,12 @@ jobs:
run: |
cd ./build && cpack -G RPM && cd ..
- name: Upload RPM
-        uses: actions/upload-artifact@v3
+        uses: actions/upload-artifact@v4
with:
name: nDPId-centos-packages
path: build/*.rpm
- name: Upload on Failure
-        uses: actions/upload-artifact@v3
+        uses: actions/upload-artifact@v4
if: failure()
with:
name: autoconf-config-log

.github/workflows/build-freebsd.yml (new file)

@@ -0,0 +1,39 @@
name: FreeBSD Build
on:
schedule:
# At the end of every day
- cron: '0 0 * * *'
push:
branches:
- main
- tmp
pull_request:
branches:
- main
types: [opened, synchronize, reopened]
release:
types: [created]
jobs:
test:
runs-on: ubuntu-latest
name: Build and Test
steps:
- uses: actions/checkout@v4
- name: Test in FreeBSD
id: test
uses: vmactions/freebsd-vm@main
with:
usesh: true
prepare: |
pkg install -y bash autoconf automake cmake gmake libtool gettext pkgconf gcc \
git wget unzip flock \
json-c flex bison libpcap curl openssl dbus
run: |
echo "Working Directory: $(pwd)"
echo "User.............: $(whoami)"
echo "FreeBSD Version..: $(freebsd-version)"
# TODO: Make examples I/O event agnostic i.e. use nio
cmake -S . -B build -DBUILD_NDPI=ON -DBUILD_EXAMPLES=OFF #-DENABLE_CURL=ON -DENABLE_DBUS=ON
cmake --build build


@@ -1,6 +1,9 @@
name: OpenWrt Build
on:
schedule:
# At the end of every day
- cron: '0 0 * * *'
push:
branches:
- main
@@ -23,29 +26,14 @@ jobs:
- arch: arm_cortex-a9_vfpv3-d16
target: mvebu-cortexa9
- arch: mips_24kc
target: ath79-generic
- arch: mipsel_24kc
target: mt7621
- arch: powerpc_464fp
target: apm821xx-nand
- arch: aarch64_cortex-a53
target: mvebu-cortexa53
- arch: arm_cortex-a15_neon-vfpv4
target: armvirt-32
- arch: i386_pentium-mmx
target: x86-geode
- arch: x86_64
target: x86-64
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
with:
submodules: false
fetch-depth: 1
@@ -57,9 +45,10 @@ jobs:
FEED_DIR: ${{ github.workspace }}/packages/openwrt
FEEDNAME: ndpid_openwrt_packages_ci
PACKAGES: nDPId-testing
V: s
- name: Store packages
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: nDPId-${{ matrix.arch}}-${{ matrix.target }}
path: bin/packages/${{ matrix.arch }}/ndpid_openwrt_packages_ci/*.ipk

.github/workflows/build-rpm.yml vendored Normal file

@@ -0,0 +1,45 @@
name: RPM Build
on:
schedule:
# At the end of every day
- cron: '0 0 * * *'
push:
branches:
- main
- tmp
pull_request:
branches:
- main
types: [opened, synchronize, reopened]
release:
types: [created]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install Ubuntu Prerequisites
run: |
sudo apt-get update
sudo apt-get install fakeroot alien autoconf automake cmake libtool pkg-config gettext libjson-c-dev flex bison libpcap-dev zlib1g-dev libcurl4-openssl-dev libdbus-1-dev
- name: Build RPM package
run: |
cmake -S . -B build-rpm -DBUILD_EXAMPLES=ON -DBUILD_NDPI=ON -DCMAKE_BUILD_TYPE=Release
cmake --build build-rpm --parallel
cd build-rpm
cpack -G RPM
cd ..
- name: Convert/Install RPM package
run: |
fakeroot alien --scripts --to-deb --verbose ./build-rpm/nDPId-*.rpm
sudo dpkg -i ./ndpid_*.deb
- name: Upload RPM
uses: actions/upload-artifact@v4
with:
name: nDPId-rpm-packages
path: build-rpm/*.rpm


@@ -1,6 +1,9 @@
name: Build
on:
schedule:
# At the end of every day
- cron: '0 0 * * *'
push:
branches:
- main
@@ -21,6 +24,7 @@ jobs:
CMAKE_C_FLAGS: -Werror ${{ matrix.cflags }}
CMAKE_C_EXE_LINKER_FLAGS: ${{ matrix.ldflags }}
CMAKE_MODULE_LINKER_FLAGS: ${{ matrix.ldflags }}
DYLD_LIBRARY_PATH: /usr/local/lib
strategy:
fail-fast: true
matrix:
@@ -29,105 +33,123 @@ jobs:
os: "ubuntu-latest"
ndpi_build: "-DBUILD_NDPI=ON"
ndpid_examples: "-DBUILD_EXAMPLES=ON"
ndpid_rust_examples: "-DBUILD_RUST_EXAMPLES=ON"
ndpid_gcrypt: "-DNDPI_WITH_GCRYPT=OFF"
ndpid_zlib: "-DENABLE_ZLIB=ON"
ndpid_extras: "-DENABLE_CRYPTO=ON"
sanitizer: "-DENABLE_SANITIZER=OFF -DENABLE_SANITIZER_THREAD=OFF"
coverage: "-DENABLE_COVERAGE=OFF"
poll: "-DFORCE_POLL=OFF"
upload: true
upload_suffix: ""
ndpi_min_version: "4.8"
ndpi_min_version: "5.0"
- compiler: "gcc"
os: "ubuntu-latest"
ndpi_build: "-DBUILD_NDPI=ON"
ndpid_examples: "-DBUILD_EXAMPLES=ON"
ndpid_rust_examples: ""
ndpid_gcrypt: "-DNDPI_WITH_GCRYPT=ON"
ndpid_zlib: "-DENABLE_ZLIB=ON"
ndpid_extras: "-DENABLE_CRYPTO=ON -DNDPI_WITH_MAXMINDDB=ON -DNDPI_WITH_PCRE=ON -DENABLE_MEMORY_PROFILING=ON"
sanitizer: "-DENABLE_SANITIZER=OFF -DENABLE_SANITIZER_THREAD=OFF"
coverage: "-DENABLE_COVERAGE=OFF"
poll: "-DFORCE_POLL=OFF"
upload: true
upload_suffix: "-host-gcrypt"
ndpi_min_version: "4.8"
ndpi_min_version: "5.0"
- compiler: "clang"
os: "ubuntu-latest"
ndpi_build: "-DBUILD_NDPI=ON"
ndpid_examples: "-DBUILD_EXAMPLES=ON"
ndpid_rust_examples: ""
ndpid_gcrypt: "-DNDPI_WITH_GCRYPT=OFF"
ndpid_zlib: "-DENABLE_ZLIB=OFF"
ndpid_extras: ""
sanitizer: "-DENABLE_SANITIZER=OFF -DENABLE_SANITIZER_THREAD=OFF"
coverage: "-DENABLE_COVERAGE=OFF"
poll: "-DFORCE_POLL=OFF"
upload: true
upload_suffix: "-no-zlib"
ndpi_min_version: "4.8"
ndpi_min_version: "5.0"
- compiler: "gcc"
os: "ubuntu-latest"
ndpi_build: "-DBUILD_NDPI=ON"
ndpid_examples: "-DBUILD_EXAMPLES=ON"
ndpid_rust_examples: ""
ndpid_gcrypt: "-DNDPI_WITH_GCRYPT=OFF"
ndpid_zlib: "-DENABLE_ZLIB=ON"
ndpid_extras: ""
sanitizer: "-DENABLE_SANITIZER=ON"
coverage: "-DENABLE_COVERAGE=ON"
poll: "-DFORCE_POLL=ON"
upload: false
ndpi_min_version: "4.8"
ndpi_min_version: "5.0"
- compiler: "clang"
os: "ubuntu-latest"
ndpi_build: "-DBUILD_NDPI=ON"
ndpid_examples: "-DBUILD_EXAMPLES=ON"
ndpid_rust_examples: ""
ndpid_gcrypt: "-DNDPI_WITH_GCRYPT=OFF"
ndpid_zlib: "-DENABLE_ZLIB=ON"
sanitizer: "-DENABLE_SANITIZER=ON"
ndpid_extras: "-DENABLE_CRYPTO=ON"
sanitizer: "-DENABLE_SANITIZER=ON"
coverage: "-DENABLE_COVERAGE=OFF"
poll: "-DFORCE_POLL=OFF"
upload: false
ndpi_min_version: "4.8"
ndpi_min_version: "5.0"
- compiler: "clang-12"
os: "ubuntu-latest"
os: "ubuntu-22.04"
ndpi_build: "-DBUILD_NDPI=ON"
ndpid_examples: "-DBUILD_EXAMPLES=ON"
ndpid_rust_examples: ""
ndpid_gcrypt: "-DNDPI_WITH_GCRYPT=OFF"
ndpid_zlib: "-DENABLE_ZLIB=ON"
ndpid_extras: ""
sanitizer: "-DENABLE_SANITIZER_THREAD=ON"
coverage: "-DENABLE_COVERAGE=OFF"
poll:
upload: false
ndpi_min_version: "4.8"
ndpi_min_version: "5.0"
- compiler: "gcc-10"
os: "ubuntu-20.04"
os: "ubuntu-22.04"
ndpi_build: "-DBUILD_NDPI=ON"
ndpid_examples: "-DBUILD_EXAMPLES=ON"
ndpid_rust_examples: ""
ndpid_gcrypt: "-DNDPI_WITH_GCRYPT=OFF"
ndpid_zlib: "-DENABLE_ZLIB=OFF"
ndpid_extras: ""
sanitizer: "-DENABLE_SANITIZER=ON"
coverage: "-DENABLE_COVERAGE=OFF"
poll: "-DFORCE_POLL=ON"
upload: false
ndpi_min_version: "4.8"
- compiler: "gcc-7"
os: "ubuntu-20.04"
ndpi_min_version: "5.0"
- compiler: "gcc-9"
os: "ubuntu-22.04"
ndpi_build: "-DBUILD_NDPI=ON"
ndpid_examples: "-DBUILD_EXAMPLES=ON"
ndpid_rust_examples: ""
ndpid_gcrypt: "-DNDPI_WITH_GCRYPT=OFF"
ndpid_zlib: "-DENABLE_ZLIB=ON"
ndpid_extras: ""
sanitizer: "-DENABLE_SANITIZER=ON"
coverage: "-DENABLE_COVERAGE=OFF"
poll: "-DFORCE_POLL=OFF"
upload: false
ndpi_min_version: "4.8"
ndpi_min_version: "5.0"
- compiler: "cc"
os: "macOS-latest"
os: "macOS-13"
ndpi_build: "-DBUILD_NDPI=OFF"
ndpid_examples: "-DBUILD_EXAMPLES=OFF"
ndpid_rust_examples: ""
ndpid_gcrypt: "-DNDPI_WITH_GCRYPT=OFF"
ndpid_zlib: "-DENABLE_ZLIB=ON"
ndpid_extras: ""
examples: "-DBUILD_EXAMPLES=OFF"
sanitizer: "-DENABLE_SANITIZER=OFF"
coverage: "-DENABLE_COVERAGE=OFF"
poll:
upload: false
ndpi_min_version: "4.8"
ndpi_min_version: "5.0"
steps:
- name: Print Matrix
@@ -141,6 +163,7 @@ jobs:
echo '| nDPI min.: ${{ matrix.ndpi_min_version }}'
echo '| GCRYPT...: ${{ matrix.ndpid_gcrypt }}'
echo '| ZLIB.....: ${{ matrix.ndpid_zlib }}'
echo '| Extras...: ${{ matrix.ndpid_extras }}'
echo '| ForcePoll: ${{ matrix.poll }}'
echo '|---------------------------------------'
echo '| SANITIZER: ${{ matrix.sanitizer }}'
@@ -148,14 +171,14 @@ jobs:
echo '|---------------------------------------'
echo '| UPLOAD...: ${{ matrix.upload }}'
echo '----------------------------------------'
- uses: actions/checkout@v3
- uses: actions/checkout@v4
with:
submodules: false
fetch-depth: 1
- name: Install MacOS Prerequisites
if: startsWith(matrix.os, 'macOS')
run: |
brew install coreutils flock automake make unzip
brew install coreutils automake make unzip
wget 'https://www.tcpdump.org/release/libpcap-1.10.4.tar.gz'
tar -xzvf libpcap-1.10.4.tar.gz
cd libpcap-1.10.4
@@ -164,21 +187,40 @@ jobs:
wget 'https://github.com/ntop/nDPI/archive/refs/heads/dev.zip' -O libndpi-dev.zip
unzip libndpi-dev.zip
cd nDPI-dev
./autogen.sh --prefix=/usr/local --with-only-libndpi && make install
./autogen.sh
./configure --prefix=/usr/local --with-only-libndpi && make install
- name: Fix kernel mmap rnd bits on Ubuntu
if: startsWith(matrix.os, 'ubuntu')
run: |
# Workaround for compatibility between latest kernel and sanitizer
# See https://github.com/actions/runner-images/issues/9491
sudo sysctl vm.mmap_rnd_bits=28
- name: Install Ubuntu Prerequisites
if: startsWith(matrix.os, 'ubuntu')
run: |
sudo apt-get update
sudo apt-get install autoconf automake cmake libtool pkg-config gettext libjson-c-dev flex bison libpcap-dev zlib1g-dev
sudo apt-get install autoconf automake cmake libtool pkg-config gettext libjson-c-dev flex bison libpcap-dev zlib1g-dev libcurl4-openssl-dev libdbus-1-dev
sudo apt-get install ${{ matrix.compiler }} lcov iproute2
- name: Install Ubuntu Prerequisites (Rust/Cargo)
if: startsWith(matrix.os, 'ubuntu') && startsWith(matrix.ndpid_rust_examples, '-DBUILD_RUST_EXAMPLES=ON')
run: |
sudo apt-get install cargo
- name: Install Ubuntu Prerequisites (libgcrypt)
if: startsWith(matrix.os, 'ubuntu') && startsWith(matrix.ndpid_gcrypt, '-DNDPI_WITH_GCRYPT=ON')
run: |
sudo apt-get install libgcrypt20-dev
- name: Install Ubuntu Prerequisities (zlib)
- name: Install Ubuntu Prerequisites (zlib)
if: startsWith(matrix.os, 'ubuntu') && startsWith(matrix.ndpid_zlib, '-DENABLE_ZLIB=ON')
run: |
sudo apt-get install zlib1g-dev
- name: Install Ubuntu Prerequisites (libmaxminddb, libpcre2)
if: startsWith(matrix.ndpid_extras, '-D')
run: |
sudo apt-get install libmaxminddb-dev libpcre2-dev
- name: Install Ubuntu Prerequisites (libnl-genl-3-dev)
if: startsWith(matrix.ndpi_build, '-DBUILD_NDPI=ON') && startsWith(matrix.coverage, '-DENABLE_COVERAGE=OFF') && startsWith(matrix.sanitizer, '-DENABLE_SANITIZER=ON') && startsWith(matrix.ndpid_gcrypt, '-DNDPI_WITH_GCRYPT=OFF') && startsWith(matrix.ndpid_zlib, '-DENABLE_ZLIB=ON')
run: |
sudo apt-get install libnl-genl-3-dev
- name: Checking Network Buffer Size
run: |
C_VAL=$(cat config.h | sed -n 's/^#define\s\+NETWORK_BUFFER_MAX_SIZE\s\+\([0-9]\+\).*$/\1/gp')
@@ -186,17 +228,37 @@ jobs:
test ${C_VAL} = ${PY_VAL}
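The buffer-size check above extracts `NETWORK_BUFFER_MAX_SIZE` from `config.h` with sed and asserts it matches the Python counterpart. A minimal self-contained sketch of that extraction (the header content and the value `33792` are made up for illustration):

```shell
#!/bin/sh
# Throwaway header mimicking config.h (value is illustrative, not nDPId's real one).
TMP_HDR="$(mktemp)"
cat > "$TMP_HDR" <<'EOF'
#define NETWORK_BUFFER_MAX_SIZE 33792 /* some derived constant */
EOF

# Same sed idea as the workflow step: capture only the numeric value.
C_VAL=$(sed -n 's/^#define\s\+NETWORK_BUFFER_MAX_SIZE\s\+\([0-9]\+\).*$/\1/gp' "$TMP_HDR")
echo "C_VAL=${C_VAL}"

# The CI step fails when the C constant and the Python constant diverge.
PY_VAL=33792
test "${C_VAL}" = "${PY_VAL}" && echo "buffer sizes match"
rm -f "$TMP_HDR"
```

Running it prints the extracted value and the match confirmation; in CI, a mismatch makes the `test` fail and the job abort.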
- name: Configure nDPId
run: |
cmake -S . -B build -DCMAKE_C_COMPILER="$CMAKE_C_COMPILER" -DCMAKE_C_FLAGS="$CMAKE_C_FLAGS" -DCMAKE_MODULE_LINKER_FLAGS="$CMAKE_MODULE_LINKER_FLAGS" -DCMAKE_C_EXE_LINKER_FLAGS="$CMAKE_C_EXE_LINKER_FLAGS" \
-DENABLE_SYSTEMD=ON \
cmake -S . -B build -Werror=dev -Werror=deprecated -DCMAKE_C_COMPILER="$CMAKE_C_COMPILER" -DCMAKE_C_FLAGS="$CMAKE_C_FLAGS" -DCMAKE_MODULE_LINKER_FLAGS="$CMAKE_MODULE_LINKER_FLAGS" -DCMAKE_C_EXE_LINKER_FLAGS="$CMAKE_C_EXE_LINKER_FLAGS" \
-DENABLE_DBUS=ON -DENABLE_CURL=ON -DENABLE_SYSTEMD=ON \
${{ matrix.poll }} ${{ matrix.coverage }} ${{ matrix.sanitizer }} ${{ matrix.ndpi_build }} \
${{ matrix.ndpid_examples }} ${{ matrix.ndpid_zlib }} ${{ matrix.ndpid_gcrypt }}
${{ matrix.ndpid_examples }} ${{ matrix.ndpid_rust_examples }} ${{ matrix.ndpid_zlib }} ${{ matrix.ndpid_gcrypt }} ${{ matrix.ndpid_extras }}
- name: Build nDPId
run: |
cmake --build build --verbose
- name: Build single nDPId executable (invoke CC directly)
if: (endsWith(matrix.compiler, 'gcc') || endsWith(matrix.compiler, 'clang')) && startsWith(matrix.coverage, '-DENABLE_COVERAGE=OFF') && startsWith(matrix.sanitizer, '-DENABLE_SANITIZER=ON') && startsWith(matrix.ndpid_gcrypt, '-DNDPI_WITH_GCRYPT=OFF') && startsWith(matrix.ndpid_zlib, '-DENABLE_ZLIB=ON')
- name: Build single nDPId/nDPIsrvd executables (invoke CC directly - dynamic nDPI lib)
if: startsWith(matrix.ndpi_build, '-DBUILD_NDPI=OFF') && startsWith(matrix.coverage, '-DENABLE_COVERAGE=OFF') && startsWith(matrix.ndpid_gcrypt, '-DNDPI_WITH_GCRYPT=OFF')
run: |
cc -fsanitize=address -fsanitize=undefined -fno-sanitize=alignment -fsanitize=enum -fsanitize=leak nDPId.c nio.c utils.c -I./build/libnDPI/include/ndpi -I. -I./dependencies -I./dependencies/jsmn -I./dependencies/uthash/include -o /tmp/a.out -lpcap ./build/libnDPI/lib/libndpi.a -pthread -lm -lz
pkg-config --cflags --libs libndpi
cc -Wall -Wextra -std=gnu99 \
${{ matrix.poll }} -DENABLE_MEMORY_PROFILING=1 \
nDPId.c nio.c utils.c \
$(pkg-config --cflags libndpi) -I. -I./dependencies -I./dependencies/jsmn -I./dependencies/uthash/include \
-o /tmp/a.out \
-lpcap $(pkg-config --libs libndpi) -pthread -lm
cc -Wall -Wextra -std=gnu99 \
${{ matrix.poll }} -DENABLE_MEMORY_PROFILING=1 \
nDPIsrvd.c nio.c utils.c \
-I. -I./dependencies -I./dependencies/jsmn -I./dependencies/uthash/include \
-o /tmp/a.out
- name: Build single nDPId/nDPIsrvd executables (invoke CC directly - static nDPI lib)
if: startsWith(matrix.ndpi_build, '-DBUILD_NDPI=ON') && startsWith(matrix.coverage, '-DENABLE_COVERAGE=OFF') && startsWith(matrix.sanitizer, '-DENABLE_SANITIZER=ON') && startsWith(matrix.ndpid_gcrypt, '-DNDPI_WITH_GCRYPT=OFF') && startsWith(matrix.ndpid_zlib, '-DENABLE_ZLIB=ON')
run: |
cc -Wall -Wextra -std=gnu99 ${{ matrix.poll }} -DENABLE_ZLIB=1 -DENABLE_MEMORY_PROFILING=1 \
-fsanitize=address -fsanitize=undefined -fno-sanitize=alignment -fsanitize=enum -fsanitize=leak \
nDPId.c nio.c utils.c \
-I./build/libnDPI/include/ndpi -I. -I./dependencies -I./dependencies/jsmn -I./dependencies/uthash/include \
-o /tmp/a.out \
-lpcap ./build/libnDPI/lib/libndpi.a -pthread -lm -lz
- name: Test EXEC
run: |
./build/nDPId-test
@@ -206,8 +268,9 @@ jobs:
if: startsWith(matrix.os, 'macOS') == false && startsWith(matrix.ndpid_gcrypt, '-DNDPI_WITH_GCRYPT=OFF')
run: |
./test/run_tests.sh ./libnDPI ./build/nDPId-test
./test/run_config_tests.sh ./libnDPI ./build/nDPId-test
- name: Daemon
if: endsWith(matrix.compiler, 'gcc') || endsWith(matrix.compiler, 'clang')
if: startsWith(matrix.compiler, 'gcc') || endsWith(matrix.compiler, 'clang')
run: |
make -C ./build daemon VERBOSE=1
make -C ./build daemon VERBOSE=1
@@ -219,13 +282,27 @@ jobs:
if: startsWith(matrix.os, 'macOS') == false && matrix.upload == false
run: |
make -C ./build dist
RAND_ID=$(( ( RANDOM ) + 1 ))
mkdir "nDPId-dist-${RAND_ID}"
cd "nDPId-dist-${RAND_ID}"
tar -xjf ../nDPId-*.tar.bz2
cd ./nDPId-*
cmake -S . -B ./build \
-DENABLE_DBUS=ON -DENABLE_CURL=ON -DENABLE_SYSTEMD=ON \
${{ matrix.poll }} ${{ matrix.coverage }} ${{ matrix.sanitizer }} ${{ matrix.ndpi_build }} \
${{ matrix.ndpid_examples }} ${{ matrix.ndpid_rust_examples }} ${{ matrix.ndpid_zlib }} ${{ matrix.ndpid_gcrypt }} ${{ matrix.ndpid_extras }}
cd ../..
rm -rf "nDPId-dist-${RAND_ID}"
- name: CPack DEB
if: startsWith(matrix.os, 'macOS') == false
run: |
cd ./build && cpack -G DEB && sudo dpkg -i nDPId-*.deb && cd ..
cd ./build && cpack -G DEB && \
sudo dpkg -i nDPId-*.deb && \
sudo apt purge ndpid && \
sudo dpkg -i nDPId-*.deb && cd ..
- name: Upload DEB
if: startsWith(matrix.os, 'macOS') == false && matrix.upload
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: nDPId-debian-packages_${{ matrix.compiler }}${{ matrix.upload_suffix }}
path: build/*.deb
@@ -237,26 +314,96 @@ jobs:
sudo systemctl enable ndpid@lo
sudo systemctl start ndpid@lo
SYSTEMCTL_RET=3; while (( $SYSTEMCTL_RET == 3 )); do systemctl is-active ndpid@lo.service; SYSTEMCTL_RET=$?; sleep 1; done
sudo systemctl status ndpisrvd.service ndpid@lo.service || true
sudo systemctl show ndpisrvd.service ndpid@lo.service -p SubState,ActiveState || true
sudo systemctl status ndpisrvd.service ndpid@lo.service
sudo systemctl show ndpisrvd.service ndpid@lo.service -p SubState,ActiveState
sudo dpkg -i ./build/nDPId-*.deb
sudo systemctl status ndpisrvd.service ndpid@lo.service
sudo systemctl show ndpisrvd.service ndpid@lo.service -p SubState,ActiveState
sudo systemctl stop ndpisrvd.service
journalctl --no-tail --no-pager -u ndpisrvd.service -u ndpid@lo.service
- name: Build PF_RING and nDPId (invoke CC directly - dynamic nDPI lib)
if: startsWith(matrix.ndpi_build, '-DBUILD_NDPI=ON') && startsWith(matrix.coverage, '-DENABLE_COVERAGE=OFF') && startsWith(matrix.sanitizer, '-DENABLE_SANITIZER=ON') && startsWith(matrix.ndpid_gcrypt, '-DNDPI_WITH_GCRYPT=OFF') && startsWith(matrix.ndpid_zlib, '-DENABLE_ZLIB=ON')
run: |
git clone --depth=1 https://github.com/ntop/PF_RING.git
cd PF_RING/userland && ./configure && make && sudo make install prefix=/usr
cd ../..
cc -Wall -Wextra -std=gnu99 ${{ matrix.poll }} -DENABLE_PFRING=1 -DENABLE_ZLIB=1 -DENABLE_MEMORY_PROFILING=1 \
-fsanitize=address -fsanitize=undefined -fno-sanitize=alignment -fsanitize=enum -fsanitize=leak \
nDPId.c npfring.c nio.c utils.c \
-I. -I./dependencies -I./dependencies/jsmn -I./dependencies/uthash/include \
-I./build/libnDPI/include/ndpi \
-I./PF_RING/userland/lib -I./PF_RING/kernel \
-o /tmp/a.out \
-ldl /usr/lib/libpfring.a -lpcap ./build/libnDPI/lib/libndpi.a -pthread -lm -lz
- name: Build against libnDPI-${{ matrix.ndpi_min_version }}
if: matrix.upload == false && startsWith(matrix.os, 'ubuntu')
if: startsWith(matrix.os, 'ubuntu')
run: |
mkdir build-local-ndpi && cd build-local-ndpi
WGET_RET=0
wget 'https://github.com/ntop/nDPI/archive/refs/tags/${{ matrix.ndpi_min_version }}.tar.gz' || { WGET_RET=$?; true; }
echo "wget returned: ${WGET_RET}"
test $WGET_RET -ne 8 || echo "::warning file=nDPId.c::New libnDPI release required to build against release tarball."
test $WGET_RET -ne 0 || { tar -xzvf ${{ matrix.ndpi_min_version }}.tar.gz && \
cd nDPI-${{ matrix.ndpi_min_version }} && \
./autogen.sh --prefix=/usr --with-only-libndpi CC="${{ matrix.compiler }}" CXX=false \
test $WGET_RET -ne 8 && { \
tar -xzvf ${{ matrix.ndpi_min_version }}.tar.gz; }
test $WGET_RET -ne 8 || { \
echo "::warning file=nDPId.c::New libnDPI release required to build against release tarball, falling back to dev branch."; \
wget 'http://github.com/ntop/nDPI/archive/refs/heads/dev.tar.gz'; \
WGET_RET=$?; \
tar -xzvf dev.tar.gz; \
mv -v 'nDPI-dev' 'nDPI-${{ matrix.ndpi_min_version }}'; }
test $WGET_RET -ne 0 || { cd nDPI-${{ matrix.ndpi_min_version }}; \
NDPI_CONFIGURE_ARGS=''; \
test 'x${{ matrix.ndpid_gcrypt }}' != 'x-DNDPI_WITH_GCRYPT=ON' || NDPI_CONFIGURE_ARGS="$NDPI_CONFIGURE_ARGS --with-local-libgcrypt"; \
test 'x${{ matrix.sanitizer }}' != 'x-DENABLE_SANITIZER=ON' || NDPI_CONFIGURE_ARGS="$NDPI_CONFIGURE_ARGS --with-sanitizer"; \
echo "Configure arguments: '$NDPI_CONFIGURE_ARGS'"; \
./autogen.sh; \
./configure --prefix=/usr --with-only-libndpi $NDPI_CONFIGURE_ARGS CC="${{ matrix.compiler }}" CXX=false \
CFLAGS="$CMAKE_C_FLAGS" && make && sudo make install; cd ..; }
test $WGET_RET -ne 0 || { echo "::info file=CMakeLists.txt::Running CMake.."; \
cmake -S .. -DCMAKE_C_COMPILER="$CMAKE_C_COMPILER" -DCMAKE_C_FLAGS="$CMAKE_C_FLAGS" \
ls -alhR /usr/include/ndpi
cd ..
test $WGET_RET -ne 0 || { echo "Running CMake.. (pkgconfig)"; \
cmake -S . -B ./build-local-pkgconfig \
-DCMAKE_C_COMPILER="$CMAKE_C_COMPILER" -DCMAKE_C_FLAGS="$CMAKE_C_FLAGS" \
-DCMAKE_C_EXE_LINKER_FLAGS="$CMAKE_C_EXE_LINKER_FLAGS" \
-DBUILD_NDPI=OFF -DENABLE_SANITIZER=OFF \
-DBUILD_NDPI=OFF -DBUILD_EXAMPLES=ON \
-DENABLE_DBUS=ON -DENABLE_CURL=ON -DENABLE_SYSTEMD=ON \
${{ matrix.poll }} ${{ matrix.coverage }} \
${{ matrix.ndpid_examples }}; }
test $WGET_RET -ne 0 || { echo "::info file=CMakeLists.txt:Running Make.."; cmake --build . --verbose; }
${{ matrix.sanitizer }} ${{ matrix.ndpid_examples }} ${{ matrix.ndpid_rust_examples }}; }
test $WGET_RET -ne 0 || { echo "Running Make.. (pkgconfig)"; \
cmake --build ./build-local-pkgconfig --verbose; }
test $WGET_RET -ne 0 || { echo "Testing Executable.. (pkgconfig)"; \
./build-local-pkgconfig/nDPId-test; \
./build-local-pkgconfig/nDPId -h || test $? -eq 1; \
./build-local-pkgconfig/nDPIsrvd -h || test $? -eq 1; }
test $WGET_RET -ne 0 || { echo "Running CMake.. (static)"; \
cmake -S . -B ./build-local-static \
-DCMAKE_C_COMPILER="$CMAKE_C_COMPILER" -DCMAKE_C_FLAGS="$CMAKE_C_FLAGS" \
-DCMAKE_C_EXE_LINKER_FLAGS="$CMAKE_C_EXE_LINKER_FLAGS" \
-DBUILD_NDPI=OFF -DBUILD_EXAMPLES=ON \
-DENABLE_DBUS=ON -DENABLE_CURL=ON -DENABLE_SYSTEMD=ON \
-DNDPI_NO_PKGCONFIG=ON -DSTATIC_LIBNDPI_INSTALLDIR=/usr \
${{ matrix.poll }} ${{ matrix.coverage }} ${{ matrix.ndpid_gcrypt }} \
${{ matrix.sanitizer }} ${{ matrix.ndpid_examples }} ${{ matrix.ndpid_rust_examples }}; }
test $WGET_RET -ne 0 || { echo "Running Make.. (static)"; \
cmake --build ./build-local-static --verbose; }
test $WGET_RET -ne 0 || { echo "Testing Executable.. (static)"; \
./build-local-static/nDPId-test; \
./build-local-static/nDPId -h || test $? -eq 1; \
./build-local-static/nDPIsrvd -h || test $? -eq 1; }
test $WGET_RET -ne 0 || test ! -d ./PF_RING || { echo "Running CMake.. (PF_RING)"; \
cmake -S . -B ./build-local-pfring \
-DCMAKE_C_COMPILER="$CMAKE_C_COMPILER" -DCMAKE_C_FLAGS="$CMAKE_C_FLAGS" \
-DCMAKE_C_EXE_LINKER_FLAGS="$CMAKE_C_EXE_LINKER_FLAGS" \
-DBUILD_NDPI=OFF -DBUILD_EXAMPLES=ON -DENABLE_PFRING=ON \
-DENABLE_DBUS=ON -DENABLE_CURL=ON -DENABLE_SYSTEMD=ON \
-DNDPI_NO_PKGCONFIG=ON -DSTATIC_LIBNDPI_INSTALLDIR=/usr \
-DPFRING_LINK_STATIC=OFF \
-DPFRING_INSTALLDIR=/usr -DPFRING_KERNEL_INC="$(realpath ./PF_RING/kernel)" \
${{ matrix.poll }} ${{ matrix.coverage }} ${{ matrix.ndpid_gcrypt }} \
${{ matrix.sanitizer }} ${{ matrix.ndpid_examples }} ${{ matrix.ndpid_rust_examples }}; }
test $WGET_RET -ne 0 || test ! -d ./PF_RING || { echo "Running Make.. (PF_RING)"; \
cmake --build ./build-local-pfring --verbose; }
test $WGET_RET -ne 0 || test ! -d ./PF_RING || { echo "Testing Executable.. (PF_RING)"; \
./build-local-pfring/nDPId-test; \
./build-local-pfring/nDPId -h || test $? -eq 1; \
./build-local-pfring/nDPIsrvd -h || test $? -eq 1; }
test $WGET_RET -eq 0 -o $WGET_RET -eq 8
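The long run-block above chains every step behind a `test $WGET_RET -ne 0 || { ...; }` guard, so one failed download (wget exit code 8, server-side error) skips all subsequent steps without aborting the script. A tiny sketch of that idiom with a simulated return code:

```shell
#!/bin/sh
WGET_RET=0   # simulate a successful download; set to 8 to watch every step get skipped

# Each guarded group runs only while WGET_RET is 0; the guard itself
# always exits 0, so the surrounding script keeps going either way.
test $WGET_RET -ne 0 || { echo "step: unpack"; }
test $WGET_RET -ne 0 || { echo "step: configure"; }
test $WGET_RET -ne 0 || { echo "step: build"; }

# Final gate: the job passes for "download ok" (0) or "release tarball missing" (8).
test $WGET_RET -eq 0 -o $WGET_RET -eq 8 && echo "job passes"
```

Any other wget exit code falls through the final `test` and fails the CI step.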


@@ -13,49 +13,54 @@ jobs:
env:
BUILD_WRAPPER_OUT_DIR: build_wrapper_output_directory
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
with:
fetch-depth: 0
# - uses: actions/checkout@v3
# - name: Set up Python 3.8 for gcovr
# uses: actions/setup-python@v4
# with:
# python-version: 3.8
# - name: install gcovr 5.0
# run: |
# pip install gcovr==5.0 # 5.1 is not supported
- name: Set up Python 3.8 for gcovr
uses: actions/setup-python@v4
with:
python-version: 3.8
- name: install gcovr 5.0
run: |
pip install gcovr==5.0 # 5.1 is not supported
- name: Install sonar-scanner and build-wrapper
uses: SonarSource/sonarcloud-github-c-cpp@v2
uses: SonarSource/sonarcloud-github-c-cpp@v3.2.0
- name: Install Prerequisites
run: |
sudo apt-get update
sudo apt-get install autoconf automake cmake libtool pkg-config gettext libjson-c-dev flex bison libpcap-dev zlib1g-dev
sudo apt-get install autoconf automake cmake lcov \
libtool pkg-config gettext \
libjson-c-dev flex bison \
libcurl4-openssl-dev libpcap-dev zlib1g-dev
- name: Run build-wrapper
run: |
mkdir build
cmake -S . -B build -DBUILD_NDPI=ON -DENABLE_ZLIB=ON -DNDPI_WITH_GCRYPT=OFF
build-wrapper-linux-x86-64 --out-dir ${{ env.BUILD_WRAPPER_OUT_DIR }} cmake --build build/ --config Release
# - name: Run tests
# run: |
# for file in $(ls libnDPI/tests/cfgs/*/pcap/*.pcap libnDPI/tests/cfgs/*/pcap/*.pcapng libnDPI/tests/cfgs/*/pcap/*.cap); do \
# echo -n "${file} "; \
# ./build/nDPId-test "${file}" >/dev/null 2>/dev/null; \
# echo "[ok]"; \
# done
# - name: Collect coverage into one XML report
# run: |
# gcovr --sonarqube > coverage.xml
build-wrapper-linux-x86-64 --out-dir ${{ env.BUILD_WRAPPER_OUT_DIR }} ./scripts/build-sonarcloud.sh
- name: Run tests
run: |
for file in $(ls libnDPI/tests/cfgs/*/pcap/*.pcap libnDPI/tests/cfgs/*/pcap/*.pcapng libnDPI/tests/cfgs/*/pcap/*.cap); do \
echo -n "${file} "; \
cd ./build-sonarcloud; \
./nDPId-test "../${file}" >/dev/null 2>/dev/null; \
cd ..; \
echo "[ok]"; \
done
mkdir -p gcov_report
cd gcov_report
gcov ../build-sonarcloud/CMakeFiles/nDPId-test.dir/nDPId-test.c.o
cd ..
- name: Run sonar-scanner
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
run: |
sonar-scanner \
--define sonar.projectName=nDPId \
--define sonar.projectVersion=1.7 \
--define sonar.sourceEncoding=UTF-8 \
--define sonar.branch.name=${GITHUB_HEAD_REF:-${GITHUB_REF#refs/heads/}} \
--define sonar.cfamily.build-wrapper-output="${{ env.BUILD_WRAPPER_OUT_DIR }}" \
--define sonar.organization=lnslbrty \
--define sonar.projectKey=lnslbrty_nDPId \
--define sonar.exclusions=dependencies/uthash/tests/** \
--define sonar.verbose=true \
--define sonar.python.version=3.8 \
--define sonar.cfamily.gcov.reportsPath=coverage.xml
--define sonar.cfamily.compile-commands=${{ env.BUILD_WRAPPER_OUT_DIR }}/compile_commands.json \
--define sonar.cfamily.gcov.reportsPath=gcov_report \
--define sonar.exclusions=build-sonarcloud/**,libnDPI/**,test/results/**,dependencies/jsmn/**,dependencies/uthash/**,examples/js-rt-analyzer-frontend/**,examples/js-rt-analyzer/**,examples/c-collectd/www/**,examples/py-flow-dashboard/assets/**


@@ -3,6 +3,9 @@ image: debian:stable
stages:
- build_and_test
variables:
GIT_CLONE_PATH: '$CI_BUILDS_DIR/$CI_JOB_ID/$CI_PROJECT_NAME'
before_script:
- export DEBIAN_FRONTEND=noninteractive
- apt-get update -qq
@@ -67,7 +70,7 @@ build_and_test_static_libndpi:
- >
if ldd ./build-cmake-submodule/nDPId | grep -qoEi libndpi; then \
echo 'nDPId linked against a static libnDPI should not contain a shared linked libnDPI.' >&2; false; fi
- cc nDPId.c nio.c utils.c -I./build-cmake-submodule/libnDPI/include/ndpi -I. -I./dependencies -I./dependencies/jsmn -I./dependencies/uthash/include -o /tmp/a.out -lpcap ./build-cmake-submodule/libnDPI/lib/libndpi.a -pthread -lm -lz
- cc -Wall -Wextra -std=gnu99 nDPId.c nio.c utils.c -I./build-cmake-submodule/libnDPI/include/ndpi -I. -I./dependencies -I./dependencies/jsmn -I./dependencies/uthash/include -o /tmp/a.out -lpcap ./build-cmake-submodule/libnDPI/lib/libndpi.a -pthread -lm -lz
artifacts:
expire_in: 1 week
paths:
@@ -104,7 +107,8 @@ build_dynamic_libndpi:
# pkg-config dynamic linked build
- git clone https://github.com/ntop/nDPI.git
- cd nDPI
- ./autogen.sh --prefix="$(realpath ../_install)" --enable-option-checking=fatal
- ./autogen.sh
- ./configure --prefix="$(realpath ../_install)" --enable-option-checking=fatal
- make install V=s
- cd ..
- tree ./_install


@@ -1,5 +1,24 @@
# CHANGELOG
#### nDPId 1.7 (Oct 2024)
- Read and parse configuration files for nDPId (+ libnDPI) and nDPIsrvd
- Added loading risk domains from a file (`-R`, thanks to @UnveilTech)
- Added Filebeat configuration file
- Improved hostname handling; will now always be part of `analyse`/`end`/`idle` events (if dissected)
- Improved Documentation (INSTALL / Schema)
- Added PF\_RING support
- Improved nDPIsrvd-analyse to write global stats to a CSV
- Added global (heap) memory stats for daemon status events (if enabled)
- Fixed IPv6 address/netmask retrieval on some systems
- Improved nDPIsrvd-collect; gauges and counters are now handled the right way
- Added nDPId Grafana dashboard
- Fixed `detection-update` event bug; was thrown even if nothing changed
- Fixed `not-detected` event spam if detection not completed (in some rare cases)
- Improved InfluxDB push daemon (severity parsing / gauge handling)
- Improved zLib compression
- Fixed nDPIsrvd-collectd missing escape character
#### nDPId 1.6 (Nov 2023)
- Added Event I/O abstraction layer (supporting only poll/epoll by now)
@@ -17,7 +36,7 @@
- Fixed a bug in base64 encoding which could lead to invalid base64 strings
- Added some machine learning examples
- Fixed various smaller bugs
- Fixed nDPIsrvd bug which causes invalid JSON strings sent to Distributors
- Fixed nDPIsrvd bug which causes invalid JSON messages sent to Distributors
#### nDPId 1.5 (Apr 2022)


@@ -15,22 +15,36 @@ if("${PROJECT_SOURCE_DIR}" STREQUAL "${PROJECT_BINARY_DIR}")
"Please remove ${PROJECT_SOURCE_DIR}/CMakeCache.txt\n"
"and\n"
"${PROJECT_SOURCE_DIR}/CMakeFiles\n"
"Create a build directory somewhere and run CMake again.")
"Create a build directory somewhere and run CMake again.\n"
"Or run: 'cmake -S ${PROJECT_SOURCE_DIR} -B ./your-custom-build-dir [CMAKE-OPTIONS]'")
endif()
set(CMAKE_MODULE_PATH ${CMAKE_SOURCE_DIR}/cmake)
find_package(PkgConfig REQUIRED)
set(CPACK_PACKAGE_CONTACT "toni@impl.cc")
set(CPACK_DEBIAN_PACKAGE_NAME "nDPId")
set(CPACK_DEBIAN_PACKAGE_SECTION "network")
set(CPACK_DEBIAN_PACKAGE_DESCRIPTION "nDPId is a set of daemons and tools to capture, process and classify network traffic.")
set(CPACK_DEBIAN_PACKAGE_MAINTAINER "Toni Uhlig <toni@impl.cc>")
set(CPACK_DEBIAN_PACKAGE_CONTROL_EXTRA "${CMAKE_SOURCE_DIR}/packages/debian/preinst;${CMAKE_SOURCE_DIR}/packages/debian/prerm;${CMAKE_SOURCE_DIR}/packages/debian/postrm")
set(CPACK_DEBIAN_PACKAGE_CONTROL_STRICT_PERMISSION TRUE)
set(CPACK_DEBIAN_PACKAGE_SHLIBDEPS ON)
set(CPACK_DEBIAN_DEBUGINFO_PACKAGE ON)
set(CPACK_RPM_PACKAGE_LICENSE "GPL-3")
set(CPACK_RPM_PACKAGE_VENDOR "Toni Uhlig")
set(CPACK_RPM_PACKAGE_URL "https://www.github.com/utoni/nDPId.git")
set(CPACK_RPM_PACKAGE_DESCRIPTION "nDPId is a set of daemons and tools to capture, process and classify network traffic.")
set(CPACK_RPM_PRE_INSTALL_SCRIPT_FILE "${CMAKE_SOURCE_DIR}/packages/redhat/pre_install")
set(CPACK_RPM_PRE_UNINSTALL_SCRIPT_FILE "${CMAKE_SOURCE_DIR}/packages/redhat/pre_uninstall")
set(CPACK_RPM_POST_UNINSTALL_SCRIPT_FILE "${CMAKE_SOURCE_DIR}/packages/redhat/post_uninstall")
set(CPACK_STRIP_FILES ON)
set(CPACK_PACKAGE_VERSION_MAJOR 1)
set(CPACK_PACKAGE_VERSION_MINOR 6)
set(CPACK_PACKAGE_VERSION_MINOR 7)
set(CPACK_PACKAGE_VERSION_PATCH 0)
# Note: CPACK_PACKAGING_INSTALL_PREFIX and CMAKE_INSTALL_PREFIX are *not* the same.
# It is used only to ease environment file loading via systemd.
set(CPACK_PACKAGING_INSTALL_PREFIX "${CMAKE_INSTALL_PREFIX}")
set(CMAKE_MACOSX_RPATH 1)
include(CPack)
include(CheckFunctionExists)
@@ -80,13 +94,83 @@ option(ENABLE_SANITIZER_THREAD "Enable TSAN (does not work together with ASAN)."
option(ENABLE_MEMORY_PROFILING "Enable dynamic memory tracking." OFF)
option(ENABLE_ZLIB "Enable zlib support for nDPId (experimental)." OFF)
option(ENABLE_SYSTEMD "Install systemd components." OFF)
option(ENABLE_CRYPTO "Enable OpenSSL cryptographic support in nDPId/nDPIsrvd." OFF)
option(BUILD_EXAMPLES "Build C examples." ON)
option(BUILD_RUST_EXAMPLES "Build Rust examples." OFF)
if(BUILD_EXAMPLES)
option(ENABLE_DBUS "Build DBus notification example." OFF)
option(ENABLE_CURL "Build influxdb data write example." OFF)
endif()
option(ENABLE_PFRING "Enable PF_RING support for nDPId (experimental)" OFF)
option(BUILD_NDPI "Clone and build nDPI from github." OFF)
if(BUILD_NDPI)
if(APPLE)
message(WARNING "Building libnDPI from CMake is not supported on Apple and may fail.")
if(ENABLE_PFRING)
option(PFRING_LINK_STATIC "Link against a static version of pfring." ON)
set(PFRING_KERNEL_INC "" CACHE STRING "Path to PFRING kernel module include directory.")
set(PFRING_DEFS "-DENABLE_PFRING=1")
if(PFRING_KERNEL_INC STREQUAL "")
message(FATAL_ERROR "PFRING_KERNEL_INC needs to be set to the PFRING kernel module include directory.")
endif()
if(NOT EXISTS "${PFRING_KERNEL_INC}/linux/pf_ring.h")
message(FATAL_ERROR "Expected to find <linux/pf_ring.h> below ${PFRING_KERNEL_INC}, but none found.")
endif()
set(PFRING_INSTALLDIR "/opt/PF_RING/usr" CACHE STRING "")
set(PFRING_INC "${PFRING_INSTALLDIR}/include")
if(NOT EXISTS "${PFRING_INC}")
message(FATAL_ERROR "Include directory \"${PFRING_INC}\" does not exist!")
endif()
if(PFRING_LINK_STATIC)
if(CMAKE_SIZEOF_VOID_P EQUAL 8)
if(EXISTS "${PFRING_INSTALLDIR}/lib64")
set(STATIC_PFRING_LIB "${PFRING_INSTALLDIR}/lib64/libpfring.a")
else()
set(STATIC_PFRING_LIB "${PFRING_INSTALLDIR}/lib/libpfring.a")
endif()
else()
if(EXISTS "${PFRING_INSTALLDIR}/lib32")
set(STATIC_PFRING_LIB "${PFRING_INSTALLDIR}/lib32/libpfring.a")
else()
set(STATIC_PFRING_LIB "${PFRING_INSTALLDIR}/lib/libpfring.a")
endif()
endif()
if(NOT EXISTS "${STATIC_PFRING_LIB}")
message(FATAL_ERROR "Static library \"${STATIC_PFRING_LIB}\" does not exist!")
endif()
else()
if(CMAKE_SIZEOF_VOID_P EQUAL 8)
if(EXISTS "${PFRING_INSTALLDIR}/lib64")
find_library(PF_RING_LIB pfring PATHS "${PFRING_INSTALLDIR}/lib64")
else()
find_library(PF_RING_LIB pfring PATHS "${PFRING_INSTALLDIR}/lib")
endif()
else()
if(EXISTS "${PFRING_INSTALLDIR}/lib32")
find_library(PF_RING_LIB pfring PATHS "${PFRING_INSTALLDIR}/lib32")
else()
find_library(PF_RING_LIB pfring PATHS "${PFRING_INSTALLDIR}/lib")
endif()
endif()
if(NOT PF_RING_LIB)
message(FATAL_ERROR "libpfring.so not found below ${PFRING_INSTALLDIR}/{lib,lib32,lib64}")
endif()
endif()
if(NOT EXISTS "${PFRING_INSTALLDIR}/include/pfring.h")
message(FATAL_ERROR "Expected to find <include/pfring.h> inside ${PFRING_INSTALLDIR}, but none found.")
endif()
else()
unset(PFRING_INSTALLDIR CACHE)
unset(PFRING_INC CACHE)
unset(STATIC_PFRING_LIB CACHE)
unset(PFRING_LINK_STATIC CACHE)
endif()
if(BUILD_NDPI)
option(BUILD_NDPI_FORCE_GIT_UPDATE "Forcefully instruments nDPI build script to update the git submodule." OFF)
unset(NDPI_NO_PKGCONFIG CACHE)
unset(STATIC_LIBNDPI_INSTALLDIR CACHE)
@@ -112,22 +196,38 @@ else()
unset(NDPI_WITH_MAXMINDDB CACHE)
endif()
add_executable(nDPId nDPId.c nio.c utils.c)
if(ENABLE_PFRING)
set(NDPID_PFRING_SRCS npfring.c)
endif()
if(ENABLE_CRYPTO)
set(CRYPTO_SRCS ncrypt.c)
endif()
add_executable(nDPId nDPId.c ${NDPID_PFRING_SRCS} ${CRYPTO_SRCS} nio.c utils.c)
add_executable(nDPIsrvd nDPIsrvd.c nio.c utils.c)
add_executable(nDPId-test nDPId-test.c)
add_executable(nDPId-test nDPId-test.c ${NDPID_PFRING_SRCS} ${CRYPTO_SRCS})
add_custom_target(umask_check)
add_custom_command(
TARGET umask_check
PRE_BUILD
COMMAND ${CMAKE_SOURCE_DIR}/scripts/umask-check.sh
)
add_dependencies(nDPId umask_check)
add_custom_target(dist)
add_custom_command(
TARGET dist
PRE_BUILD
COMMAND "${CMAKE_SOURCE_DIR}/scripts/make-dist.sh"
)
add_custom_target(daemon)
add_custom_command(
    TARGET daemon
POST_BUILD
COMMAND env nDPIsrvd_ARGS='-C 1024' "${CMAKE_SOURCE_DIR}/scripts/daemon.sh" "$<TARGET_FILE:nDPId>" "$<TARGET_FILE:nDPIsrvd>"
DEPENDS nDPId nDPIsrvd
)
add_dependencies(daemon nDPId nDPIsrvd)
if(CMAKE_CROSSCOMPILING)
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
@@ -159,14 +259,22 @@ if(ENABLE_COVERAGE)
COMMAND genhtml -o "${CMAKE_BINARY_DIR}/coverage_report" "${CMAKE_BINARY_DIR}/lcov.info"
DEPENDS nDPId nDPId-test nDPIsrvd
)
add_custom_target(coverage-clean)
add_custom_command(
TARGET coverage-clean
COMMAND find "${CMAKE_BINARY_DIR}" "${CMAKE_SOURCE_DIR}/libnDPI" -name "*.gcda" -delete
POST_BUILD
)
add_custom_target(coverage-view)
add_custom_command(
TARGET coverage-view
COMMAND cd "${CMAKE_BINARY_DIR}/coverage_report" && python3 -m http.server
DEPENDS "${CMAKE_BINARY_DIR}/coverage_report/nDPId/index.html"
POST_BUILD
)
add_dependencies(coverage-view coverage)
endif()
if(ENABLE_SANITIZER)
# TODO: Check for `-fsanitize-memory-track-origins` and add if available?
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -fsanitize=address -fsanitize=undefined -fno-sanitize=alignment -fsanitize=enum -fsanitize=leak")
endif()
if(ENABLE_SANITIZER_THREAD)
@@ -176,24 +284,35 @@ if(ENABLE_ZLIB)
set(ZLIB_DEFS "-DENABLE_ZLIB=1")
pkg_check_modules(ZLIB REQUIRED zlib)
endif()
if(ENABLE_DBUS)
pkg_check_modules(DBUS REQUIRED dbus-1)
if(BUILD_EXAMPLES)
if(ENABLE_DBUS)
pkg_check_modules(DBUS REQUIRED dbus-1)
endif()
if(ENABLE_CURL)
pkg_check_modules(CURL REQUIRED libcurl)
endif()
endif()
if(NDPI_WITH_GCRYPT)
message(STATUS "nDPI: Enable GCRYPT")
set(NDPI_ADDITIONAL_ARGS "${NDPI_ADDITIONAL_ARGS} --with-local-libgcrypt")
endif()
if(NDPI_WITH_PCRE)
message(STATUS "nDPI: Enable PCRE")
set(NDPI_ADDITIONAL_ARGS "${NDPI_ADDITIONAL_ARGS} --with-pcre")
endif()
if(NDPI_WITH_MAXMINDDB)
message(STATUS "nDPI: Enable MAXMINDDB")
set(NDPI_ADDITIONAL_ARGS "${NDPI_ADDITIONAL_ARGS} --with-maxminddb")
endif()
if(ENABLE_COVERAGE)
message(STATUS "nDPI: Enable Coverage")
set(NDPI_ADDITIONAL_ARGS "${NDPI_ADDITIONAL_ARGS} --enable-code-coverage")
if(BUILD_NDPI)
if(NDPI_WITH_GCRYPT)
message(STATUS "nDPI: Enable GCRYPT")
set(NDPI_ADDITIONAL_ARGS "${NDPI_ADDITIONAL_ARGS} --with-local-libgcrypt")
endif()
if(NDPI_WITH_PCRE)
message(STATUS "nDPI: Enable PCRE")
set(NDPI_ADDITIONAL_ARGS "${NDPI_ADDITIONAL_ARGS} --with-pcre2")
endif()
if(NDPI_WITH_MAXMINDDB)
message(STATUS "nDPI: Enable MAXMINDDB")
set(NDPI_ADDITIONAL_ARGS "${NDPI_ADDITIONAL_ARGS} --with-maxminddb")
endif()
if(ENABLE_COVERAGE)
message(STATUS "nDPI: Enable Coverage")
set(NDPI_ADDITIONAL_ARGS "${NDPI_ADDITIONAL_ARGS} --enable-code-coverage")
endif()
if(CMAKE_BUILD_TYPE STREQUAL "Debug" OR CMAKE_BUILD_TYPE STREQUAL "")
message(STATUS "nDPI: Enable Debug Build")
set(NDPI_ADDITIONAL_ARGS "${NDPI_ADDITIONAL_ARGS} --enable-debug-build --enable-debug-messages")
endif()
endif()
execute_process(
@@ -220,7 +339,9 @@ if(CMAKE_CROSSCOMPILING)
add_definitions("-DCROSS_COMPILATION=1")
endif()
if(ENABLE_MEMORY_PROFILING)
message(WARNING "ENABLE_MEMORY_PROFILING should not be used in production environments.")
if(NOT CMAKE_BUILD_TYPE STREQUAL "Debug" AND NOT CMAKE_BUILD_TYPE STREQUAL "")
message(WARNING "ENABLE_MEMORY_PROFILING should not be used in production environments.")
endif()
add_definitions("-DENABLE_MEMORY_PROFILING=1"
"-Duthash_malloc=nDPIsrvd_uthash_malloc"
"-Duthash_free=nDPIsrvd_uthash_free")
@@ -271,13 +392,19 @@ if(BUILD_NDPI)
add_dependencies(nDPId-test libnDPI)
endif()
if(ENABLE_CRYPTO)
find_package(OpenSSL REQUIRED)
set(OSSL_DEFS "-DENABLE_CRYPTO=1")
set(OSSL_LIBRARY "${OPENSSL_SSL_LIBRARY}" "${OPENSSL_CRYPTO_LIBRARY}")
endif()
if(STATIC_LIBNDPI_INSTALLDIR OR BUILD_NDPI OR NDPI_NO_PKGCONFIG)
if(NDPI_WITH_GCRYPT)
find_package(GCRYPT "1.4.2" REQUIRED)
endif()
if(NDPI_WITH_PCRE)
pkg_check_modules(PCRE REQUIRED libpcre>=8.39)
pkg_check_modules(PCRE REQUIRED libpcre2-8)
endif()
if(NDPI_WITH_MAXMINDDB)
@@ -289,13 +416,13 @@ if(STATIC_LIBNDPI_INSTALLDIR OR BUILD_NDPI)
add_definitions("-DLIBNDPI_STATIC=1")
set(STATIC_LIBNDPI_INC "${STATIC_LIBNDPI_INSTALLDIR}/include/ndpi")
if(CMAKE_SIZEOF_VOID_P EQUAL 8)
if(EXISTS "${STATIC_LIBNDPI_INSTALLDIR}/lib64")
if(EXISTS "${STATIC_LIBNDPI_INSTALLDIR}/lib64/libndpi.a")
set(STATIC_LIBNDPI_LIB "${STATIC_LIBNDPI_INSTALLDIR}/lib64/libndpi.a")
else()
set(STATIC_LIBNDPI_LIB "${STATIC_LIBNDPI_INSTALLDIR}/lib/libndpi.a")
endif()
else()
if(EXISTS "${STATIC_LIBNDPI_INSTALLDIR}/lib32")
if(EXISTS "${STATIC_LIBNDPI_INSTALLDIR}/lib32/libndpi.a")
set(STATIC_LIBNDPI_LIB "${STATIC_LIBNDPI_INSTALLDIR}/lib32/libndpi.a")
else()
set(STATIC_LIBNDPI_LIB "${STATIC_LIBNDPI_INSTALLDIR}/lib/libndpi.a")
@@ -313,9 +440,9 @@ if(STATIC_LIBNDPI_INSTALLDIR OR BUILD_NDPI)
unset(pkgcfg_lib_NDPI_ndpi CACHE)
else()
if(NOT NDPI_NO_PKGCONFIG)
pkg_check_modules(NDPI REQUIRED libndpi>=4.7.0)
pkg_check_modules(NDPI REQUIRED libndpi>=5.0.0)
if(NOT pkgcfg_lib_NDPI_ndpi)
find_package(NDPI "4.8.0" REQUIRED)
find_package(NDPI "5.0.0" REQUIRED)
endif()
unset(STATIC_LIBNDPI_INC CACHE)
@@ -334,29 +461,47 @@ if(NOT pkgcfg_lib_PCAP_pcap)
endif()
target_compile_options(nDPId PRIVATE "-pthread")
target_compile_definitions(nDPId PRIVATE -D_GNU_SOURCE=1 -DPKG_VERSION=\"${PKG_VERSION}\" -DGIT_VERSION=\"${GIT_VERSION}\" ${NDPID_DEFS} ${EPOLL_DEFS} ${ZLIB_DEFS})
target_include_directories(nDPId PRIVATE "${STATIC_LIBNDPI_INC}" "${DEFAULT_NDPI_INCLUDE}" ${NDPID_DEPS_INC})
target_link_libraries(nDPId "${STATIC_LIBNDPI_LIB}" "${pkgcfg_lib_PCAP_pcap}" "${pkgcfg_lib_NDPI_ndpi}"
"${pkgcfg_lib_PCRE_pcre}" "${pkgcfg_lib_MAXMINDDB_maxminddb}" "${pkgcfg_lib_ZLIB_z}"
"${GCRYPT_LIBRARY}" "${GCRYPT_ERROR_LIBRARY}" "${PCAP_LIBRARY}" "${LIBM_LIB}"
"-pthread")
target_compile_definitions(nDPId PRIVATE -D_GNU_SOURCE=1 -DPKG_VERSION=\"${PKG_VERSION}\" -DGIT_VERSION=\"${GIT_VERSION}\"
${NDPID_DEFS} ${EPOLL_DEFS} ${ZLIB_DEFS} ${PFRING_DEFS} ${OSSL_DEFS})
target_include_directories(nDPId PRIVATE "${STATIC_LIBNDPI_INC}" "${DEFAULT_NDPI_INCLUDE}" ${NDPID_DEPS_INC} ${PFRING_KERNEL_INC} ${PFRING_INC})
target_link_libraries(nDPId "${STATIC_LIBNDPI_LIB}" "${STATIC_PFRING_LIB}" "${pkgcfg_lib_PCAP_pcap}" "${pkgcfg_lib_NDPI_ndpi}"
"${pkgcfg_lib_PCRE_pcre2-8}" "${pkgcfg_lib_MAXMINDDB_maxminddb}" "${pkgcfg_lib_ZLIB_z}"
"${GCRYPT_LIBRARY}" "${GCRYPT_ERROR_LIBRARY}" "${PCAP_LIBRARY}" "${LIBM_LIB}" "${PF_RING_LIB}"
"${OSSL_LIBRARY}" "-pthread")
target_compile_definitions(nDPIsrvd PRIVATE -D_GNU_SOURCE=1 -DPKG_VERSION=\"${PKG_VERSION}\" -DGIT_VERSION=\"${GIT_VERSION}\" ${NDPID_DEFS} ${EPOLL_DEFS})
target_include_directories(nDPIsrvd PRIVATE ${NDPID_DEPS_INC})
target_include_directories(nDPId-test PRIVATE ${NDPID_DEPS_INC})
target_compile_options(nDPId-test PRIVATE "-Wno-unused-function" "-pthread")
target_compile_definitions(nDPId-test PRIVATE -D_GNU_SOURCE=1 -DNO_MAIN=1 -DPKG_VERSION=\"${PKG_VERSION}\" -DGIT_VERSION=\"${GIT_VERSION}\"
${NDPID_DEFS} ${EPOLL_DEFS} ${ZLIB_DEFS} ${NDPID_TEST_MPROF_DEFS})
${NDPID_DEFS} ${EPOLL_DEFS} ${ZLIB_DEFS} ${PFRING_DEFS} ${OSSL_DEFS} ${NDPID_TEST_MPROF_DEFS})
target_include_directories(nDPId-test PRIVATE
"${STATIC_LIBNDPI_INC}" "${DEFAULT_NDPI_INCLUDE}" ${NDPID_DEPS_INC})
target_link_libraries(nDPId-test "${STATIC_LIBNDPI_LIB}" "${pkgcfg_lib_PCAP_pcap}" "${pkgcfg_lib_NDPI_ndpi}"
"${pkgcfg_lib_PCRE_pcre}" "${pkgcfg_lib_MAXMINDDB_maxminddb}" "${pkgcfg_lib_ZLIB_z}"
"${GCRYPT_LIBRARY}" "${GCRYPT_ERROR_LIBRARY}" "${PCAP_LIBRARY}" "${LIBM_LIB}"
"-pthread")
"${STATIC_LIBNDPI_INC}" "${DEFAULT_NDPI_INCLUDE}" ${NDPID_DEPS_INC} ${PFRING_KERNEL_INC} ${PFRING_INC})
target_link_libraries(nDPId-test "${STATIC_LIBNDPI_LIB}" "${STATIC_PFRING_LIB}" "${pkgcfg_lib_PCAP_pcap}" "${pkgcfg_lib_NDPI_ndpi}"
"${pkgcfg_lib_PCRE_pcre2-8}" "${pkgcfg_lib_MAXMINDDB_maxminddb}" "${pkgcfg_lib_ZLIB_z}"
"${GCRYPT_LIBRARY}" "${GCRYPT_ERROR_LIBRARY}" "${PCAP_LIBRARY}" "${LIBM_LIB}" "${PF_RING_LIB}"
"${OSSL_LIBRARY}" "-pthread")
if(CMAKE_C_COMPILER_ID STREQUAL "Clang")
add_executable(fuzz_ndpi_process_packet test/fuzz_ndpi_process_packet.c)
if(BUILD_NDPI)
add_dependencies(fuzz_ndpi_process_packet libnDPI)
endif()
target_compile_options(fuzz_ndpi_process_packet PRIVATE "-Wno-unused-function" "-fsanitize=fuzzer" "-pthread")
target_compile_definitions(fuzz_ndpi_process_packet PRIVATE -D_GNU_SOURCE=1
-DPKG_VERSION=\"${PKG_VERSION}\" -DGIT_VERSION=\"${GIT_VERSION}\"
${NDPID_DEFS} ${EPOLL_DEFS} ${ZLIB_DEFS} ${PFRING_DEFS})
target_include_directories(fuzz_ndpi_process_packet PRIVATE "${STATIC_LIBNDPI_INC}" "${DEFAULT_NDPI_INCLUDE}"
${NDPID_DEPS_INC} ${PFRING_KERNEL_INC} ${PFRING_INC})
target_link_libraries(fuzz_ndpi_process_packet "${STATIC_LIBNDPI_LIB}" "${STATIC_PFRING_LIB}" "${pkgcfg_lib_PCAP_pcap}" "${pkgcfg_lib_NDPI_ndpi}"
"${pkgcfg_lib_PCRE_pcre2-8}" "${pkgcfg_lib_MAXMINDDB_maxminddb}" "${pkgcfg_lib_ZLIB_z}"
"${GCRYPT_LIBRARY}" "${GCRYPT_ERROR_LIBRARY}" "${PCAP_LIBRARY}" "${LIBM_LIB}" "${PF_RING_LIB}"
"-pthread")
target_link_options(fuzz_ndpi_process_packet PRIVATE "-fsanitize=fuzzer")
endif()
if(BUILD_EXAMPLES)
add_executable(nDPIsrvd-collectd examples/c-collectd/c-collectd.c)
add_executable(nDPIsrvd-collectd examples/c-collectd/c-collectd.c utils.c)
if(BUILD_NDPI)
add_dependencies(nDPIsrvd-collectd libnDPI)
endif()
@@ -372,23 +517,26 @@ if(BUILD_EXAMPLES)
target_include_directories(nDPIsrvd-captured PRIVATE
"${STATIC_LIBNDPI_INC}" "${DEFAULT_NDPI_INCLUDE}" "${CMAKE_SOURCE_DIR}" ${NDPID_DEPS_INC})
target_link_libraries(nDPIsrvd-captured "${pkgcfg_lib_PCAP_pcap}" "${pkgcfg_lib_NDPI_ndpi}"
"${pkgcfg_lib_PCRE_pcre}" "${pkgcfg_lib_MAXMINDDB_maxminddb}"
"${pkgcfg_lib_PCRE_pcre2-8}" "${pkgcfg_lib_MAXMINDDB_maxminddb}"
"${GCRYPT_LIBRARY}" "${GCRYPT_ERROR_LIBRARY}" "${PCAP_LIBRARY}")
add_executable(nDPIsrvd-json-dump examples/c-json-stdout/c-json-stdout.c)
target_compile_definitions(nDPIsrvd-json-dump PRIVATE ${NDPID_DEFS})
target_include_directories(nDPIsrvd-json-dump PRIVATE ${NDPID_DEPS_INC})
add_executable(nDPIsrvd-analysed examples/c-analysed/c-analysed.c utils.c)
if(BUILD_NDPI)
add_dependencies(nDPIsrvd-analysed libnDPI)
endif()
target_compile_definitions(nDPIsrvd-analysed PRIVATE ${NDPID_DEFS})
target_include_directories(nDPIsrvd-analysed PRIVATE ${NDPID_DEPS_INC})
target_include_directories(nDPIsrvd-analysed PRIVATE
"${STATIC_LIBNDPI_INC}" "${DEFAULT_NDPI_INCLUDE}" "${CMAKE_SOURCE_DIR}" ${NDPID_DEPS_INC})
add_executable(nDPIsrvd-simple examples/c-simple/c-simple.c)
target_compile_definitions(nDPIsrvd-simple PRIVATE ${NDPID_DEFS})
target_include_directories(nDPIsrvd-simple PRIVATE ${NDPID_DEPS_INC})
if(ENABLE_COVERAGE)
add_dependencies(coverage nDPIsrvd-analysed nDPIsrvd-collectd nDPIsrvd-captured nDPIsrvd-json-dump nDPIsrvd-simple)
add_dependencies(coverage nDPIsrvd-analysed nDPIsrvd-collectd nDPIsrvd-captured nDPIsrvd-simple)
if(BUILD_NDPI)
add_dependencies(coverage libnDPI)
endif()
endif()
if(ENABLE_DBUS)
@@ -396,6 +544,7 @@ if(BUILD_EXAMPLES)
if(BUILD_NDPI)
add_dependencies(nDPIsrvd-notifyd libnDPI)
endif()
target_compile_definitions(nDPIsrvd-notifyd PRIVATE ${NDPID_DEFS})
target_include_directories(nDPIsrvd-notifyd PRIVATE
"${STATIC_LIBNDPI_INC}" "${DEFAULT_NDPI_INCLUDE}" "${CMAKE_SOURCE_DIR}" "${NDPID_DEPS_INC}"
"${DBUS_INCLUDE_DIRS}")
@@ -403,16 +552,43 @@ if(BUILD_EXAMPLES)
install(TARGETS nDPIsrvd-notifyd DESTINATION bin)
endif()
install(TARGETS nDPIsrvd-analysed nDPIsrvd-collectd nDPIsrvd-captured nDPIsrvd-json-dump nDPIsrvd-simple DESTINATION bin)
if(ENABLE_CURL)
add_executable(nDPIsrvd-influxd examples/c-influxd/c-influxd.c utils.c)
if(BUILD_NDPI)
add_dependencies(nDPIsrvd-influxd libnDPI)
endif()
target_compile_definitions(nDPIsrvd-influxd PRIVATE ${NDPID_DEFS})
target_include_directories(nDPIsrvd-influxd PRIVATE
"${STATIC_LIBNDPI_INC}" "${DEFAULT_NDPI_INCLUDE}" "${CMAKE_SOURCE_DIR}" "${NDPID_DEPS_INC}"
"${CURL_INCLUDE_DIRS}")
target_link_libraries(nDPIsrvd-influxd "${CURL_LIBRARIES}")
install(TARGETS nDPIsrvd-influxd DESTINATION bin)
endif()
install(TARGETS nDPIsrvd-analysed nDPIsrvd-collectd nDPIsrvd-captured nDPIsrvd-simple DESTINATION bin)
install(FILES examples/c-collectd/plugin_nDPIsrvd.conf examples/c-collectd/rrdgraph.sh DESTINATION share/nDPId/nDPIsrvd-collectd)
install(DIRECTORY examples/c-collectd/www DESTINATION share/nDPId/nDPIsrvd-collectd)
endif()
if(BUILD_RUST_EXAMPLES)
add_custom_command(
OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/target/release/rs-simple
COMMAND cargo build --release
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/examples/rs-simple
COMMENT "Build Rust executable with cargo: rs-simple"
)
add_custom_target(rs-simple ALL
DEPENDS ${CMAKE_CURRENT_BINARY_DIR}/target/release/rs-simple
)
endif()
if(ENABLE_SYSTEMD)
configure_file(packages/systemd/ndpisrvd.service.in ndpisrvd.service @ONLY)
configure_file(packages/systemd/ndpid@.service.in ndpid@.service @ONLY)
install(FILES packages/systemd/default.cfg DESTINATION etc/default RENAME ndpid)
install(DIRECTORY DESTINATION etc/nDPId)
install(FILES "ndpid.conf.example" DESTINATION share/nDPId)
install(FILES "ndpisrvd.conf.example" DESTINATION share/nDPId)
install(FILES "${CMAKE_BINARY_DIR}/ndpisrvd.service" DESTINATION lib/systemd/system)
install(FILES "${CMAKE_BINARY_DIR}/ndpid@.service" DESTINATION lib/systemd/system)
endif()
@@ -430,14 +606,11 @@ install(FILES config.h
install(TARGETS nDPId DESTINATION sbin)
install(TARGETS nDPIsrvd nDPId-test DESTINATION bin)
if(BUILD_EXAMPLES)
install(FILES dependencies/nDPIsrvd.py examples/py-flow-dashboard/plotly_dash.py
install(FILES dependencies/nDPIsrvd.py
DESTINATION share/nDPId)
install(FILES examples/py-flow-info/flow-info.py
DESTINATION bin RENAME nDPIsrvd-flow-info.py
PERMISSIONS OWNER_READ OWNER_EXECUTE GROUP_READ GROUP_EXECUTE WORLD_READ WORLD_EXECUTE)
install(FILES examples/py-flow-dashboard/flow-dash.py
DESTINATION bin RENAME nDPIsrvd-flow-dash.py
PERMISSIONS OWNER_READ OWNER_EXECUTE GROUP_READ GROUP_EXECUTE WORLD_READ WORLD_EXECUTE)
install(FILES examples/py-json-stdout/json-stdout.py
DESTINATION bin RENAME nDPIsrvd-json-stdout.py
PERMISSIONS OWNER_READ OWNER_EXECUTE GROUP_READ GROUP_EXECUTE WORLD_READ WORLD_EXECUTE)
@@ -461,32 +634,53 @@ message(STATUS "CMAKE_BUILD_TYPE.........: ${CMAKE_BUILD_TYPE}")
message(STATUS "CMAKE_C_FLAGS............: ${CMAKE_C_FLAGS}")
message(STATUS "NDPID_DEFS...............: ${NDPID_DEFS}")
message(STATUS "FORCE_POLL...............: ${FORCE_POLL}")
message(STATUS "ENABLE_PFRING............: ${ENABLE_PFRING}")
if(ENABLE_PFRING)
message(STATUS "PFRING_LINK_STATIC.......: ${PFRING_LINK_STATIC}")
endif()
message(STATUS "ENABLE_CRYPTO............: ${ENABLE_CRYPTO}")
message(STATUS "ENABLE_COVERAGE..........: ${ENABLE_COVERAGE}")
message(STATUS "ENABLE_SANITIZER.........: ${ENABLE_SANITIZER}")
message(STATUS "ENABLE_SANITIZER_THREAD..: ${ENABLE_SANITIZER_THREAD}")
message(STATUS "ENABLE_MEMORY_PROFILING..: ${ENABLE_MEMORY_PROFILING}")
message(STATUS "ENABLE_ZLIB..............: ${ENABLE_ZLIB}")
if(STATIC_LIBNDPI_INSTALLDIR)
message(STATUS "STATIC_LIBNDPI_INSTALLDIR: ${STATIC_LIBNDPI_INSTALLDIR}")
endif()
message(STATUS "BUILD_NDPI...............: ${BUILD_NDPI}")
message(STATUS "BUILD_EXAMPLES...........: ${BUILD_EXAMPLES}")
message(STATUS "BUILD_RUST_EXAMPLES......: ${BUILD_RUST_EXAMPLES}")
if(BUILD_EXAMPLES)
message(STATUS "ENABLE_DBUS..............: ${ENABLE_DBUS}")
message(STATUS "ENABLE_CURL..............: ${ENABLE_CURL}")
endif()
if(BUILD_NDPI)
message(STATUS "NDPI_ADDITIONAL_ARGS.....: ${NDPI_ADDITIONAL_ARGS}")
endif()
message(STATUS "NDPI_NO_PKGCONFIG........: ${NDPI_NO_PKGCONFIG}")
message(STATUS "--------------------------")
if(PFRING_INSTALLDIR)
message(STATUS "PFRING_INSTALLDIR........: ${PFRING_INSTALLDIR}")
message(STATUS "- PFRING_INC.............: ${PFRING_INC}")
message(STATUS "- PFRING_KERNEL_INC......: ${PFRING_KERNEL_INC}")
message(STATUS "- STATIC_PFRING_LIB......: ${STATIC_PFRING_LIB}")
message(STATUS "- SHARED_PFRING_LIB......: ${PF_RING_LIB}")
message(STATUS "--------------------------")
endif()
if(STATIC_LIBNDPI_INSTALLDIR)
message(STATUS "STATIC_LIBNDPI_INSTALLDIR: ${STATIC_LIBNDPI_INSTALLDIR}")
endif()
if(STATIC_LIBNDPI_INSTALLDIR OR BUILD_NDPI OR NDPI_NO_PKGCONFIG)
message(STATUS "- STATIC_LIBNDPI_INC....: ${STATIC_LIBNDPI_INC}")
message(STATUS "- STATIC_LIBNDPI_LIB....: ${STATIC_LIBNDPI_LIB}")
message(STATUS "- NDPI_WITH_GCRYPT......: ${NDPI_WITH_GCRYPT}")
message(STATUS "- NDPI_WITH_PCRE........: ${NDPI_WITH_PCRE}")
message(STATUS "- NDPI_WITH_MAXMINDDB...: ${NDPI_WITH_MAXMINDDB}")
message(STATUS "- STATIC_LIBNDPI_INC.....: ${STATIC_LIBNDPI_INC}")
message(STATUS "- STATIC_LIBNDPI_LIB.....: ${STATIC_LIBNDPI_LIB}")
message(STATUS "- NDPI_WITH_GCRYPT.......: ${NDPI_WITH_GCRYPT}")
message(STATUS "- NDPI_WITH_PCRE.........: ${NDPI_WITH_PCRE}")
message(STATUS "- NDPI_WITH_MAXMINDDB....: ${NDPI_WITH_MAXMINDDB}")
endif()
if(NOT STATIC_LIBNDPI_INSTALLDIR AND NOT BUILD_NDPI)
message(STATUS "- DEFAULT_NDPI_INCLUDE..: ${DEFAULT_NDPI_INCLUDE}")
message(STATUS "- DEFAULT_NDPI_INCLUDE...: ${DEFAULT_NDPI_INCLUDE}")
endif()
if(NOT NDPI_NO_PKGCONFIG)
message(STATUS "- pkgcfg_lib_NDPI_ndpi..: ${pkgcfg_lib_NDPI_ndpi}")
message(STATUS "- pkgcfg_lib_NDPI_ndpi...: ${pkgcfg_lib_NDPI_ndpi}")
endif()
message(STATUS "--------------------------")
if(CMAKE_C_COMPILER_ID STREQUAL "Clang")
message(STATUS "Fuzzing enabled")
endif()


@@ -1,21 +1,46 @@
FROM ubuntu:22.04 AS builder
FROM ubuntu:22.04 AS builder-ubuntu-2204
WORKDIR /root
RUN apt-get -y update && apt-get install -y --no-install-recommends autoconf automake build-essential ca-certificates wget unzip git make cmake pkg-config libpcap-dev autoconf libtool && apt-get clean
RUN git clone https://github.com/utoni/nDPId.git
RUN apt-get -y update \
&& apt-get install -y --no-install-recommends \
autoconf automake build-essential ca-certificates cmake git \
libpcap-dev libcurl4-openssl-dev libdbus-1-dev libtool make pkg-config unzip wget \
&& apt-get clean \
&& git clone https://github.com/utoni/nDPId.git
WORKDIR /root/nDPId
RUN cmake -S . -B build -DBUILD_NDPI=ON && cmake --build build --verbose
RUN cmake -S . -B build -DBUILD_NDPI=ON -DBUILD_EXAMPLES=ON \
-DENABLE_DBUS=ON -DENABLE_CURL=ON \
&& cmake --build build --verbose
FROM ubuntu:22.04
USER root
WORKDIR /
COPY --from=builder /root/nDPId/build/nDPId /usr/sbin/nDPId
COPY --from=builder /root/nDPId/build/nDPIsrvd /usr/bin/nDPIsrvd
COPY --from=builder-ubuntu-2204 /root/nDPId/build/nDPId /usr/sbin/nDPId
COPY --from=builder-ubuntu-2204 /root/nDPId/build/nDPIsrvd /usr/bin/nDPIsrvd
COPY --from=builder-ubuntu-2204 /root/nDPId/build/nDPId-test /usr/bin/nDPId-test
COPY --from=builder-ubuntu-2204 /root/nDPId/build/nDPIsrvd-collectd /usr/bin/nDPIsrvd-collectd
COPY --from=builder-ubuntu-2204 /root/nDPId/build/nDPIsrvd-captured /usr/bin/nDPIsrvd-captured
COPY --from=builder-ubuntu-2204 /root/nDPId/build/nDPIsrvd-analysed /usr/bin/nDPIsrvd-analysed
COPY --from=builder-ubuntu-2204 /root/nDPId/build/nDPIsrvd-notifyd /usr/bin/nDPIsrvd-notifyd
COPY --from=builder-ubuntu-2204 /root/nDPId/build/nDPIsrvd-influxd /usr/bin/nDPIsrvd-influxd
COPY --from=builder-ubuntu-2204 /root/nDPId/build/nDPIsrvd-simple /usr/bin/nDPIsrvd-simple
RUN apt-get -y update && apt-get install -y --no-install-recommends libpcap-dev && apt-get clean
RUN apt-get -y update \
&& apt-get install -y --no-install-recommends libpcap-dev \
&& apt-get clean
USER nobody
RUN /usr/bin/nDPIsrvd -h || { RC=$?; test ${RC} -eq 1; }
RUN /usr/sbin/nDPId -h || { RC=$?; test ${RC} -eq 1; }
RUN /usr/bin/nDPIsrvd -h || { RC=$?; test ${RC} -eq 1; }; \
/usr/sbin/nDPId -h || { RC=$?; test ${RC} -eq 1; }
FROM archlinux:base-devel AS builder-archlinux
WORKDIR /root
RUN pacman --noconfirm -Sy cmake git unzip wget && mkdir /build && chown nobody /build && cd /build \
&& runuser -u nobody git clone https://github.com/utoni/nDPId.git
WORKDIR /build/nDPId/packages/archlinux
RUN runuser -u nobody makepkg

README.md

@@ -16,26 +16,26 @@
# Disclaimer
Please respect&protect the privacy of others.
Please respect & protect the privacy of others.
The purpose of this software is not to spy on others, but to detect network anomalies and malicious traffic.
# Abstract
nDPId is a set of daemons and tools to capture, process and classify network traffic.
It's minimal dependencies (besides a half-way modern c library and POSIX threads) are libnDPI (>=4.8.0 or current github dev branch) and libpcap.
Its minimal dependencies (besides a reasonably modern C library and POSIX threads) are libnDPI (>=5.0.0 or the current github dev branch) and libpcap.
The daemon `nDPId` is capable of multithreaded packet processing, but without mutexes for performance reasons.
Instead synchronization is achieved by a packet distribution mechanism.
To balance all workload to all threads (more or less) equally a unique identifier represented as hash value is calculated using a 3-tuple consisting of IPv4/IPv6 src/dst address, IP header value of the layer4 protocol and (for TCP/UDP) src/dst port. Other protocols e.g. ICMP/ICMPv6 are lacking relevance for DPI, thus nDPId does not distinguish between different ICMP/ICMPv6 flows coming from the same host. Saves memory and performance, but might change in the future.
Instead, synchronization is achieved by a packet distribution mechanism.
To balance the workload to all threads (more or less) equally, a unique identifier represented as hash value is calculated using a 3-tuple consisting of: IPv4/IPv6 src/dst address; IP header value of the layer4 protocol; and (for TCP/UDP) src/dst port. Other protocols e.g. ICMP/ICMPv6 lack relevance for DPI, thus nDPId does not distinguish between different ICMP/ICMPv6 flows coming from the same host. This saves memory and performance, but might change in the future.
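The distribution scheme described above can be sketched in a few lines of Python (a hypothetical illustration, not the project's actual C implementation; function and parameter names are invented):

```python
def thread_for_flow(src_ip: str, dst_ip: str, l4_proto: int,
                    src_port: int = 0, dst_port: int = 0,
                    n_threads: int = 4) -> int:
    """Map a flow onto a worker thread via a direction-independent hash."""
    # Sort the endpoints so that (src, dst) and (dst, src) hash identically,
    # i.e. both directions of the same flow land on the same thread.
    a, b = sorted([(src_ip, src_port), (dst_ip, dst_port)])
    return hash((a, b, l4_proto)) % n_threads

# Both directions of the same TCP flow map to the same thread:
fwd = thread_for_flow("10.0.0.1", "10.0.0.2", 6, 12345, 443)
rev = thread_for_flow("10.0.0.2", "10.0.0.1", 6, 443, 12345)
assert fwd == rev
```

Note that the per-thread mapping only needs to be stable within one process; both packet directions hashing to the same worker is what makes lock-free per-thread flow tables possible.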
`nDPId` uses libnDPI's JSON serialization interface to generate a JSON strings for each event it receive from the library and which it then sends out to a UNIX-socket (default: /tmp/ndpid-collector.sock ). From such a socket, `nDPIsrvd` (or other custom applications) can retrieve incoming JSON-messages and further proceed working/distributing messages to higher-level applications.
`nDPId` uses libnDPI's JSON serialization interface to generate a JSON message for each event it receives from the library, which it then sends out to a UNIX-socket (default: `/tmp/ndpid-collector.sock`). From such a socket, `nDPIsrvd` (or other custom applications) can retrieve incoming JSON-messages and further process/distribute them to higher-level applications.
Unfortunately `nDPIsrvd` does currently not support any encryption/authentication for TCP connections (TODO!).
Unfortunately, `nDPIsrvd` does not yet support any encryption/authentication for TCP connections (TODO!).
# Architecture
This project uses some kind of microservice architecture.
This project uses a kind of microservice architecture.
```text
connect to UNIX socket [1] connect to UNIX/TCP socket [2]
@@ -71,30 +71,30 @@ where:
JSON messages streamed by both `nDPId` and `nDPIsrvd` are presented with:
* a 5-digit-number describing (as decimal number) of the **entire** JSON string including the newline `\n` at the end;
* a 5-digit decimal number giving the length of the **entire** JSON message including the newline `\n` at the end;
* the JSON messages
```text
[5-digit-number][JSON string]
[5-digit-number][JSON message]
```
as with the following example:
```text
01223{"flow_event_id":7,"flow_event_name":"detection-update","thread_id":12,"packet_id":307,"source":"wlan0",[...]}
00458{"packet_event_id":2,"packet_event_name":"packet-flow","thread_id":11,"packet_id":324,"source":"wlan0",[...]]}
00572{"flow_event_id":1,"flow_event_name":"new","thread_id":11,"packet_id":324,"source":"wlan0",[...]}
01223{"flow_event_id":7,"flow_event_name":"detection-update","thread_id":12,"packet_id":307,"source":"wlan0", ...snip...}
00458{"packet_event_id":2,"packet_event_name":"packet-flow","thread_id":11,"packet_id":324,"source":"wlan0", ...snip...}
00572{"flow_event_id":1,"flow_event_name":"new","thread_id":11,"packet_id":324,"source":"wlan0", ...snip...}
```
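A minimal parser for this framing could look like the following Python sketch (a hypothetical helper, not part of the project; it only assumes the 5-digit length prefix described above):

```python
import json

def parse_ndpid_stream(buf: bytes):
    """Yield JSON objects from a length-prefixed nDPId byte stream."""
    while len(buf) >= 5:
        # The 5-digit decimal prefix counts the whole JSON string,
        # including its terminating newline.
        length = int(buf[:5])
        if len(buf) < 5 + length:
            break  # incomplete message; wait for more data
        yield json.loads(buf[5:5 + length])
        buf = buf[5 + length:]

payload = b'{"flow_event_name":"new"}\n'
framed = b"%05d%s" % (len(payload), payload)
events = list(parse_ndpid_stream(framed * 2))
assert events[0]["flow_event_name"] == "new"
```

A real consumer would keep the unconsumed tail of `buf` across socket reads; `dependencies/nDPIsrvd.py` handles this buffering for Python clients.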
The full stream of `nDPId` generated JSON-events can be retrieved directly from `nDPId`, without relying on `nDPIsrvd`, by providing a properly managed UNIX-socket.
Technical details about JSON-messages format can be obtained from related `.schema` file included in the `schema` directory
Technical details about the JSON-message format can be obtained from the related `.schema` files included in the `schema` directory.
# Events
`nDPId` generates JSON strings whereas each string is assigned to a certain event.
Those events specify the contents (key-value-pairs) of the JSON string.
`nDPId` generates JSON messages where each message is assigned to a certain event.
Those events specify the contents (key-value pairs) of the JSON message.
They are divided into four categories, each with a number of subevents.
## Error Events
@@ -132,10 +132,10 @@ Detailed JSON-schema is available [here](schema/daemon_event_schema.json)
## Packet Events
There are 2 events containing base64 encoded packet payload either belonging to a flow or not:
There are 2 events containing base64 encoded packet payloads either belonging to a flow or not:
1. packet: does not belong to any flow
2. packet-flow: does belong to a flow e.g. TCP/UDP or ICMP
2. packet-flow: belongs to a flow e.g. TCP/UDP or ICMP
Detailed JSON-schema is available [here](schema/packet_event_schema.json)
@@ -143,11 +143,11 @@ Detailed JSON-schema is available [here](schema/packet_event_schema.json)
There are 9 distinct events related to a flow:
1. new: a new TCP/UDP/ICMP flow seen which will be tracked
2. end: a TCP connections terminates
2. end: a TCP connection terminates
3. idle: a flow timed out, because there was no packet on the wire for a certain amount of time
4. update: inform nDPIsrvd or other apps about a long-lasting flow, whose detection was finished a long time ago but is still active
5. analyse: provide some information about extracted features of a flow (Experimental; disabled per default, enable with `-A`)
6. guessed: `libnDPI` was not able to reliable detect a layer7 protocol and falls back to IP/Port based detection
6. guessed: `libnDPI` was not able to reliably detect a layer7 protocol and falls back to IP/Port based detection
7. detected: `libnDPI` successfully detected a layer7 protocol
8. detection-update: `libnDPI` dissected more layer7 protocol data (after detection had already completed)
9. not-detected: neither detected nor guessed
@@ -158,8 +158,9 @@ Detailed JSON-schema is available [here](schema/flow_event_schema.json). Also, a
A flow can have three different states while it is being tracked by `nDPId`.
1. skipped: the flow will be tracked, but no detection will happen to safe memory, see command line argument `-I` and `-E`
2. finished: detection finished and the memory used for the detection is free'd
1. skipped: the flow will be tracked, but no detection will happen to reduce memory usage.
See command line arguments `-I` and `-E`
2. finished: detection finished and the memory used for the detection is freed
3. info: detection is in progress and all flow memory required for `libnDPI` is allocated (this state consumes most memory)
# Build (CMake)
@@ -181,7 +182,7 @@ see below for a full/test live-session
![](examples/ndpid_install_and_run.gif)
Based on your building environment and/or desiderata, you could need:
Based on your build environment and/or requirements, you may need:
```shell
mkdir build
@@ -192,43 +193,49 @@ ccmake ..
or to build with a staticially linked libnDPI:
```shell
mkdir build
cd build
cmake .. -DSTATIC_LIBNDPI_INSTALLDIR=[path/to/your/libnDPI/installdir]
cmake -S . -B ./build \
-DSTATIC_LIBNDPI_INSTALLDIR=[path/to/your/libnDPI/installdir] \
-DNDPI_NO_PKGCONFIG=ON
cmake --build ./build
```
If you're using the latter one, make sure that you've configured libnDPI with `./configure --prefix=[path/to/your/libnDPI/installdir]`
and do not forget to set the all necessary CMake variables to link against shared libraries used by your nDPI build.
If you use the latter, make sure that you've configured libnDPI with `./configure --prefix=[path/to/your/libnDPI/installdir]`
and remember to set all the necessary CMake variables to link against the shared libraries used by your nDPI build.
You'll also need to use `-DNDPI_NO_PKGCONFIG=ON` if `STATIC_LIBNDPI_INSTALLDIR` does not contain a pkg-config file.
e.g.:
```shell
mkdir build
cd build
cmake .. -DSTATIC_LIBNDPI_INSTALLDIR=[path/to/your/libnDPI/installdir] -DNDPI_WITH_GCRYPT=ON -DNDPI_WITH_PCRE=OFF -DNDPI_WITH_MAXMINDDB=OFF
cmake -S . -B ./build \
-DSTATIC_LIBNDPI_INSTALLDIR=[path/to/your/libnDPI/installdir] \
-DNDPI_NO_PKGCONFIG=ON \
-DNDPI_WITH_GCRYPT=ON -DNDPI_WITH_PCRE=OFF -DNDPI_WITH_MAXMINDDB=OFF
cmake --build ./build
```
Or let a shell script do the work for you:
```shell
mkdir build
cd build
cmake .. -DBUILD_NDPI=ON
cmake -S . -B ./build \
-DBUILD_NDPI=ON
cmake --build ./build
```
The CMake cache variable `-DBUILD_NDPI=ON` builds a version of `libnDPI` residing as git submodule in this repository.
The CMake cache variable `-DBUILD_NDPI=ON` builds a version of `libnDPI` residing as a git submodule in this repository.
# run
As mentioned above, in order to run `nDPId` a UNIX-socket need to be provided in order to stream our related JSON-data.
As mentioned above, in order to run `nDPId`, a UNIX-socket needs to be provided to which it can stream its JSON data.
Such a UNIX-socket can be provided either by the included `nDPIsrvd` daemon or, if you simply need a quick check, by the [ncat](https://nmap.org/book/ncat-man.html) utility, with a simple `ncat -U /tmp/listen.sock -l -k`. Remember that OpenBSD `netcat` is not able to handle multiple connections reliably.
Once the socket is ready, you can run `nDPId` capturing and analyzing your own traffic, with something similar to:
Once the socket is ready, you can run `nDPId` to capture and analyze your own traffic, with something similar to: `sudo nDPId -c /tmp/listen.sock`
If you're using OpenBSD `netcat`, you need to run: `sudo nDPId -c /tmp/listen.sock -o max-reader-threads=1`
Make sure that the UNIX socket is accessible by the user (see `-u`) that `nDPId` switches to (default: nobody).
Of course, both `ncat` and `nDPId` need to point to the same UNIX-socket (`nDPId` provides the `-c` option, exactly for this. As a default, `nDPId` refer to `/tmp/ndpid-collector.sock`, and the same default-path is also used by `nDPIsrvd` as for the incoming socket).
Of course, both `ncat` and `nDPId` need to point to the same UNIX-socket (`nDPId` provides the `-c` option exactly for this; by default, `nDPId` uses `/tmp/ndpid-collector.sock`, and the same default path is also used by `nDPIsrvd` for its incoming socket).
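For illustration, the listening side that `ncat -U /tmp/listen.sock -l -k` provides can be reproduced with a few lines of Python (a hypothetical stand-in; the client below simulates `nDPId` connecting and sending one framed message):

```python
import os
import socket
import tempfile

# Bind a listening UNIX socket, as `ncat -U <path> -l -k` would.
path = os.path.join(tempfile.mkdtemp(), "listen.sock")
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(path)
srv.listen(1)

# Stand-in for `nDPId -c <path>` connecting and streaming one message.
client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(path)
conn, _ = srv.accept()
client.sendall(b'00026{"flow_event_name":"new"}\n')
data = conn.recv(64)
assert data.startswith(b"00026")
```

In a real deployment the socket path must be accessible to the unprivileged user `nDPId` drops to, which is why the `-u` remark above matters.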
Give `nDPId` some real traffic. You can capture your own traffic with something similar to:
```shell
socat -u UNIX-Listen:/tmp/listen.sock,fork - # does the same as `ncat`
@@ -256,7 +263,7 @@ Daemons:
make -C [path-to-a-build-dir] daemon
```
Or a manual approach with:
```shell
./nDPIsrvd -d
@@ -274,11 +281,6 @@ And why not a flow-info example?
./examples/py-flow-info/flow-info.py
```
or
```shell
./nDPIsrvd-json-dump
```
or anything below `./examples`.
# nDPId tuning
@@ -291,47 +293,56 @@ Suboptions for `-o`:
Format: `subopt` (unit, comment): description
* `max-flows-per-thread` (N, caution advised): affects max. memory usage
* `max-idle-flows-per-thread` (N, safe): max. allowed idle flows whose memory gets freed after `flow-scan-interval`
* `max-reader-threads` (N, safe): number of packet processing threads; every thread can have a max. of `max-flows-per-thread` flows
* `daemon-status-interval` (ms, safe): specifies how often the daemon event `status` is generated
* `compression-scan-interval` (ms, untested): specifies how often `nDPId` scans for inactive flows ready for compression
* `compression-flow-inactivity` (ms, untested): the shortest period of time that must elapse before `nDPId` considers compressing a flow (e.g. the nDPI flow struct) that neither sent nor received any data
* `flow-scan-interval` (ms, safe): min. amount of time after which `nDPId` scans for idle or long-lasting flows
* `generic-max-idle-time` (ms, untested): time after which a non-TCP/UDP/ICMP flow times out
* `icmp-max-idle-time` (ms, untested): time after which an ICMP flow times out
* `udp-max-idle-time` (ms, caution advised): time after which a UDP flow times out
* `tcp-max-idle-time` (ms, caution advised): time after which a TCP flow times out
* `tcp-max-post-end-flow-time` (ms, caution advised): a TCP flow that received a FIN or RST waits this amount of time before flow tracking stops and the flow memory is freed
* `max-packets-per-flow-to-send` (N, safe): max. `packet-flow` events generated for the first N packets of each flow
* `max-packets-per-flow-to-process` (N, caution advised): max. amount of packets processed by `libnDPI`
* `max-packets-per-flow-to-analyze` (N, safe): max. packets to analyze before sending an `analyse` event, requires `-A`
* `error-event-threshold-n` (N, safe): max. error events to send until the threshold time has passed
* `error-event-threshold-time` (N, safe): time after which the error event threshold resets
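As a sketch, several of the suboptions above can be combined in a single invocation. The values below are purely illustrative, not tuning recommendations:

```shell
# Illustrative values only: tune to your own traffic profile.
sudo nDPId -c /tmp/ndpid-collector.sock \
    -o max-reader-threads=4 \
    -o flow-scan-interval=10000 \
    -o tcp-max-idle-time=600000 \
    -o max-packets-per-flow-to-send=15
```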
# test
The recommended way to run regression / diff tests:
```shell
cmake -S . -B ./build-like-ci \
-DBUILD_NDPI=ON -DENABLE_ZLIB=ON -DBUILD_EXAMPLES=ON
# optional: -DENABLE_CURL=ON -DENABLE_SANITIZER=ON
./test/run_tests.sh ./libnDPI ./build-like-ci/nDPId-test
# or: make -C ./build-like-ci test
```
Alternatively you can run some integration tests manually:
`./test/run_tests.sh [/path/to/libnDPI/root/directory] [/path/to/nDPId-test]`
e.g.:
`./test/run_tests.sh ${HOME}/git/nDPI ${HOME}/git/nDPId/build/nDPId-test`
Run `./test/run_tests.sh` to see some usage information.
Remember that all test results are tied to a specific libnDPI commit hash
as part of the `git submodule`. Using `test/run_tests.sh` for other commit hashes
will most likely result in PCAP diffs.
Why not use `examples/py-flow-dashboard/flow-dash.py` to visualize nDPId's output?
# Code Coverage
You may generate code coverage by using:
```shell
cmake -S . -B ./build-coverage \
-DENABLE_COVERAGE=ON -DENABLE_ZLIB=ON
# optional: -DBUILD_NDPI=ON
make -C ./build-coverage coverage-clean
make -C ./build-coverage clean
make -C ./build-coverage all
./test/run_tests.sh ./libnDPI ./build-coverage/nDPId-test
make -C ./build-coverage coverage
make -C ./build-coverage coverage-view
```
# Contributors

SECURITY.md

@@ -0,0 +1,18 @@
# Security Policy
I encourage you to submit a pull request if you have a solution or fix for anything, even security vulnerabilities.
Your contributions help advance and enhance safety for all users :star:.
## Reporting a Bug :bug: :bug:
Simply use GitHub issues to report a bug, including the information needed to debug it :pencil:.
## Reporting a Vulnerability :closed_lock_with_key: :eyes:
For sensitive security issues, please email <toni@impl.cc> including the following information:
- Description of the vulnerability
- Steps to reproduce the issue
- Affected versions, i.e. release tags, git commit hashes or git branches
- If applicable, a data sample (preferably `pcap/pcapng`) to reproduce
- If known, any mitigations or fixes for the issue


@@ -1,10 +1,10 @@
# TODOs
1.7:
1.8:
* let nDPIsrvd (collector) connect to other nDPIsrvd instances (as distributor)
* nDPIsrvd GnuTLS support for TCP/IP distributor connections
* PF\_RING integration
* provide nDPId-exportd daemon which will only send captured packets to an nDPId instance running on a different machine
2.0.0:
@@ -17,3 +17,4 @@ no release plan:
* improve UDP/TCP timeout handling by reading netfilter conntrack timeouts from /proc (or just read conntrack table entries)
* detect interface / timeout changes and apply them to nDPId
* switch to MIT or BSD License
* libdaq integration


@@ -16,11 +16,13 @@
#define NETWORK_BUFFER_LENGTH_DIGITS 5u
#define NETWORK_BUFFER_LENGTH_DIGITS_STR "5"
#define PFRING_BUFFER_SIZE 65536u
#define TIME_S_TO_US(s) (s * 1000llu * 1000llu)
/* nDPId default config options */
#define nDPId_PIDFILE "/tmp/ndpid.pid"
#define nDPId_MAX_FLOWS_PER_THREAD 4096u
#define nDPId_MAX_FLOWS_PER_THREAD 65536u
#define nDPId_MAX_IDLE_FLOWS_PER_THREAD (nDPId_MAX_FLOWS_PER_THREAD / 32u)
#define nDPId_MAX_READER_THREADS 32u
#define nDPId_ERROR_EVENT_THRESHOLD_N 16u
@@ -38,7 +40,7 @@
#define nDPId_THREAD_DISTRIBUTION_SEED 0x03dd018b
#define nDPId_PACKETS_PLEN_MAX 8192u /* 8kB */
#define nDPId_PACKETS_PER_FLOW_TO_SEND 15u
#define nDPId_PACKETS_PER_FLOW_TO_PROCESS NDPI_DEFAULT_MAX_NUM_PKTS_PER_FLOW_TO_DISSECT
#define nDPId_PACKETS_PER_FLOW_TO_PROCESS 32u
#define nDPId_PACKETS_PER_FLOW_TO_ANALYZE 32u
#define nDPId_ANALYZE_PLEN_MAX 1504u
#define nDPId_ANALYZE_PLEN_BIN_LEN 32u


@@ -196,10 +196,10 @@ static int jsmn_parse_string(jsmn_parser *parser, const char *js,
jsmntok_t *token;
int start = parser->pos;
parser->pos++;
/* Skip starting quote */
parser->pos++;
for (; parser->pos < len && js[parser->pos] != '\0'; parser->pos++) {
char c = js[parser->pos];


@@ -33,8 +33,8 @@
#define nDPIsrvd_JSON_KEY_STRLEN (32)
#define nDPIsrvd_HASHKEY_SEED (0x995fd871u)
#define nDPIsrvd_ARRAY_LENGTH(s) (sizeof(s) / sizeof(s[0]))
#define nDPIsrvd_STRLEN_SZ(s) (sizeof(s) / sizeof(s[0]) - sizeof(s[0]))
#define nDPIsrvd_ARRAY_LENGTH(s) ((size_t)(sizeof(s) / sizeof(s[0])))
#define nDPIsrvd_STRLEN_SZ(s) ((size_t)((sizeof(s) / sizeof(s[0])) - sizeof(s[0])))
#define TOKEN_GET_SZ(sock, ...) nDPIsrvd_get_token(sock, __VA_ARGS__, NULL)
#define TOKEN_VALUE_EQUALS(sock, token, string_to_check, string_to_check_length) \
nDPIsrvd_token_value_equals(sock, token, string_to_check, string_to_check_length)
@@ -207,9 +207,9 @@ struct nDPIsrvd_buffer
struct nDPIsrvd_json_buffer
{
struct nDPIsrvd_buffer buf;
char * json_string;
size_t json_string_start;
nDPIsrvd_ull json_string_length;
char * json_message;
size_t json_message_start;
nDPIsrvd_ull json_message_length;
};
struct nDPIsrvd_jsmn
@@ -277,7 +277,7 @@ static inline int nDPIsrvd_base64decode(char const * in, size_t inLen, unsigned
while (in < end)
{
unsigned char c = d[*(unsigned char *)in++];
unsigned char c = d[*(unsigned char const *)in++];
switch (c)
{
@@ -402,9 +402,9 @@ static inline void nDPIsrvd_buffer_free(struct nDPIsrvd_buffer * const buffer)
static inline void nDPIsrvd_json_buffer_reset(struct nDPIsrvd_json_buffer * const json_buffer)
{
json_buffer->json_string_start = 0ul;
json_buffer->json_string_length = 0ull;
json_buffer->json_string = NULL;
json_buffer->json_message_start = 0UL;
json_buffer->json_message_length = 0ULL;
json_buffer->json_message = NULL;
}
static inline int nDPIsrvd_json_buffer_init(struct nDPIsrvd_json_buffer * const json_buffer, size_t json_buffer_size)
@@ -497,7 +497,7 @@ static inline int nDPIsrvd_set_read_timeout(struct nDPIsrvd_socket * const sock,
return 0;
}
static inline int nDPIsrvd_set_nonblock(struct nDPIsrvd_socket * const sock)
static inline int nDPIsrvd_set_nonblock(struct nDPIsrvd_socket const * const sock)
{
int flags;
@@ -655,6 +655,11 @@ static inline void nDPIsrvd_socket_free(struct nDPIsrvd_socket ** const sock)
static inline int nDPIsrvd_setup_address(struct nDPIsrvd_address * const address, char const * const destination)
{
if (address == NULL || destination == NULL)
{
return 1;
}
size_t len = strlen(destination);
char const * first_colon = strchr(destination, ':');
char const * last_colon = strrchr(destination, ':');
@@ -681,7 +686,7 @@ static inline int nDPIsrvd_setup_address(struct nDPIsrvd_address * const address
{
address->raw.sa_family = AF_INET;
address->size = sizeof(address->in);
address->in.sin_port = htons(atoi(last_colon + 1));
address->in.sin_port = htons((uint16_t)atoi(last_colon + 1));
sock_addr = &address->in.sin_addr;
if (len < 7)
@@ -693,7 +698,7 @@ static inline int nDPIsrvd_setup_address(struct nDPIsrvd_address * const address
{
address->raw.sa_family = AF_INET6;
address->size = sizeof(address->in6);
address->in6.sin6_port = htons(atoi(last_colon + 1));
address->in6.sin6_port = htons((uint16_t)atoi(last_colon + 1));
sock_addr = &address->in6.sin6_addr;
if (len < 2)
@@ -798,7 +803,7 @@ static inline enum nDPIsrvd_conversion_return str_value_to_ull(char const * cons
return CONVERSION_OK;
}
static inline nDPIsrvd_hashkey nDPIsrvd_build_key(char const * str, int len)
static inline nDPIsrvd_hashkey nDPIsrvd_build_key(char const * str, size_t len)
{
uint32_t hash = nDPIsrvd_HASHKEY_SEED;
uint32_t c;
@@ -814,11 +819,11 @@ static inline nDPIsrvd_hashkey nDPIsrvd_build_key(char const * str, int len)
static inline void nDPIsrvd_drain_buffer(struct nDPIsrvd_json_buffer * const json_buffer)
{
memmove(json_buffer->buf.ptr.raw,
json_buffer->buf.ptr.raw + json_buffer->json_string_length,
json_buffer->buf.used - json_buffer->json_string_length);
json_buffer->buf.used -= json_buffer->json_string_length;
json_buffer->json_string_length = 0;
json_buffer->json_string_start = 0;
json_buffer->buf.ptr.raw + json_buffer->json_message_length,
json_buffer->buf.used - json_buffer->json_message_length);
json_buffer->buf.used -= json_buffer->json_message_length;
json_buffer->json_message_length = 0;
json_buffer->json_message_start = 0;
}
static inline nDPIsrvd_hashkey nDPIsrvd_vbuild_jsmn_key(char const * const json_key, va_list ap)
@@ -883,7 +888,7 @@ static inline char const * nDPIsrvd_get_jsmn_token_value(struct nDPIsrvd_socket
*value_length = jt->end - jt->start;
}
return sock->buffer.json_string + jt->start;
return sock->buffer.json_message + jt->start;
}
static inline char const * nDPIsrvd_jsmn_token_to_string(struct nDPIsrvd_socket const * const sock,
@@ -905,7 +910,7 @@ static inline char const * nDPIsrvd_jsmn_token_to_string(struct nDPIsrvd_socket
*string_length = jt->end - jt->start;
}
return sock->buffer.json_string + jt->start;
return sock->buffer.json_message + jt->start;
}
static inline int nDPIsrvd_get_token_size(struct nDPIsrvd_socket const * const sock,
@@ -931,7 +936,7 @@ static inline char const * nDPIsrvd_get_token_value(struct nDPIsrvd_socket const
return NULL;
}
return sock->buffer.json_string + t->start;
return sock->buffer.json_message + t->start;
}
static inline struct nDPIsrvd_json_token const * nDPIsrvd_get_next_token(struct nDPIsrvd_socket const * const sock,
@@ -1094,19 +1099,24 @@ static inline struct nDPIsrvd_json_token * nDPIsrvd_find_token(struct nDPIsrvd_s
static inline struct nDPIsrvd_json_token * nDPIsrvd_add_token(struct nDPIsrvd_socket * const sock,
nDPIsrvd_hashkey hash_value,
int value_token_index)
size_t value_token_index)
{
struct nDPIsrvd_json_token * token = nDPIsrvd_find_token(sock, hash_value);
if (value_token_index >= nDPIsrvd_MAX_JSON_TOKENS)
{
return NULL;
}
if (token != NULL)
{
token->token_index = value_token_index;
token->token_index = (int)value_token_index;
return token;
}
else
{
struct nDPIsrvd_json_token jt = {.token_keys_hash = hash_value, .token_index = value_token_index, .hh = {}};
struct nDPIsrvd_json_token jt = {.token_keys_hash = hash_value, .token_index = (int)value_token_index, .hh = {}};
utarray_push_back(sock->json.tokens, &jt);
HASH_ADD_INT(sock->json.token_table,
@@ -1120,10 +1130,11 @@ static inline struct nDPIsrvd_json_token * nDPIsrvd_add_token(struct nDPIsrvd_so
static inline int nDPIsrvd_walk_tokens(
struct nDPIsrvd_socket * const sock, nDPIsrvd_hashkey h, size_t b, int count, uint8_t is_value, uint8_t depth)
{
int i, j;
int i;
int j;
jsmntok_t const * key;
jsmntok_t const * const t = &sock->jsmn.tokens[b];
char const * const js = sock->buffer.json_string;
char const * const js = sock->buffer.json_message;
if (depth >= 16)
{
@@ -1220,7 +1231,7 @@ static inline struct nDPIsrvd_instance * nDPIsrvd_get_instance(struct nDPIsrvd_s
}
static inline struct nDPIsrvd_thread_data * nDPIsrvd_get_thread_data(
struct nDPIsrvd_socket * const sock,
struct nDPIsrvd_socket const * const sock,
struct nDPIsrvd_instance * const instance,
struct nDPIsrvd_json_token const * const thread_id_token,
struct nDPIsrvd_json_token const * const ts_usec_token)
@@ -1236,7 +1247,7 @@ static inline struct nDPIsrvd_thread_data * nDPIsrvd_get_thread_data(
{
nDPIsrvd_ull thread_key;
TOKEN_VALUE_TO_ULL(sock, thread_id_token, &thread_key);
thread_id = thread_key;
thread_id = (nDPIsrvd_hashkey)thread_key;
}
HASH_FIND_INT(instance->thread_data_table, &thread_id, thread_data);
@@ -1444,36 +1455,36 @@ static inline enum nDPIsrvd_parse_return nDPIsrvd_parse_line(struct nDPIsrvd_jso
}
errno = 0;
json_buffer->json_string_length = strtoull((const char *)json_buffer->buf.ptr.text, &json_buffer->json_string, 10);
json_buffer->json_string_length += json_buffer->json_string - json_buffer->buf.ptr.text;
json_buffer->json_string_start = json_buffer->json_string - json_buffer->buf.ptr.text;
json_buffer->json_message_length = strtoull((const char *)json_buffer->buf.ptr.text, &json_buffer->json_message, 10);
json_buffer->json_message_length += json_buffer->json_message - json_buffer->buf.ptr.text;
json_buffer->json_message_start = json_buffer->json_message - json_buffer->buf.ptr.text;
if (errno == ERANGE)
{
return PARSE_SIZE_EXCEEDS_CONVERSION_LIMIT;
}
if (json_buffer->json_string == json_buffer->buf.ptr.text)
if (json_buffer->json_message == json_buffer->buf.ptr.text)
{
return PARSE_SIZE_MISSING;
}
if (json_buffer->json_string_length > json_buffer->buf.max)
if (json_buffer->json_message_length > json_buffer->buf.max)
{
return PARSE_STRING_TOO_BIG;
}
if (json_buffer->json_string_length > json_buffer->buf.used)
if (json_buffer->json_message_length > json_buffer->buf.used)
{
return PARSE_NEED_MORE_DATA;
}
if (json_buffer->buf.ptr.text[json_buffer->json_string_length - 2] != '}' ||
json_buffer->buf.ptr.text[json_buffer->json_string_length - 1] != '\n')
if (json_buffer->buf.ptr.text[json_buffer->json_message_length - 2] != '}' ||
json_buffer->buf.ptr.text[json_buffer->json_message_length - 1] != '\n')
{
return PARSE_INVALID_CLOSING_CHAR;
}
jsmn_init(&jsmn->parser);
jsmn->tokens_found = jsmn_parse(&jsmn->parser,
json_buffer->buf.ptr.text + json_buffer->json_string_start,
json_buffer->json_string_length - json_buffer->json_string_start,
json_buffer->buf.ptr.text + json_buffer->json_message_start,
json_buffer->json_message_length - json_buffer->json_message_start,
jsmn->tokens,
nDPIsrvd_MAX_JSON_TOKENS);
if (jsmn->tokens_found < 0 || jsmn->tokens[0].type != JSMN_OBJECT)
@@ -1685,7 +1696,7 @@ static inline int nDPIsrvd_json_buffer_length(struct nDPIsrvd_socket const * con
return 0;
}
return (int)sock->buffer.json_string_length - NETWORK_BUFFER_LENGTH_DIGITS;
return (int)sock->buffer.json_message_length - NETWORK_BUFFER_LENGTH_DIGITS;
}
static inline char const * nDPIsrvd_json_buffer_string(struct nDPIsrvd_socket const * const sock)
@@ -1695,7 +1706,7 @@ static inline char const * nDPIsrvd_json_buffer_string(struct nDPIsrvd_socket co
return NULL;
}
return sock->buffer.json_string;
return sock->buffer.json_message;
}
#endif
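The renamed `json_message` fields above implement nDPIsrvd's line framing: each message is a run of ASCII digits giving the payload length, followed by a JSON object that must end in `}\n`. A minimal consumer-side splitter can be sketched in Python under that assumption (the function name is illustrative, and the length is assumed to count the JSON text plus its trailing newline, mirroring the checks in `nDPIsrvd_parse_line`):

```python
def parse_messages(buf: bytes):
    """Split length-prefixed nDPIsrvd messages off the front of buf.

    Assumed wire format: ASCII digits giving the payload length,
    where the payload is a JSON object plus its terminating newline.
    Returns (list of JSON payloads, leftover bytes).
    """
    payloads = []
    while True:
        i = 0
        while i < len(buf) and buf[i:i + 1].isdigit():
            i += 1
        if i == 0 or i == len(buf):
            break  # no length prefix yet, or it may continue: need more data
        need = int(buf[:i])
        if len(buf) < i + need:
            break  # incomplete payload: need more data
        payload = buf[i:i + need]
        if not payload.endswith(b'}\n'):
            raise ValueError('invalid closing characters')
        payloads.append(payload[:-1])  # strip the trailing newline
        buf = buf[i + need:]
    return payloads, buf
```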


@@ -63,6 +63,10 @@ class TermColor:
global USE_COLORAMA
USE_COLORAMA = False
@staticmethod
def disableBlink():
TermColor.BLINK = ''
@staticmethod
def calcColorHash(string):
h = 0
@@ -83,7 +87,6 @@ class TermColor:
global USE_COLORAMA
if USE_COLORAMA is True:
fg_color, bg_color = TermColor.getColorsByHash(string)
color_hash = TermColor.calcColorHash(string)
return '{}{}{}{}{}'.format(Style.BRIGHT, fg_color, bg_color, string, Style.RESET_ALL)
else:
return '{}{}{}'.format(TermColor.BOLD, string, TermColor.END)
@@ -295,6 +298,7 @@ class nDPIsrvdException(Exception):
INVALID_LINE_RECEIVED = 4
CALLBACK_RETURNED_FALSE = 5
SOCKET_TIMEOUT = 6
JSON_DECODE_ERROR = 7
def __init__(self, etype):
self.etype = etype
@@ -341,6 +345,14 @@ class SocketTimeout(nDPIsrvdException):
def __str__(self):
return 'Socket timeout.'
class JsonDecodeError(nDPIsrvdException):
def __init__(self, json_exception, failed_line):
super().__init__(nDPIsrvdException.JSON_DECODE_ERROR)
self.json_exception = json_exception
self.failed_line = failed_line
def __str__(self):
return '{}: {}'.format(self.json_exception, self.failed_line)
class JsonFilter():
def __init__(self, filter_string):
self.filter_string = filter_string
@@ -456,7 +468,7 @@ class nDPIsrvdSocket:
json_dict = dict()
self.failed_lines += [received_line]
self.lines = self.lines[1:]
raise(e)
raise JsonDecodeError(e, received_line)
instance = self.flow_mgr.getInstance(json_dict)
if instance is None:
@@ -522,7 +534,8 @@ def defaultArgumentParser(desc='nDPIsrvd Python Interface', enable_json_filter=F
parser.add_argument('--unix', type=str, help='nDPIsrvd unix socket path')
if enable_json_filter is True:
parser.add_argument('--filter', type=str, action='append',
help='Set a filter string which if evaluates to True will invoke the JSON callback.')
help='Set a filter string which if evaluates to True will invoke the JSON callback.\n'
'Example: json_dict[\'flow_event_name\'] == \'detected\' will only process \'detected\' events.')
return parser
def toSeconds(usec):


@@ -0,0 +1,41 @@
name: build # This name shows up in badge.svg
on:
push: # any branch
pull_request:
branches: [ "master" ]
jobs:
build-gcc:
strategy:
matrix:
os: [ubuntu-latest, macos-latest]
runs-on: ${{ matrix.os }}
steps:
- uses: actions/checkout@v3
- run: make -C tests EXTRA_CFLAGS="-W -Wall -Wextra -Wswitch-default"
- run: make -C tests clean ; make -C tests pedantic
- run: make -C tests clean ; make -C tests pedantic EXTRA_CFLAGS=-DNO_DECLTYPE
- run: make -C tests clean ; make -C tests cplusplus
- run: make -C tests clean ; make -C tests cplusplus EXTRA_CFLAGS=-DNO_DECLTYPE
build-clang:
strategy:
matrix:
os: [ubuntu-latest, macos-latest]
runs-on: ${{ matrix.os }}
env:
CC: clang
CXX: clang++
steps:
- uses: actions/checkout@v3
- run: make -C tests EXTRA_CFLAGS="-W -Wall -Wextra -Wswitch-default"
- run: make -C tests clean ; make -C tests pedantic
- run: make -C tests clean ; make -C tests pedantic EXTRA_CFLAGS=-DNO_DECLTYPE
- run: make -C tests clean ; make -C tests cplusplus
- run: make -C tests clean ; make -C tests cplusplus EXTRA_CFLAGS=-DNO_DECLTYPE
build-asciidoc:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- run: sudo apt-get update && sudo apt-get install asciidoc -y
- run: make -C doc


@@ -1,4 +1,4 @@
Copyright (c) 2005-2021, Troy D. Hanson http://troydhanson.github.io/uthash/
Copyright (c) 2005-2022, Troy D. Hanson https://troydhanson.github.io/uthash/
All rights reserved.
Redistribution and use in source and binary forms, with or without


@@ -1,5 +1,6 @@
[![Build status](https://api.travis-ci.org/troydhanson/uthash.svg?branch=master)](https://travis-ci.org/troydhanson/uthash)
[![GitHub CI status](https://github.com/troydhanson/uthash/actions/workflows/build.yml/badge.svg)](https://github.com/troydhanson/uthash/actions/workflows/build.yml)
Documentation for uthash is available at:


@@ -13,8 +13,8 @@
</div> <!-- banner -->
<div id="topnav">
<a href="http://github.com/troydhanson/uthash">GitHub page</a> &gt;
uthash home <!-- http://troydhanson.github.io/uthash/ -->
<a href="https://github.com/troydhanson/uthash">GitHub page</a> &gt;
uthash home <!-- https://troydhanson.github.io/uthash/ -->
<a href="https://twitter.com/share" class="twitter-share-button" data-via="troydhanson">Tweet</a>
<script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0];if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src="//platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs");</script>
@@ -43,7 +43,7 @@
<h2>developer</h2>
<div><a href="http://troydhanson.github.io/">Troy D. Hanson</a></div>
<div><a href="https://troydhanson.github.io/">Troy D. Hanson</a></div>
<h2>maintainer</h2>
<div><a href="https://github.com/Quuxplusone">Arthur O'Dwyer</a></div>


@@ -13,7 +13,7 @@
</div> <!-- banner -->
<div id="topnav">
<a href="http://troydhanson.github.io/uthash/">uthash home</a> &gt;
<a href="https://troydhanson.github.io/uthash/">uthash home</a> &gt;
BSD license
</div>
@@ -21,7 +21,7 @@
<div id="mid">
<div id="main">
<pre>
Copyright (c) 2005-2021, Troy D. Hanson http://troydhanson.github.io/uthash/
Copyright (c) 2005-2022, Troy D. Hanson https://troydhanson.github.io/uthash/
All rights reserved.
Redistribution and use in source and binary forms, with or without


@@ -5,7 +5,7 @@ v2.3.0, February 2021
To download uthash, follow this link back to the
https://github.com/troydhanson/uthash[GitHub project page].
Back to my http://troydhanson.github.io/[other projects].
Back to my https://troydhanson.github.io/[other projects].
A hash in C
-----------
@@ -805,7 +805,7 @@ Here is a simple example where a structure has a pointer member, called `key`.
.A pointer key
----------------------------------------------------------------------
#include <stdio.h>
#include <assert.h>
#include <stdlib.h>
#include "uthash.h"
@@ -816,17 +816,16 @@ typedef struct {
} el_t;
el_t *hash = NULL;
char *someaddr = NULL;
void *someaddr = &hash;
int main() {
el_t *d;
el_t *e = (el_t *)malloc(sizeof *e);
if (!e) return -1;
e->key = (void*)someaddr;
e->key = someaddr;
e->i = 1;
HASH_ADD_PTR(hash, key, e);
HASH_FIND_PTR(hash, &someaddr, d);
if (d) printf("found\n");
assert(d == e);
/* release memory */
HASH_DEL(hash, e);
@@ -835,9 +834,7 @@ int main() {
}
----------------------------------------------------------------------
This example is included in `tests/test57.c`. Note that the end of the program
deletes the element out of the hash, (and since no more elements remain in the
hash), uthash releases its internal memory.
This example is included in `tests/test57.c`.
Structure keys
~~~~~~~~~~~~~~
@@ -893,7 +890,7 @@ int main(int argc, char *argv[]) {
----------------------------------------------------------------------
This usage is nearly the same as use of a compound key explained below.
This usage is nearly the same as the usage of a compound key explained below.
Note that the general macros require the name of the `UT_hash_handle` to be
passed as the first argument (here, this is `hh`). The general macros are
@@ -1153,17 +1150,16 @@ always used with the `users_by_name` hash table).
Sorted insertion of new items
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you would like to maintain a sorted hash you have two options. The first
option is to use the HASH_SRT() macro, which will sort any unordered list in
To maintain a sorted hash, you have two options. Your first
option is to use the `HASH_SRT` macro, which will sort any unordered list in
'O(n log(n))'. This is the best strategy if you're just filling up a hash
table with items in random order with a single final HASH_SRT() operation
when all is done. Obviously, this won't do what you want if you need
the list to be in an ordered state at times between insertion of
items. You can use HASH_SRT() after every insertion operation, but that will
yield a computational complexity of 'O(n^2 log n)'.
table with items in random order with a single final `HASH_SRT` operation
when all is done. If you need the table to remain sorted as you add and remove
items, you can use `HASH_SRT` after every insertion operation, but that gives
a computational complexity of 'O(n^2 log n)' to insert 'n' items.
The second route you can take is via the in-order add and replace macros.
The `HASH_ADD_INORDER*` macros work just like their `HASH_ADD*` counterparts, but
Your second option is to use the in-order add and replace macros.
The `HASH_ADD_*_INORDER` macros work just like their `HASH_ADD_*` counterparts, but
with an additional comparison-function argument:
int name_sort(struct my_struct *a, struct my_struct *b) {
@@ -1172,11 +1168,11 @@ with an additional comparison-function argument:
HASH_ADD_KEYPTR_INORDER(hh, items, &item->name, strlen(item->name), item, name_sort);
New items are sorted at insertion time in 'O(n)', thus resulting in a
total computational complexity of 'O(n^2)' for the creation of the hash
table with all items.
For in-order add to work, the list must be in an ordered state before
insertion of the new item.
These macros assume that the hash is already sorted according to the
comparison function, and insert the new item in its proper place.
A single insertion takes 'O(n)', resulting in a total computational
complexity of 'O(n^2)' to insert all 'n' items: slower than a single
`HASH_SRT`, but faster than doing a `HASH_SRT` after every insertion.
Several sort orders
~~~~~~~~~~~~~~~~~~~


@@ -139,7 +139,7 @@ a copy of the source string and pushes that copy into the array.
About UT_icd
~~~~~~~~~~~~
Arrays be made of any type of element, not just integers and strings. The
Arrays can be made of any type of element, not just integers and strings. The
elements can be basic types or structures. Unless you're dealing with integers
and strings (which use pre-defined `ut_int_icd` and `ut_str_icd`), you'll need
to define a `UT_icd` helper structure. This structure contains everything that


@@ -1,5 +1,5 @@
/*
Copyright (c) 2008-2021, Troy D. Hanson http://troydhanson.github.io/uthash/
Copyright (c) 2008-2022, Troy D. Hanson https://troydhanson.github.io/uthash/
All rights reserved.
Redistribution and use in source and binary forms, with or without
@@ -38,11 +38,6 @@ SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#define UTARRAY_UNUSED
#endif
#ifdef oom
#error "The name of macro 'oom' has been changed to 'utarray_oom'. Please update your code."
#define utarray_oom() oom()
#endif
#ifndef utarray_oom
#define utarray_oom() exit(-1)
#endif
@@ -234,7 +229,16 @@ typedef struct {
static void utarray_str_cpy(void *dst, const void *src) {
char *const *srcc = (char *const *)src;
char **dstc = (char**)dst;
*dstc = (*srcc == NULL) ? NULL : strdup(*srcc);
if (*srcc == NULL) {
*dstc = NULL;
} else {
*dstc = (char*)malloc(strlen(*srcc) + 1);
if (*dstc == NULL) {
utarray_oom();
} else {
strcpy(*dstc, *srcc);
}
}
}
static void utarray_str_dtor(void *elt) {
char **eltc = (char**)elt;


@@ -1,5 +1,5 @@
/*
Copyright (c) 2003-2021, Troy D. Hanson http://troydhanson.github.io/uthash/
Copyright (c) 2003-2022, Troy D. Hanson https://troydhanson.github.io/uthash/
All rights reserved.
Redistribution and use in source and binary forms, with or without
@@ -51,6 +51,8 @@ typedef unsigned char uint8_t;
#else /* VS2008 or older (or VS2010 in C mode) */
#define NO_DECLTYPE
#endif
#elif defined(__MCST__) /* Elbrus C Compiler */
#define DECLTYPE(x) (__typeof(x))
#elif defined(__BORLANDC__) || defined(__ICCARM__) || defined(__LCC__) || defined(__WATCOMC__)
#define NO_DECLTYPE
#else /* GNU, Sun and other compilers */
@@ -157,7 +159,7 @@ do {
if (head) { \
unsigned _hf_bkt; \
HASH_TO_BKT(hashval, (head)->hh.tbl->num_buckets, _hf_bkt); \
if (HASH_BLOOM_TEST((head)->hh.tbl, hashval) != 0) { \
if (HASH_BLOOM_TEST((head)->hh.tbl, hashval)) { \
HASH_FIND_IN_BKT((head)->hh.tbl, hh, (head)->hh.tbl->buckets[ _hf_bkt ], keyptr, keylen, hashval, out); \
} \
} \
@@ -194,7 +196,7 @@ do {
} while (0)
#define HASH_BLOOM_BITSET(bv,idx) (bv[(idx)/8U] |= (1U << ((idx)%8U)))
#define HASH_BLOOM_BITTEST(bv,idx) (bv[(idx)/8U] & (1U << ((idx)%8U)))
#define HASH_BLOOM_BITTEST(bv,idx) ((bv[(idx)/8U] & (1U << ((idx)%8U))) != 0)
#define HASH_BLOOM_ADD(tbl,hashv) \
HASH_BLOOM_BITSET((tbl)->bloom_bv, ((hashv) & (uint32_t)((1UL << (tbl)->bloom_nbits) - 1U)))
@@ -206,7 +208,7 @@ do {
#define HASH_BLOOM_MAKE(tbl,oomed)
#define HASH_BLOOM_FREE(tbl)
#define HASH_BLOOM_ADD(tbl,hashv)
#define HASH_BLOOM_TEST(tbl,hashv) (1)
#define HASH_BLOOM_TEST(tbl,hashv) 1
#define HASH_BLOOM_BYTELEN 0U
#endif
@@ -450,7 +452,7 @@ do {
#define HASH_DELETE_HH(hh,head,delptrhh) \
do { \
struct UT_hash_handle *_hd_hh_del = (delptrhh); \
const struct UT_hash_handle *_hd_hh_del = (delptrhh); \
if ((_hd_hh_del->prev == NULL) && (_hd_hh_del->next == NULL)) { \
HASH_BLOOM_FREE((head)->hh.tbl); \
uthash_free((head)->hh.tbl->buckets, \
@@ -593,7 +595,9 @@ do {
/* SAX/FNV/OAT/JEN hash functions are macro variants of those listed at
* http://eternallyconfuzzled.com/tuts/algorithms/jsw_tut_hashing.aspx */
* http://eternallyconfuzzled.com/tuts/algorithms/jsw_tut_hashing.aspx
* (archive link: https://archive.is/Ivcan )
*/
#define HASH_SAX(key,keylen,hashv) \
do { \
unsigned _sx_i; \


@@ -1,5 +1,5 @@
/*
Copyright (c) 2007-2021, Troy D. Hanson http://troydhanson.github.io/uthash/
Copyright (c) 2007-2022, Troy D. Hanson https://troydhanson.github.io/uthash/
All rights reserved.
Redistribution and use in source and binary forms, with or without
@@ -70,6 +70,8 @@ SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#else /* VS2008 or older (or VS2010 in C mode) */
#define NO_DECLTYPE
#endif
#elif defined(__MCST__) /* Elbrus C Compiler */
#define LDECLTYPE(x) __typeof(x)
#elif defined(__BORLANDC__) || defined(__ICCARM__) || defined(__LCC__) || defined(__WATCOMC__)
#define NO_DECLTYPE
#else /* GNU, Sun and other compilers */
@@ -709,7 +711,8 @@ do {
assert((del)->prev != NULL); \
if ((del)->prev == (del)) { \
(head)=NULL; \
} else if ((del)==(head)) { \
} else if ((del) == (head)) { \
assert((del)->next != NULL); \
(del)->next->prev = (del)->prev; \
(head) = (del)->next; \
} else { \

View File

@@ -1,5 +1,5 @@
/*
Copyright (c) 2015-2021, Troy D. Hanson http://troydhanson.github.io/uthash/
Copyright (c) 2015-2022, Troy D. Hanson https://troydhanson.github.io/uthash/
All rights reserved.
Redistribution and use in source and binary forms, with or without

View File

@@ -1,5 +1,5 @@
/*
Copyright (c) 2018-2021, Troy D. Hanson http://troydhanson.github.io/uthash/
Copyright (c) 2018-2022, Troy D. Hanson https://troydhanson.github.io/uthash/
All rights reserved.
Redistribution and use in source and binary forms, with or without

View File

@@ -1,5 +1,5 @@
/*
Copyright (c) 2008-2021, Troy D. Hanson http://troydhanson.github.io/uthash/
Copyright (c) 2008-2022, Troy D. Hanson https://troydhanson.github.io/uthash/
All rights reserved.
Redistribution and use in source and binary forms, with or without

View File

@@ -12,7 +12,7 @@ PROGS = test1 test2 test3 test4 test5 test6 test7 test8 test9 \
test66 test67 test68 test69 test70 test71 test72 test73 \
test74 test75 test76 test77 test78 test79 test80 test81 \
test82 test83 test84 test85 test86 test87 test88 test89 \
test90 test91 test92 test93 test94 test95 test96
test90 test91 test92 test93 test94 test95 test96 test97
CFLAGS += -I$(HASHDIR)
#CFLAGS += -DHASH_BLOOM=16
#CFLAGS += -O2

View File

@@ -98,6 +98,7 @@ test93: alt_fatal
test94: utlist with fields named other than 'next' and 'prev'
test95: utstack
test96: HASH_FUNCTION + HASH_KEYCMP
test97: deleting a const-qualified node from a hash
Other Make targets
================================================================================

View File

@@ -1,5 +1,5 @@
/*
Copyright (c) 2005-2021, Troy D. Hanson http://troydhanson.github.io/uthash/
Copyright (c) 2005-2022, Troy D. Hanson https://troydhanson.github.io/uthash/
All rights reserved.
Redistribution and use in source and binary forms, with or without

View File

@@ -1 +0,0 @@
found

View File

@@ -1,5 +1,5 @@
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
#include <stddef.h>
#include "uthash.h"
typedef struct {
@@ -8,25 +8,46 @@ typedef struct {
UT_hash_handle hh;
} el_t;
el_t *findit(el_t *hash, void *keytofind)
{
el_t *found;
HASH_FIND_PTR(hash, &keytofind, found);
return found;
}
int main()
{
el_t *d;
el_t *hash = NULL;
char *someaddr = NULL;
el_t *e = (el_t*)malloc(sizeof(el_t));
if (!e) {
return -1;
}
e->key = (void*)someaddr;
e->i = 1;
HASH_ADD_PTR(hash,key,e);
HASH_FIND_PTR(hash, &someaddr, d);
if (d != NULL) {
printf("found\n");
}
el_t e1;
el_t e2;
e1.key = NULL;
e1.i = 1;
e2.key = &e2;
e2.i = 2;
assert(findit(hash, NULL) == NULL);
assert(findit(hash, &e1) == NULL);
assert(findit(hash, &e2) == NULL);
HASH_ADD_PTR(hash, key, &e1);
assert(findit(hash, NULL) == &e1);
assert(findit(hash, &e1) == NULL);
assert(findit(hash, &e2) == NULL);
HASH_ADD_PTR(hash, key, &e2);
assert(findit(hash, NULL) == &e1);
assert(findit(hash, &e1) == NULL);
assert(findit(hash, &e2) == &e2);
HASH_DEL(hash, &e1);
assert(findit(hash, NULL) == NULL);
assert(findit(hash, &e1) == NULL);
assert(findit(hash, &e2) == &e2);
HASH_CLEAR(hh, hash);
assert(hash == NULL);
/* release memory */
HASH_DEL(hash,e);
free(e);
return 0;
}

View File

@@ -3,7 +3,7 @@
#include "uthash.h"
// this is an example of how to do a LRU cache in C using uthash
// http://troydhanson.github.io/uthash/
// https://troydhanson.github.io/uthash/
// by Jehiah Czebotar 2011 - jehiah@gmail.com
// this code is in the public domain http://unlicense.org/

dependencies/uthash/tests/test97.c vendored Normal file
View File

@@ -0,0 +1,57 @@
#include <assert.h>
#include <stdlib.h>
#include <string.h>
#include "uthash.h"
struct item {
int payload;
UT_hash_handle hh;
};
void delete_without_modifying(struct item *head, const struct item *p)
{
struct item old;
memcpy(&old, p, sizeof(struct item)); // also copy the padding bits
assert(memcmp(&old, p, sizeof(struct item)) == 0);
assert(p->hh.tbl == head->hh.tbl); // class invariant
HASH_DEL(head, p);
assert(memcmp(&old, p, sizeof(struct item)) == 0); // unmodified by HASH_DEL
}
int main()
{
struct item *items = NULL;
struct item *found = NULL;
int fortytwo = 42;
int i;
for (i=0; i < 100; i++) {
struct item *p = (struct item *)malloc(sizeof *p);
p->payload = i;
HASH_ADD_INT(items, payload, p);
}
assert(HASH_COUNT(items) == 100);
// Delete item "42" from the hash, wherever it is.
HASH_FIND_INT(items, &fortytwo, found);
assert(found != NULL);
assert(found->payload == 42);
delete_without_modifying(items, found);
assert(HASH_COUNT(items) == 99);
HASH_FIND_INT(items, &fortytwo, found);
assert(found == NULL);
// Delete the very first item in the hash.
assert(items != NULL);
i = items->payload;
delete_without_modifying(items, items);
assert(HASH_COUNT(items) == 98);
HASH_FIND_INT(items, &i, found);
assert(found == NULL);
// leak the items, we don't care
return 0;
}

View File

@@ -20,14 +20,17 @@ Used also by `tests/run_tests.sh` if available.
A collectd-exec compatible middleware that gathers statistic values from nDPId.
Used also by `tests/run_tests.sh` if available.
## c-influxd
An InfluxDB push daemon. It aggregates various statistics gathered from nDPId.
The results are sent to a specified InfluxDB endpoint.
![](ndpid_grafana_example.png)
## c-notifyd
A notification daemon that sends information about suspicious flow events to DBUS.
## c-json-stdout
Tiny nDPId JSON dumper. Does not provide any useful functionality besides dumping parsed JSON objects.
## c-simple
Integration example that verifies flow timeouts on SIGUSR1.
@@ -61,15 +64,10 @@ Use sklearn together with CSVs created with **c-analysed** to train and predict
Try it with: `./examples/py-machine-learning/sklearn_random_forest.py --csv ./ndpi-analysed.csv --proto-class tls.youtube --proto-class tls.github --proto-class tls.spotify --proto-class tls.facebook --proto-class tls.instagram --proto-class tls.doh_dot --proto-class quic --proto-class icmp`
This way you should get 9 different classification classes.
You may notice that some classes e.g. TLS protocol classifications may have a higher false-negative rate.
You may notice that some classes, e.g. TLS protocol classifications, have a higher false-negative/false-positive rate.
Unfortunately, I can not provide any datasets due to some privacy concerns.
But you can use a [pre-trained model](https://drive.google.com/file/d/1KEwbP-Gx7KJr54wNoa63I56VI4USCAPL/view?usp=sharing) with `--load-model`.
## py-flow-dashboard
A realtime web based graph using Plotly/Dash.
Probably the most informative example.
But you may use a [pre-trained model](https://drive.google.com/file/d/1KEwbP-Gx7KJr54wNoa63I56VI4USCAPL/view?usp=sharing) with `--load-model`.
## py-flow-multiprocess
@@ -81,11 +79,20 @@ Dump received and parsed JSON objects.
## py-schema-validation
Validate nDPId JSON strings against pre-defined JSON schema's.
Validate nDPId JSON messages against pre-defined JSON schemas.
See `schema/`.
Required by `tests/run_tests.sh`
## py-semantic-validation
Validate nDPId JSON strings against internal event semantics.
Validate nDPId JSON messages against internal event semantics.
Required by `tests/run_tests.sh`
## rs-simple
A straightforward Rust deserialization/parsing example.
## yaml-filebeat
An example Filebeat configuration to parse and send nDPId JSON
messages to Elasticsearch, allowing long-term storage and data visualization with Kibana
and various other tools that interact with Elasticsearch (no Logstash required).

File diff suppressed because it is too large

View File

@@ -24,7 +24,7 @@
#include "utarray.h"
#include "utils.h"
//#define VERBOSE
// #define VERBOSE
#define DEFAULT_DATADIR "/tmp/nDPId-captured"
struct packet_data
@@ -63,11 +63,12 @@ struct global_user_data
struct flow_user_data
{
uint8_t detection_finished;
uint8_t guessed;
uint8_t detected;
uint8_t risky;
uint8_t midstream;
uint8_t new_seen : 1;
uint8_t detection_finished : 1;
uint8_t guessed : 1;
uint8_t detected : 1;
uint8_t risky : 1;
uint8_t midstream : 1;
nDPIsrvd_ull flow_datalink;
nDPIsrvd_ull flow_max_packets;
nDPIsrvd_ull flow_tot_l4_payload_len;
@@ -196,7 +197,7 @@ static void decode_base64(pcap_dumper_t * const pd,
static void packet_data_copy(void * dst, const void * src)
{
struct packet_data * const pd_dst = (struct packet_data *)dst;
struct packet_data const * const pd_src = (struct packet_data *)src;
struct packet_data const * const pd_src = (struct packet_data const *)src;
*pd_dst = *pd_src;
if (pd_src->base64_packet != NULL && pd_src->base64_packet_size > 0)
{
@@ -223,7 +224,7 @@ static void packet_data_dtor(void * elt)
static void flow_packet_data_copy(void * dst, const void * src)
{
struct flow_packet_data * const pd_dst = (struct flow_packet_data *)dst;
struct flow_packet_data const * const pd_src = (struct flow_packet_data *)src;
struct flow_packet_data const * const pd_src = (struct flow_packet_data const *)src;
*pd_dst = *pd_src;
if (pd_src->base64_packet != NULL && pd_src->base64_packet_size > 0)
{
@@ -523,7 +524,7 @@ static int packet_write_pcap_file(struct global_user_data const * const global_u
decode_base64(pd, pd_elt_dmp, NULL);
}
#ifdef VERBOSE
printf("packets dumped to %s\n", pcap_filename);
printf("packets dumped to %s\n", filename);
#endif
pcap_dump_close(pd);
pcap_close(p);
@@ -775,7 +776,7 @@ static enum nDPIsrvd_callback_return captured_json_callback(struct nDPIsrvd_sock
{
logger(1, "%s", "No packet data available.");
logger(1,
"JSON String: '%.*s'",
"JSON message: '%.*s'",
nDPIsrvd_json_buffer_length(sock),
nDPIsrvd_json_buffer_string(sock));
return CALLBACK_OK;
@@ -828,7 +829,7 @@ static enum nDPIsrvd_callback_return captured_json_callback(struct nDPIsrvd_sock
if (pkt == NULL)
{
logger(1, "%s", "No packet data available.");
logger(1, "JSON String: '%.*s'", nDPIsrvd_json_buffer_length(sock), nDPIsrvd_json_buffer_string(sock));
logger(1, "JSON message: '%.*s'", nDPIsrvd_json_buffer_length(sock), nDPIsrvd_json_buffer_string(sock));
return CALLBACK_OK;
}
@@ -876,6 +877,8 @@ static enum nDPIsrvd_callback_return captured_json_callback(struct nDPIsrvd_sock
if (TOKEN_VALUE_EQUALS_SZ(sock, flow_event_name, "new") != 0)
{
flow_user->new_seen = 1;
perror_ull(TOKEN_VALUE_TO_ULL(sock, TOKEN_GET_SZ(sock, "flow_datalink"), &flow_user->flow_datalink),
"flow_datalink");
perror_ull(TOKEN_VALUE_TO_ULL(sock, TOKEN_GET_SZ(sock, "flow_max_packets"), &flow_user->flow_max_packets),
@@ -887,14 +890,9 @@ static enum nDPIsrvd_callback_return captured_json_callback(struct nDPIsrvd_sock
return CALLBACK_OK;
}
else if (TOKEN_VALUE_EQUALS_SZ(sock, flow_event_name, "end") != 0)
else if (flow_user->new_seen == 0)
{
struct nDPIsrvd_json_token const * const ndpi_proto = TOKEN_GET_SZ(sock, "ndpi", "proto");
if (ndpi_proto != NULL)
{
flow_user->detected = 1;
}
return CALLBACK_OK;
}
else if (TOKEN_VALUE_EQUALS_SZ(sock, flow_event_name, "guessed") != 0)
{
@@ -903,19 +901,16 @@ static enum nDPIsrvd_callback_return captured_json_callback(struct nDPIsrvd_sock
else if (TOKEN_VALUE_EQUALS_SZ(sock, flow_event_name, "not-detected") != 0)
{
flow_user->detected = 0;
flow_user->detection_finished = 1;
}
else if (TOKEN_VALUE_EQUALS_SZ(sock, flow_event_name, "detected") != 0 ||
TOKEN_VALUE_EQUALS_SZ(sock, flow_event_name, "detection-update") != 0 ||
TOKEN_VALUE_EQUALS_SZ(sock, flow_event_name, "update") != 0)
TOKEN_VALUE_EQUALS_SZ(sock, flow_event_name, "detection-update"))
{
struct nDPIsrvd_json_token const * const flow_risk = TOKEN_GET_SZ(sock, "ndpi", "flow_risk");
struct nDPIsrvd_json_token const * current = NULL;
int next_child_index = -1;
if (TOKEN_VALUE_EQUALS_SZ(sock, flow_event_name, "update") == 0)
{
flow_user->detected = 1;
}
flow_user->detected = 1;
if (flow_risk != NULL)
{
@@ -926,7 +921,6 @@ static enum nDPIsrvd_callback_return captured_json_callback(struct nDPIsrvd_sock
if (str_value_to_ull(TOKEN_GET_KEY(sock, current, NULL), &numeric_risk_value) == CONVERSION_OK &&
numeric_risk_value < NDPI_MAX_RISK && has_ndpi_risk(&process_risky, numeric_risk_value) != 0)
{
flow_user->detected = 1;
flow_user->risky = 1;
}
}
@@ -938,6 +932,11 @@ static enum nDPIsrvd_callback_return captured_json_callback(struct nDPIsrvd_sock
(flow_user->detected == 0 && process_undetected != 0) || (flow_user->risky != 0 && process_risky != 0) ||
(flow_user->midstream != 0 && process_midstream != 0)))
{
if (flow_user->guessed != 0 && flow_user->detected != 0)
{
log_event(sock, flow, "BUG: guessed and detected at the same time");
}
if (logging_mode != 0)
{
if (flow_user->guessed != 0)
@@ -954,7 +953,7 @@ static enum nDPIsrvd_callback_return captured_json_callback(struct nDPIsrvd_sock
{
if (capture_mode != 0)
{
logger(0, "Flow %llu: No packets captured.", flow->id_as_ull);
log_event(sock, flow, "No packets captured");
}
}
else if (capture_mode != 0)
@@ -965,15 +964,16 @@ static enum nDPIsrvd_callback_return captured_json_callback(struct nDPIsrvd_sock
char pcap_filename[PATH_MAX];
if (flow_generate_pcap_filename(flow_user, pcap_filename, sizeof(pcap_filename)) == NULL)
{
logger(1, "%s", "Internal error. Could not generate PCAP filename, exit ..");
log_event(sock, flow, "Internal error. Could not generate PCAP filename, exit ..");
return CALLBACK_ERROR;
}
#ifdef VERBOSE
printf("Flow %llu saved to %s\n", flow->id_as_ull, pcap_filename);
#endif
errno = 0;
if (flow_write_pcap_file(flow_user, pcap_filename) != 0)
{
logger(1, "Could not dump packet data to pcap file %s", pcap_filename);
logger(1, "Could not dump packet data to pcap file %s: %s", pcap_filename, strerror(errno));
return CALLBACK_OK;
}
}
@@ -1175,7 +1175,7 @@ static int parse_options(int argc, char ** argv)
break;
case 'R':
{
char * value = (optarg[0] == '~' ? optarg + 1 : optarg);
char const * const value = (optarg[0] == '~' ? optarg + 1 : optarg);
nDPIsrvd_ull risk;
if (perror_ull(str_value_to_ull(value, &risk), "process_risky") != CONVERSION_OK)
{
@@ -1300,7 +1300,7 @@ static int mainloop(void)
enum nDPIsrvd_parse_return parse_ret = nDPIsrvd_parse_all(ndpisrvd_socket);
if (parse_ret != PARSE_NEED_MORE_DATA)
{
logger(1, "Could not parse json string: %s", nDPIsrvd_enum_to_string(parse_ret));
logger(1, "Could not parse json message: %s", nDPIsrvd_enum_to_string(parse_ret));
break;
}
}
@@ -1318,12 +1318,12 @@ int main(int argc, char ** argv)
init_logging("nDPIsrvd-captured");
ndpisrvd_socket = nDPIsrvd_socket_init(sizeof(struct global_user_data),
0,
0,
sizeof(struct flow_user_data),
captured_json_callback,
NULL,
captured_flow_cleanup_callback);
0,
0,
sizeof(struct flow_user_data),
captured_json_callback,
NULL,
captured_flow_cleanup_callback);
if (ndpisrvd_socket == NULL)
{
fprintf(stderr, "%s: nDPIsrvd socket memory allocation failed!\n", argv[0]);
@@ -1355,8 +1355,18 @@ int main(int argc, char ** argv)
return 1;
}
if (capture_mode != 0)
{
int ret = chmod_chown(datadir, S_IRWXU | S_IRGRP | S_IXGRP, user, group);
if (ret != 0)
{
logger(1, "Could not chmod/chown `%s': %s", datadir, strerror(ret));
return 1;
}
}
errno = 0;
if (user != NULL && change_user_group(user, group, pidfile, datadir /* :D */, NULL) != 0)
if (user != NULL && change_user_group(user, group, pidfile) != 0)
{
if (errno != 0)
{
@@ -1368,10 +1378,6 @@ int main(int argc, char ** argv)
}
return 1;
}
if (datadir != NULL)
{
chmod(datadir, S_IRWXU);
}
if (nDPIsrvd_set_read_timeout(ndpisrvd_socket, 180, 0) != 0)
{

File diff suppressed because it is too large

View File

@@ -3,7 +3,7 @@
RRDDIR="${1}"
OUTDIR="${2}"
RRDARGS="--width=800 --height=400"
REQUIRED_RRDCNT=106
REQUIRED_RRDCNT=130
if [ -z "${RRDDIR}" ]; then
printf '%s: Missing RRD directory which contains nDPIsrvd/Collectd files.\n' "${0}"
@@ -62,25 +62,17 @@ rrdtool_graph() {
}
rrdtool_graph Flows Amount "${OUTDIR}/flows" \
DEF:flows_new=${RRDDIR}/gauge-flow_new_count.rrd:value:AVERAGE \
DEF:flows_end=${RRDDIR}/gauge-flow_end_count.rrd:value:AVERAGE \
DEF:flows_idle=${RRDDIR}/gauge-flow_idle_count.rrd:value:AVERAGE \
$(rrdtool_graph_colorize_missing_data flows_new) \
AREA:flows_new#54EC48::STACK \
AREA:flows_end#ECD748::STACK \
AREA:flows_idle#EC9D48::STACK \
LINE2:flows_new#24BC14:"New." \
$(rrdtool_graph_print_cur_min_max_avg flows_new) \
LINE2:flows_end#C9B215:"End." \
$(rrdtool_graph_print_cur_min_max_avg flows_end) \
LINE2:flows_idle#CC7016:"Idle" \
$(rrdtool_graph_print_cur_min_max_avg flows_idle)
DEF:flows_active=${RRDDIR}/gauge-flow_active_count.rrd:value:AVERAGE \
$(rrdtool_graph_colorize_missing_data flows_active) \
AREA:flows_active#54EC48::STACK \
LINE2:flows_active#24BC14:"Active" \
$(rrdtool_graph_print_cur_min_max_avg flows_active)
rrdtool_graph Detections Amount "${OUTDIR}/detections" \
DEF:flows_detected=${RRDDIR}/gauge-flow_detected_count.rrd:value:AVERAGE \
DEF:flows_guessed=${RRDDIR}/gauge-flow_guessed_count.rrd:value:AVERAGE \
DEF:flows_not_detected=${RRDDIR}/gauge-flow_not_detected_count.rrd:value:AVERAGE \
DEF:flows_detection_update=${RRDDIR}/gauge-flow_detection_update_count.rrd:value:AVERAGE \
DEF:flows_risky=${RRDDIR}/gauge-flow_risky_count.rrd:value:AVERAGE \
DEF:flows_detection_update=${RRDDIR}/counter-flow_detection_update_count.rrd:value:AVERAGE \
DEF:flows_risky=${RRDDIR}/counter-flow_risky_count.rrd:value:AVERAGE \
$(rrdtool_graph_colorize_missing_data flows_detected) \
AREA:flows_detected#00bfff::STACK \
AREA:flows_detection_update#a1b8c4::STACK \
@@ -98,8 +90,8 @@ rrdtool_graph Detections Amount "${OUTDIR}/detections" \
LINE2:flows_risky#b32d00:"Risky..........." \
$(rrdtool_graph_print_cur_min_max_avg flows_risky)
rrdtool_graph "Traffic (IN/OUT)" Bytes "${OUTDIR}/traffic" \
DEF:total_src_bytes=${RRDDIR}/gauge-flow_src_total_bytes.rrd:value:AVERAGE \
DEF:total_dst_bytes=${RRDDIR}/gauge-flow_dst_total_bytes.rrd:value:AVERAGE \
DEF:total_src_bytes=${RRDDIR}/counter-flow_src_total_bytes.rrd:value:AVERAGE \
DEF:total_dst_bytes=${RRDDIR}/counter-flow_dst_total_bytes.rrd:value:AVERAGE \
$(rrdtool_graph_colorize_missing_data total_src_bytes) \
AREA:total_src_bytes#00cc99:"Total-Bytes-Source2Dest":STACK \
$(rrdtool_graph_print_cur_min_max_avg total_src_bytes) \
@@ -137,7 +129,45 @@ rrdtool_graph Layer4-Flows Amount "${OUTDIR}/layer4" \
$(rrdtool_graph_print_cur_min_max_avg layer4_icmp) \
LINE2:layer4_other#83588d:"Other" \
$(rrdtool_graph_print_cur_min_max_avg layer4_other)
rrdtool_graph Flow-Breeds Amount "${OUTDIR}/breed" \
rrdtool_graph Confidence Amount "${OUTDIR}/confidence" \
DEF:conf_ip=${RRDDIR}/gauge-flow_confidence_by_ip.rrd:value:AVERAGE \
DEF:conf_port=${RRDDIR}/gauge-flow_confidence_by_port.rrd:value:AVERAGE \
DEF:conf_aggr=${RRDDIR}/gauge-flow_confidence_dpi_aggressive.rrd:value:AVERAGE \
DEF:conf_cache=${RRDDIR}/gauge-flow_confidence_dpi_cache.rrd:value:AVERAGE \
DEF:conf_pcache=${RRDDIR}/gauge-flow_confidence_dpi_partial_cache.rrd:value:AVERAGE \
DEF:conf_part=${RRDDIR}/gauge-flow_confidence_dpi_partial.rrd:value:AVERAGE \
DEF:conf_dpi=${RRDDIR}/gauge-flow_confidence_dpi.rrd:value:AVERAGE \
DEF:conf_nbpf=${RRDDIR}/gauge-flow_confidence_nbpf.rrd:value:AVERAGE \
DEF:conf_ukn=${RRDDIR}/gauge-flow_confidence_unknown.rrd:value:AVERAGE \
$(rrdtool_graph_colorize_missing_data conf_ip) \
AREA:conf_ip#4dff4d::STACK \
AREA:conf_port#c2ff33::STACK \
AREA:conf_aggr#ffe433::STACK \
AREA:conf_cache#ffb133::STACK \
AREA:conf_pcache#ff5f33::STACK \
AREA:conf_part#e74b5b::STACK \
AREA:conf_dpi#a5aca0::STACK \
AREA:conf_nbpf#d7c1cc::STACK \
AREA:conf_ukn#ddccbb::STACK \
LINE2:conf_ip#00e600:"By-IP................" \
$(rrdtool_graph_print_cur_min_max_avg conf_ip) \
LINE2:conf_port#8fce00:"By-Port.............." \
$(rrdtool_graph_print_cur_min_max_avg conf_port) \
LINE2:conf_aggr#e6c700:"DPI-Aggressive......." \
$(rrdtool_graph_print_cur_min_max_avg conf_aggr) \
LINE2:conf_cache#e68e00:"DPI-Cache............" \
$(rrdtool_graph_print_cur_min_max_avg conf_cache) \
LINE2:conf_pcache#e63200:"DPI-Partial-Cache...." \
$(rrdtool_graph_print_cur_min_max_avg conf_pcache) \
LINE2:conf_part#c61b2b:"DPI-Partial.........." \
$(rrdtool_graph_print_cur_min_max_avg conf_part) \
LINE2:conf_dpi#7e8877:"DPI.................." \
$(rrdtool_graph_print_cur_min_max_avg conf_dpi) \
LINE2:conf_nbpf#ae849a:"nBPF................." \
$(rrdtool_graph_print_cur_min_max_avg conf_nbpf) \
LINE2:conf_ukn#aa9988:"Unknown.............." \
$(rrdtool_graph_print_cur_min_max_avg conf_ukn)
rrdtool_graph Breeds Amount "${OUTDIR}/breed" \
DEF:breed_safe=${RRDDIR}/gauge-flow_breed_safe_count.rrd:value:AVERAGE \
DEF:breed_acceptable=${RRDDIR}/gauge-flow_breed_acceptable_count.rrd:value:AVERAGE \
DEF:breed_fun=${RRDDIR}/gauge-flow_breed_fun_count.rrd:value:AVERAGE \
@@ -171,17 +201,22 @@ rrdtool_graph Flow-Breeds Amount "${OUTDIR}/breed" \
$(rrdtool_graph_print_cur_min_max_avg breed_unrated) \
LINE2:breed_unknown#ae849a:"Unknown.............." \
$(rrdtool_graph_print_cur_min_max_avg breed_unknown)
rrdtool_graph Flow-Categories 'Amount(SUM)' "${OUTDIR}/categories" \
rrdtool_graph Categories 'Amount(SUM)' "${OUTDIR}/categories" \
DEF:cat_adlt=${RRDDIR}/gauge-flow_category_adult_content_count.rrd:value:AVERAGE \
DEF:cat_ads=${RRDDIR}/gauge-flow_category_advertisment_count.rrd:value:AVERAGE \
DEF:cat_chat=${RRDDIR}/gauge-flow_category_chat_count.rrd:value:AVERAGE \
DEF:cat_cloud=${RRDDIR}/gauge-flow_category_cloud_count.rrd:value:AVERAGE \
DEF:cat_collab=${RRDDIR}/gauge-flow_category_collaborative_count.rrd:value:AVERAGE \
DEF:cat_conn=${RRDDIR}/gauge-flow_category_conn_check_count.rrd:value:AVERAGE \
DEF:cat_cybr=${RRDDIR}/gauge-flow_category_cybersecurity_count.rrd:value:AVERAGE \
DEF:cat_xfer=${RRDDIR}/gauge-flow_category_data_transfer_count.rrd:value:AVERAGE \
DEF:cat_db=${RRDDIR}/gauge-flow_category_database_count.rrd:value:AVERAGE \
DEF:cat_dl=${RRDDIR}/gauge-flow_category_download_count.rrd:value:AVERAGE \
DEF:cat_mail=${RRDDIR}/gauge-flow_category_email_count.rrd:value:AVERAGE \
DEF:cat_fs=${RRDDIR}/gauge-flow_category_file_sharing_count.rrd:value:AVERAGE \
DEF:cat_game=${RRDDIR}/gauge-flow_category_game_count.rrd:value:AVERAGE \
DEF:cat_gamb=${RRDDIR}/gauge-flow_category_gambling_count.rrd:value:AVERAGE \
DEF:cat_iot=${RRDDIR}/gauge-flow_category_iot_scada_count.rrd:value:AVERAGE \
DEF:cat_mal=${RRDDIR}/gauge-flow_category_malware_count.rrd:value:AVERAGE \
DEF:cat_med=${RRDDIR}/gauge-flow_category_media_count.rrd:value:AVERAGE \
DEF:cat_min=${RRDDIR}/gauge-flow_category_mining_count.rrd:value:AVERAGE \
@@ -196,17 +231,21 @@ rrdtool_graph Flow-Categories 'Amount(SUM)' "${OUTDIR}/categories" \
DEF:cat_str=${RRDDIR}/gauge-flow_category_streaming_count.rrd:value:AVERAGE \
DEF:cat_sys=${RRDDIR}/gauge-flow_category_system_count.rrd:value:AVERAGE \
DEF:cat_ukn=${RRDDIR}/gauge-flow_category_unknown_count.rrd:value:AVERAGE \
DEF:cat_uns=${RRDDIR}/gauge-flow_category_unspecified_count.rrd:value:AVERAGE \
DEF:cat_vid=${RRDDIR}/gauge-flow_category_video_count.rrd:value:AVERAGE \
DEF:cat_vrt=${RRDDIR}/gauge-flow_category_virt_assistant_count.rrd:value:AVERAGE \
DEF:cat_voip=${RRDDIR}/gauge-flow_category_voip_count.rrd:value:AVERAGE \
DEF:cat_vpn=${RRDDIR}/gauge-flow_category_vpn_count.rrd:value:AVERAGE \
DEF:cat_web=${RRDDIR}/gauge-flow_category_web_count.rrd:value:AVERAGE \
DEF:cat_banned=${RRDDIR}/gauge-flow_category_web_count.rrd:value:AVERAGE \
DEF:cat_unavail=${RRDDIR}/gauge-flow_category_web_count.rrd:value:AVERAGE \
DEF:cat_allowed=${RRDDIR}/gauge-flow_category_web_count.rrd:value:AVERAGE \
DEF:cat_antimal=${RRDDIR}/gauge-flow_category_web_count.rrd:value:AVERAGE \
DEF:cat_crypto=${RRDDIR}/gauge-flow_category_web_count.rrd:value:AVERAGE \
$(rrdtool_graph_colorize_missing_data cat_ads) \
AREA:cat_ads#f1c232:"Advertisment..........." \
DEF:cat_banned=${RRDDIR}/gauge-flow_category_banned_site_count.rrd:value:AVERAGE \
DEF:cat_unavail=${RRDDIR}/gauge-flow_category_site_unavail_count.rrd:value:AVERAGE \
DEF:cat_allowed=${RRDDIR}/gauge-flow_category_allowed_site_count.rrd:value:AVERAGE \
DEF:cat_antimal=${RRDDIR}/gauge-flow_category_antimalware_count.rrd:value:AVERAGE \
DEF:cat_crypto=${RRDDIR}/gauge-flow_category_crypto_currency_count.rrd:value:AVERAGE \
$(rrdtool_graph_colorize_missing_data cat_adlt) \
AREA:cat_adlt#f0c032:"Adult.................." \
$(rrdtool_graph_print_cur_min_max_avg cat_adlt) \
STACK:cat_ads#f1c232:"Advertisment..........." \
$(rrdtool_graph_print_cur_min_max_avg cat_ads) \
STACK:cat_chat#6fa8dc:"Chat..................." \
$(rrdtool_graph_print_cur_min_max_avg cat_chat) \
@@ -214,6 +253,10 @@ rrdtool_graph Flow-Categories 'Amount(SUM)' "${OUTDIR}/categories" \
$(rrdtool_graph_print_cur_min_max_avg cat_cloud) \
STACK:cat_collab#3212aa:"Collaborative.........." \
$(rrdtool_graph_print_cur_min_max_avg cat_collab) \
STACK:cat_conn#22aa11:"Connection-Check......." \
$(rrdtool_graph_print_cur_min_max_avg cat_conn) \
STACK:cat_cybr#00ff00:"Cybersecurity.........." \
$(rrdtool_graph_print_cur_min_max_avg cat_cybr) \
STACK:cat_xfer#16537e:"Data-Transfer.........." \
$(rrdtool_graph_print_cur_min_max_avg cat_xfer) \
STACK:cat_db#cc0000:"Database..............." \
@@ -226,6 +269,10 @@ rrdtool_graph Flow-Categories 'Amount(SUM)' "${OUTDIR}/categories" \
$(rrdtool_graph_print_cur_min_max_avg cat_fs) \
STACK:cat_game#00ff26:"Game..................." \
$(rrdtool_graph_print_cur_min_max_avg cat_game) \
STACK:cat_gamb#aa0026:"Gambling..............." \
$(rrdtool_graph_print_cur_min_max_avg cat_gamb) \
STACK:cat_iot#227867:"IoT-Scada.............." \
$(rrdtool_graph_print_cur_min_max_avg cat_iot) \
STACK:cat_mal#f44336:"Malware................" \
$(rrdtool_graph_print_cur_min_max_avg cat_mal) \
STACK:cat_med#ff8300:"Media.................." \
@@ -254,53 +301,56 @@ rrdtool_graph Flow-Categories 'Amount(SUM)' "${OUTDIR}/categories" \
$(rrdtool_graph_print_cur_min_max_avg cat_sys) \
STACK:cat_ukn#999999:"Unknown................" \
$(rrdtool_graph_print_cur_min_max_avg cat_ukn) \
STACK:cat_uns#999999:"Unspecified............" \
$(rrdtool_graph_print_cur_min_max_avg cat_uns) \
STACK:cat_vid#518820:"Video.................." \
$(rrdtool_graph_print_cur_min_max_avg cat_vid) \
STACK:cat_vrt#216820:"Virtual-Assistant......" \
$(rrdtool_graph_print_cur_min_max_avg cat_vrt) \
STACK:cat_voip#ffc700:"Voice-Over-IP.........." \
$(rrdtool_graph_print_cur_min_max_avg cat_voip) \
STACK:cat_vpn#378035:"Virtual-Private-Network" \
$(rrdtool_graph_print_cur_min_max_avg cat_vpn) \
STACK:cat_web#00fffb:"Web...................." \
$(rrdtool_graph_print_cur_min_max_avg cat_web) \
STACK:cat_banned#ff1010:"Banned Sites..........." \
STACK:cat_banned#ff1010:"Banned-Sites..........." \
$(rrdtool_graph_print_cur_min_max_avg cat_banned) \
STACK:cat_unavail#ff1010:"Sites Unavailable......" \
STACK:cat_unavail#ff1010:"Sites-Unavailable......" \
$(rrdtool_graph_print_cur_min_max_avg cat_unavail) \
STACK:cat_allowed#ff1010:"Allowed Sites.........." \
STACK:cat_allowed#ff1010:"Allowed-Sites.........." \
$(rrdtool_graph_print_cur_min_max_avg cat_allowed) \
STACK:cat_antimal#ff1010:"Antimalware............" \
$(rrdtool_graph_print_cur_min_max_avg cat_antimal) \
STACK:cat_crypto#afaf00:"Crypto Currency........" \
STACK:cat_crypto#afaf00:"Crypto-Currency........" \
$(rrdtool_graph_print_cur_min_max_avg cat_crypto)
rrdtool_graph JSON 'Lines' "${OUTDIR}/json_lines" \
DEF:json_lines=${RRDDIR}/gauge-json_lines.rrd:value:AVERAGE \
DEF:json_lines=${RRDDIR}/counter-json_lines.rrd:value:AVERAGE \
$(rrdtool_graph_colorize_missing_data json_lines) \
AREA:json_lines#4dff4d::STACK \
LINE2:json_lines#00e600:"JSON-lines" \
$(rrdtool_graph_print_cur_min_max_avg json_lines)
rrdtool_graph JSON 'Bytes' "${OUTDIR}/json_bytes" \
DEF:json_bytes=${RRDDIR}/gauge-json_bytes.rrd:value:AVERAGE \
DEF:json_bytes=${RRDDIR}/counter-json_bytes.rrd:value:AVERAGE \
$(rrdtool_graph_colorize_missing_data json_bytes) \
AREA:json_bytes#4dff4d::STACK \
LINE2:json_bytes#00e600:"JSON-bytes" \
$(rrdtool_graph_print_cur_min_max_avg json_bytes)
rrdtool_graph Events 'Amouunt' "${OUTDIR}/events" \
DEF:init=${RRDDIR}/gauge-init_count.rrd:value:AVERAGE \
DEF:reconnect=${RRDDIR}/gauge-reconnect_count.rrd:value:AVERAGE \
DEF:shutdown=${RRDDIR}/gauge-shutdown_count.rrd:value:AVERAGE \
DEF:status=${RRDDIR}/gauge-status_count.rrd:value:AVERAGE \
DEF:packet=${RRDDIR}/gauge-packet_count.rrd:value:AVERAGE \
DEF:packet_flow=${RRDDIR}/gauge-packet_flow_count.rrd:value:AVERAGE \
DEF:new=${RRDDIR}/gauge-flow_new_count.rrd:value:AVERAGE \
DEF:end=${RRDDIR}/gauge-flow_end_count.rrd:value:AVERAGE \
DEF:idle=${RRDDIR}/gauge-flow_idle_count.rrd:value:AVERAGE \
DEF:update=${RRDDIR}/gauge-flow_update_count.rrd:value:AVERAGE \
DEF:detection_update=${RRDDIR}/gauge-flow_detection_update_count.rrd:value:AVERAGE \
DEF:guessed=${RRDDIR}/gauge-flow_guessed_count.rrd:value:AVERAGE \
DEF:detected=${RRDDIR}/gauge-flow_detected_count.rrd:value:AVERAGE \
DEF:not_detected=${RRDDIR}/gauge-flow_not_detected_count.rrd:value:AVERAGE \
DEF:analyse=${RRDDIR}/gauge-flow_analyse_count.rrd:value:AVERAGE \
DEF:error=${RRDDIR}/gauge-error_count_sum.rrd:value:AVERAGE \
rrdtool_graph Events 'Amount' "${OUTDIR}/events" \
DEF:init=${RRDDIR}/counter-init_count.rrd:value:AVERAGE \
DEF:reconnect=${RRDDIR}/counter-reconnect_count.rrd:value:AVERAGE \
DEF:shutdown=${RRDDIR}/counter-shutdown_count.rrd:value:AVERAGE \
DEF:status=${RRDDIR}/counter-status_count.rrd:value:AVERAGE \
DEF:packet=${RRDDIR}/counter-packet_count.rrd:value:AVERAGE \
DEF:packet_flow=${RRDDIR}/counter-packet_flow_count.rrd:value:AVERAGE \
DEF:new=${RRDDIR}/counter-flow_new_count.rrd:value:AVERAGE \
DEF:ewd=${RRDDIR}/counter-flow_end_count.rrd:value:AVERAGE \
DEF:idle=${RRDDIR}/counter-flow_idle_count.rrd:value:AVERAGE \
DEF:update=${RRDDIR}/counter-flow_update_count.rrd:value:AVERAGE \
DEF:detection_update=${RRDDIR}/counter-flow_detection_update_count.rrd:value:AVERAGE \
DEF:guessed=${RRDDIR}/counter-flow_guessed_count.rrd:value:AVERAGE \
DEF:detected=${RRDDIR}/counter-flow_detected_count.rrd:value:AVERAGE \
DEF:not_detected=${RRDDIR}/counter-flow_not_detected_count.rrd:value:AVERAGE \
DEF:analyse=${RRDDIR}/counter-flow_analyse_count.rrd:value:AVERAGE \
$(rrdtool_graph_colorize_missing_data init) \
AREA:init#f1c232:"Init..................." \
$(rrdtool_graph_print_cur_min_max_avg init) \
@@ -316,8 +366,8 @@ rrdtool_graph Events 'Amouunt' "${OUTDIR}/events" \
$(rrdtool_graph_print_cur_min_max_avg packet_flow) \
STACK:new#c76700:"New...................." \
$(rrdtool_graph_print_cur_min_max_avg new) \
STACK:end#c78500:"End...................." \
$(rrdtool_graph_print_cur_min_max_avg end) \
STACK:ewd#c78500:"End...................." \
$(rrdtool_graph_print_cur_min_max_avg ewd) \
STACK:idle#c7a900:"Idle..................." \
$(rrdtool_graph_print_cur_min_max_avg idle) \
STACK:update#c7c400:"Updates................" \
@@ -331,28 +381,25 @@ rrdtool_graph Events 'Amouunt' "${OUTDIR}/events" \
STACK:not_detected#00bdc7:"Not-Detected..........." \
$(rrdtool_graph_print_cur_min_max_avg not_detected) \
STACK:analyse#1400c7:"Analyse................" \
$(rrdtool_graph_print_cur_min_max_avg analyse) \
STACK:error#c70000:"Error.................." \
$(rrdtool_graph_print_cur_min_max_avg error)
rrdtool_graph Error-Events 'Amouunt' "${OUTDIR}/error_events" \
DEF:error_0=${RRDDIR}/gauge-error_0_count.rrd:value:AVERAGE \
DEF:error_1=${RRDDIR}/gauge-error_1_count.rrd:value:AVERAGE \
DEF:error_2=${RRDDIR}/gauge-error_2_count.rrd:value:AVERAGE \
DEF:error_3=${RRDDIR}/gauge-error_3_count.rrd:value:AVERAGE \
DEF:error_4=${RRDDIR}/gauge-error_4_count.rrd:value:AVERAGE \
DEF:error_5=${RRDDIR}/gauge-error_5_count.rrd:value:AVERAGE \
DEF:error_6=${RRDDIR}/gauge-error_6_count.rrd:value:AVERAGE \
DEF:error_7=${RRDDIR}/gauge-error_7_count.rrd:value:AVERAGE \
DEF:error_8=${RRDDIR}/gauge-error_8_count.rrd:value:AVERAGE \
DEF:error_9=${RRDDIR}/gauge-error_9_count.rrd:value:AVERAGE \
DEF:error_10=${RRDDIR}/gauge-error_10_count.rrd:value:AVERAGE \
DEF:error_11=${RRDDIR}/gauge-error_11_count.rrd:value:AVERAGE \
DEF:error_12=${RRDDIR}/gauge-error_12_count.rrd:value:AVERAGE \
DEF:error_13=${RRDDIR}/gauge-error_13_count.rrd:value:AVERAGE \
DEF:error_14=${RRDDIR}/gauge-error_14_count.rrd:value:AVERAGE \
DEF:error_15=${RRDDIR}/gauge-error_15_count.rrd:value:AVERAGE \
DEF:error_16=${RRDDIR}/gauge-error_16_count.rrd:value:AVERAGE \
DEF:error_unknown=${RRDDIR}/gauge-error_unknown_count.rrd:value:AVERAGE \
$(rrdtool_graph_print_cur_min_max_avg analyse)
rrdtool_graph Error-Events 'Amount' "${OUTDIR}/error_events" \
DEF:error_0=${RRDDIR}/counter-error_unknown_datalink.rrd:value:AVERAGE \
DEF:error_1=${RRDDIR}/counter-error_unknown_l3_protocol.rrd:value:AVERAGE \
DEF:error_2=${RRDDIR}/counter-error_unsupported_datalink.rrd:value:AVERAGE \
DEF:error_3=${RRDDIR}/counter-error_packet_too_short.rrd:value:AVERAGE \
DEF:error_4=${RRDDIR}/counter-error_packet_type_unknown.rrd:value:AVERAGE \
DEF:error_5=${RRDDIR}/counter-error_packet_header_invalid.rrd:value:AVERAGE \
DEF:error_6=${RRDDIR}/counter-error_ip4_packet_too_short.rrd:value:AVERAGE \
DEF:error_7=${RRDDIR}/counter-error_ip4_size_smaller_than_header.rrd:value:AVERAGE \
DEF:error_8=${RRDDIR}/counter-error_ip4_l4_payload_detection.rrd:value:AVERAGE \
DEF:error_9=${RRDDIR}/counter-error_ip6_packet_too_short.rrd:value:AVERAGE \
DEF:error_10=${RRDDIR}/counter-error_ip6_size_smaller_than_header.rrd:value:AVERAGE \
DEF:error_11=${RRDDIR}/counter-error_ip6_l4_payload_detection.rrd:value:AVERAGE \
DEF:error_12=${RRDDIR}/counter-error_tcp_packet_too_short.rrd:value:AVERAGE \
DEF:error_13=${RRDDIR}/counter-error_udp_packet_too_short.rrd:value:AVERAGE \
DEF:error_14=${RRDDIR}/counter-error_capture_size_smaller_than_packet.rrd:value:AVERAGE \
DEF:error_15=${RRDDIR}/counter-error_max_flows_to_track.rrd:value:AVERAGE \
DEF:error_16=${RRDDIR}/counter-error_flow_memory_alloc.rrd:value:AVERAGE \
$(rrdtool_graph_colorize_missing_data error_0) \
AREA:error_0#ff6a00:"Unknown-datalink-layer-packet............................" \
$(rrdtool_graph_print_cur_min_max_avg error_0) \
@@ -387,10 +434,38 @@ rrdtool_graph Error-Events 'Amouunt' "${OUTDIR}/error_events" \
STACK:error_15#4095bf:"Max-flows-to-track-reached..............................." \
$(rrdtool_graph_print_cur_min_max_avg error_15) \
STACK:error_16#0040ff:"Flow-memory-allocation-failed............................" \
$(rrdtool_graph_print_cur_min_max_avg error_16) \
STACK:error_unknown#4060bf:"Unknown-error............................................" \
$(rrdtool_graph_print_cur_min_max_avg error_unknown)
rrdtool_graph Risky-Events 'Amouunt' "${OUTDIR}/risky_events" \
$(rrdtool_graph_print_cur_min_max_avg error_16)
rrdtool_graph Risk-Severites Amount "${OUTDIR}/severities" \
DEF:sever_crit=${RRDDIR}/gauge-flow_severity_critical.rrd:value:AVERAGE \
DEF:sever_emer=${RRDDIR}/gauge-flow_severity_emergency.rrd:value:AVERAGE \
DEF:sever_high=${RRDDIR}/gauge-flow_severity_high.rrd:value:AVERAGE \
DEF:sever_low=${RRDDIR}/gauge-flow_severity_low.rrd:value:AVERAGE \
DEF:sever_med=${RRDDIR}/gauge-flow_severity_medium.rrd:value:AVERAGE \
DEF:sever_sev=${RRDDIR}/gauge-flow_severity_severe.rrd:value:AVERAGE \
DEF:sever_ukn=${RRDDIR}/gauge-flow_severity_unknown.rrd:value:AVERAGE \
$(rrdtool_graph_colorize_missing_data sever_crit) \
AREA:sever_crit#e68e00::STACK \
AREA:sever_emer#e63200::STACK \
AREA:sever_high#e6c700::STACK \
AREA:sever_low#00e600::STACK \
AREA:sever_med#8fce00::STACK \
AREA:sever_sev#c61b2b::STACK \
AREA:sever_ukn#7e8877::STACK \
LINE2:sever_crit#e68e00:"Critical............." \
$(rrdtool_graph_print_cur_min_max_avg sever_crit) \
LINE2:sever_emer#e63200:"Emergency............" \
$(rrdtool_graph_print_cur_min_max_avg sever_emer) \
LINE2:sever_high#e6c700:"High................." \
$(rrdtool_graph_print_cur_min_max_avg sever_high) \
LINE2:sever_low#00e600:"Low.................." \
$(rrdtool_graph_print_cur_min_max_avg sever_low) \
LINE2:sever_med#8fce00:"Medium..............." \
$(rrdtool_graph_print_cur_min_max_avg sever_med) \
LINE2:sever_sev#c61b2b:"Severe..............." \
$(rrdtool_graph_print_cur_min_max_avg sever_sev) \
LINE2:sever_ukn#7e8877:"Unknown.............." \
$(rrdtool_graph_print_cur_min_max_avg sever_ukn)
rrdtool_graph Risks 'Amount' "${OUTDIR}/risky_events" \
DEF:risk_1=${RRDDIR}/gauge-flow_risk_1_count.rrd:value:AVERAGE \
DEF:risk_2=${RRDDIR}/gauge-flow_risk_2_count.rrd:value:AVERAGE \
DEF:risk_3=${RRDDIR}/gauge-flow_risk_3_count.rrd:value:AVERAGE \
@@ -440,6 +515,10 @@ rrdtool_graph Risky-Events 'Amouunt' "${OUTDIR}/risky_events" \
DEF:risk_47=${RRDDIR}/gauge-flow_risk_47_count.rrd:value:AVERAGE \
DEF:risk_48=${RRDDIR}/gauge-flow_risk_48_count.rrd:value:AVERAGE \
DEF:risk_49=${RRDDIR}/gauge-flow_risk_49_count.rrd:value:AVERAGE \
DEF:risk_50=${RRDDIR}/gauge-flow_risk_50_count.rrd:value:AVERAGE \
DEF:risk_51=${RRDDIR}/gauge-flow_risk_51_count.rrd:value:AVERAGE \
DEF:risk_52=${RRDDIR}/gauge-flow_risk_52_count.rrd:value:AVERAGE \
DEF:risk_53=${RRDDIR}/gauge-flow_risk_53_count.rrd:value:AVERAGE \
DEF:risk_unknown=${RRDDIR}/gauge-flow_risk_unknown_count.rrd:value:AVERAGE \
$(rrdtool_graph_colorize_missing_data risk_1) \
AREA:risk_1#ff0000:"XSS-Attack..............................................." \
@@ -540,5 +619,13 @@ rrdtool_graph Risky-Events 'Amouunt' "${OUTDIR}/risky_events" \
$(rrdtool_graph_print_cur_min_max_avg risk_48) \
STACK:risk_49#dfffdf:"Minor-Issues............................................." \
$(rrdtool_graph_print_cur_min_max_avg risk_49) \
STACK:risk_50#ef20df:"TCP-Connection-Issues...................................." \
$(rrdtool_graph_print_cur_min_max_avg risk_50) \
STACK:risk_51#ef60df:"Fully-Encrypted.........................................." \
$(rrdtool_graph_print_cur_min_max_avg risk_51) \
STACK:risk_52#efa0df:"Invalid-ALPN/SNI-combination............................." \
$(rrdtool_graph_print_cur_min_max_avg risk_52) \
STACK:risk_53#efffdf:"Malware-Host-Contacted..................................." \
$(rrdtool_graph_print_cur_min_max_avg risk_53) \
STACK:risk_unknown#df2060:"Unknown.................................................." \
$(rrdtool_graph_print_cur_min_max_avg risk_unknown)


@@ -97,6 +97,18 @@
Categories
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="risks.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Risks
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="jsons.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">


@@ -97,6 +97,18 @@
Categories
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="risks.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Risks
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="jsons.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
@@ -146,6 +158,25 @@
<img src="detections_past_year.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="confidence_past_hour.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="confidence_past_12hours.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="confidence_past_day.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="confidence_past_week.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="confidence_past_month.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="confidence_past_year.png" class="img-fluid" alt="Responsive image">
</div>
</main>
</div>
</div>


@@ -97,6 +97,18 @@
Categories
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="risks.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Risks
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="jsons.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
@@ -165,25 +177,6 @@
<img src="error_events_past_year.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="risky_events_past_hour.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="risky_events_past_12hours.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="risky_events_past_day.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="risky_events_past_week.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="risky_events_past_month.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="risky_events_past_year.png" class="img-fluid" alt="Responsive image">
</div>
</main>
</div>
</div>


@@ -97,6 +97,18 @@
Categories
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="risks.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Risks
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="jsons.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">


@@ -97,6 +97,18 @@
Categories
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="risks.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Risks
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="jsons.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
@@ -165,6 +177,25 @@
<img src="detections_past_year.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="confidence_past_hour.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="confidence_past_12hours.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="confidence_past_day.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="confidence_past_week.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="confidence_past_month.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="confidence_past_year.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="traffic_past_hour.png" class="img-fluid" alt="Responsive image">
</div>
@@ -336,6 +367,25 @@
<img src="json_bytes_past_year.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="severities_past_hour.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="severities_past_12hours.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="severities_past_day.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="severities_past_week.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="severities_past_month.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="severities_past_year.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="risky_events_past_hour.png" class="img-fluid" alt="Responsive image">
</div>


@@ -97,6 +97,18 @@
Categories
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="risks.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Risks
</a>
</li>
<li class="nav-item">
<a class="nav-link active" href="jsons.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">


@@ -97,6 +97,18 @@
Categories
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="risks.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Risks
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="jsons.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">


@@ -0,0 +1,198 @@
<!DOCTYPE html>
<html lang="en"><head>
<meta http-equiv="cache-control" content="max-age=0" />
<meta http-equiv="cache-control" content="no-cache" />
<meta http-equiv="expires" content="0" />
<meta http-equiv="expires" content="Tue, 01 Jan 1980 1:00:00 GMT" />
<meta http-equiv="pragma" content="no-cache" />
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<meta name="description" content="nDPId RRD Graph">
<meta name="author" content="Toni Uhlig">
<link rel="icon" href="https://getbootstrap.com/docs/4.0/assets/img/favicons/favicon.ico">
<title>nDPId Dashboard</title>
<link rel="canonical" href="https://getbootstrap.com/docs/4.0/examples/dashboard/">
<!-- Bootstrap core CSS -->
<link href="bootstrap.css" rel="stylesheet">
<!-- Custom styles for this template -->
<link href="dashboard.css" rel="stylesheet">
</head>
<body>
<nav class="navbar navbar-dark sticky-top bg-dark flex-md-nowrap p-0">
<a class="navbar-brand col-sm-3 col-md-2 mr-0" href="https://github.com/utoni/nDPId">nDPId Collectd RRD Graph</a>
</nav>
<div class="container-fluid">
<div class="row">
<nav class="col-md-2 d-none d-md-block bg-light sidebar">
<div class="sidebar-sticky">
<h6 class="sidebar-heading d-flex justify-content-between align-items-center px-3 mt-4 mb-1 text-muted">
<span>Graphs</span>
</h6>
<ul class="nav flex-column mb-2">
<li class="nav-item">
<a class="nav-link" href="index.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Home
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="flows.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line><polyline points="10 9 9 9 8 9"></polyline>
</svg>
Flows
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="other.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Other
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="detections.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Detections
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="categories.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Categories
</a>
</li>
<li class="nav-item">
<a class="nav-link active" href="risks.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Risks
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="jsons.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
JSONs
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="events.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Events
</a>
</li>
</ul>
</div>
</nav>
<main role="main" class="col-md-9 ml-sm-auto col-lg-10 pt-3 px-4">
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="severities_past_hour.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="severities_past_12hours.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="severities_past_day.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="severities_past_week.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="severities_past_month.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="severities_past_year.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="risky_events_past_hour.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="risky_events_past_12hours.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="risky_events_past_day.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="risky_events_past_week.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="risky_events_past_month.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="risky_events_past_year.png" class="img-fluid" alt="Responsive image">
</div>
</main>
</div>
</div>
<!-- Bootstrap core JavaScript
================================================== -->
<!-- Placed at the end of the document so the pages load faster -->
<script src="jquery-3.js" integrity="sha384-KJ3o2DKtIkvYIK3UENzmM7KCkRr/rE9/Qpg6aAZGJwFDMVNA/GpGFF93hXpG5KkN" crossorigin="anonymous"></script>
<script>window.jQuery || document.write('<script src="../../assets/js/vendor/jquery-slim.min.js"><\/script>')</script>
<script src="popper.js"></script>
<script src="bootstrap.js"></script>
<!-- Icons -->
<script src="feather.js"></script>
<script>
feather.replace()
</script>
</body></html>

File diff suppressed because it is too large.

File diff suppressed because it is too large.


@@ -1,135 +0,0 @@
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>
#include "config.h"
#include "jsmn.h"
static char serv_listen_addr[INET_ADDRSTRLEN] = DISTRIBUTOR_HOST;
static uint16_t serv_listen_port = DISTRIBUTOR_PORT;
int main(void)
{
int sockfd = socket(AF_INET, SOCK_STREAM, 0);
struct sockaddr_in remote_addr = {};
socklen_t remote_addrlen = sizeof(remote_addr);
uint8_t buf[NETWORK_BUFFER_MAX_SIZE];
size_t buf_used = 0;
size_t json_start = 0;
unsigned long long int json_bytes = 0;
jsmn_parser parser;
jsmntok_t tokens[128];
if (sockfd < 0)
{
perror("socket");
return 1;
}
remote_addr.sin_family = AF_INET;
if (inet_pton(AF_INET, &serv_listen_addr[0], &remote_addr.sin_addr) != 1)
{
perror("inet_pton");
return 1;
}
remote_addr.sin_port = htons(serv_listen_port);
if (connect(sockfd, (struct sockaddr *)&remote_addr, remote_addrlen) != 0)
{
perror("connect");
return 1;
}
while (1)
{
errno = 0;
ssize_t bytes_read = read(sockfd, buf + buf_used, sizeof(buf) - buf_used);
if (bytes_read <= 0 || errno != 0)
{
fprintf(stderr, "Remote end disconnected.\n");
break;
}
buf_used += bytes_read;
while (buf_used >= NETWORK_BUFFER_LENGTH_DIGITS + 1)
{
if (buf[NETWORK_BUFFER_LENGTH_DIGITS] != '{')
{
fprintf(stderr, "BUG: JSON invalid opening character: '%c'\n", buf[NETWORK_BUFFER_LENGTH_DIGITS]);
exit(1);
}
char * json_str_start = NULL;
json_bytes = strtoull((char *)buf, &json_str_start, 10);
json_bytes += (uint8_t *)json_str_start - buf;
json_start = (uint8_t *)json_str_start - buf;
if (errno == ERANGE)
{
fprintf(stderr, "BUG: Size of JSON exceeds limit\n");
exit(1);
}
if ((uint8_t *)json_str_start == buf)
{
fprintf(stderr, "BUG: Missing size before JSON string: \"%.*s\"\n", NETWORK_BUFFER_LENGTH_DIGITS, buf);
exit(1);
}
if (json_bytes > sizeof(buf))
{
fprintf(stderr, "BUG: JSON string too big: %llu > %zu\n", json_bytes, sizeof(buf));
exit(1);
}
if (json_bytes > buf_used)
{
break;
}
if (buf[json_bytes - 2] != '}' || buf[json_bytes - 1] != '\n')
{
fprintf(stderr, "BUG: Invalid JSON string: \"%.*s\"\n", (int)json_bytes, buf);
exit(1);
}
int r;
jsmn_init(&parser);
r = jsmn_parse(&parser,
(char *)(buf + json_start),
json_bytes - json_start,
tokens,
sizeof(tokens) / sizeof(tokens[0]));
if (r < 1 || tokens[0].type != JSMN_OBJECT)
{
fprintf(stderr, "JSON parsing failed with return value %d at position %u\n", r, parser.pos);
fprintf(stderr, "JSON string: '%.*s'\n", (int)(json_bytes - json_start), (char *)(buf + json_start));
exit(1);
}
for (int i = 1; i < r; i++)
{
if (i % 2 == 1)
{
#ifdef JSMN_PARENT_LINKS
printf("[%d][%d]", i, tokens[i].parent);
#endif
printf("[%.*s : ", tokens[i].end - tokens[i].start, (char *)(buf + json_start) + tokens[i].start);
}
else
{
printf("%.*s] ", tokens[i].end - tokens[i].start, (char *)(buf + json_start) + tokens[i].start);
}
}
printf("EoF\n");
memmove(buf, buf + json_bytes, buf_used - json_bytes);
buf_used -= json_bytes;
}
}
return 0;
}


@@ -1,7 +1,6 @@
#include <dbus-1.0/dbus/dbus.h>
#include <signal.h>
#include <stdint.h>
#include <syslog.h>
#include "nDPIsrvd.h"
#include "utstring.h"
@@ -23,7 +22,7 @@ enum dbus_level
static char const * const flow_severities[] = {"Low", "Medium", "High", "Severe", "Critical", "Emergency"};
static char const * const flow_breeds[] = {
"Safe", "Acceptable", "Fun", "Unsafe", "Potentially Dangerous", "Tracker\\/Ads", "Dangerous", "Unrated", "???"};
"Safe", "Acceptable", "Fun", "Unsafe", "Potentially_Dangerous", "Tracker_Ads", "Dangerous", "Unrated", "???"};
static char const * const flow_categories[] = {"Unspecified",
"Media",
"VPN",
@@ -61,7 +60,59 @@ static char const * const flow_categories[] = {"Unspecified",
"Site_Unavailable",
"Allowed_Site",
"Antimalware",
"Crypto_Currency"};
"Crypto_Currency",
"Gambling",
"Health",
"ArtifIntelligence",
"Finance",
"News",
"Sport",
"Business",
"Internet",
"Blockchain_Crypto",
"Blog_Forum",
"Government",
"Education",
"CDN_Proxy",
"Hw_Sw",
"Dating",
"Travel",
"Food",
"Bots",
"Scanners",
"Hosting",
"Art",
"Fashion",
"Books",
"Science",
"Maps_Navigation",
"Login_Portal",
"Legal",
"Environmental_Services",
"Culture",
"Housing",
"Telecommunication",
"Transportation",
"Design",
"Employment",
"Events",
"Weather",
"Lifestyle",
"Real_Estate",
"Security",
"Environment",
"Hobby",
"Computer_Science",
"Construction",
"Engineering",
"Religion",
"Entertainment",
"Agriculture",
"Technology",
"Beauty",
"History",
"Politics",
"Vehicles"};
static uint8_t desired_flow_severities[nDPIsrvd_ARRAY_LENGTH(flow_severities)] = {};
static uint8_t desired_flow_breeds[nDPIsrvd_ARRAY_LENGTH(flow_breeds)] = {};
@@ -75,6 +126,29 @@ static int main_thread_shutdown = 0;
static char * pidfile = NULL;
static char * serv_optarg = NULL;
#ifdef ENABLE_MEMORY_PROFILING
void nDPIsrvd_memprof_log_alloc(size_t alloc_size)
{
(void)alloc_size;
}
void nDPIsrvd_memprof_log_free(size_t free_size)
{
(void)free_size;
}
void nDPIsrvd_memprof_log(char const * const format, ...)
{
va_list ap;
va_start(ap, format);
fprintf(stderr, "%s", "nDPIsrvd MemoryProfiler: ");
vfprintf(stderr, format, ap);
fprintf(stderr, "%s\n", "");
va_end(ap);
}
#endif
static void send_to_dbus(char const * const icon,
char const * const urgency,
enum dbus_level level,
@@ -162,8 +236,7 @@ static void check_value(char const * const possible_values[],
{
if (get_value_index(possible_values, possible_values_size, needle, needle_len) == -1)
{
syslog(LOG_DAEMON | LOG_ERR, "BUG: Unknown value: %.*s", (int)needle_len, needle);
notifyf(DBUS_CRITICAL, "BUG", 5000, "Unknown value: %.*s", (int)needle_len, needle);
notifyf(DBUS_CRITICAL, "nDPIsrvd-notifyd", 5000, "BUG: Unknown value: %.*s", (int)needle_len, needle);
}
}
@@ -379,7 +452,7 @@ static void print_usage(char const * const arg0)
static int set_defaults(void)
{
char const * const default_severities[] = {"High", "Severe", "Critical", "Emergency"};
char const * const default_breeds[] = {"Unsafe", "Potentially Dangerous", "Dangerous", "Unrated"};
char const * const default_breeds[] = {"Unsafe", "Potentially_Dangerous", "Dangerous", "Unrated"};
char const * const default_categories[] = {"Mining", "Malware", "Banned_Site", "Crypto_Currency"};
for (size_t i = 0; i < nDPIsrvd_ARRAY_LENGTH(default_severities); ++i)
@@ -428,7 +501,7 @@ static int parse_options(int argc, char ** argv, struct nDPIsrvd_socket * const
{
int opt, force_defaults = 1;
while ((opt = getopt(argc, argv, "hdp:s:C:B:S:")) != -1)
while ((opt = getopt(argc, argv, "hldp:s:C:B:S:")) != -1)
{
switch (opt)
{
@@ -499,8 +572,7 @@ static int parse_options(int argc, char ** argv, struct nDPIsrvd_socket * const
if (force_defaults != 0 && set_defaults() != 0)
{
fprintf(stderr, "%s\n", "BUG: Could not set default values.");
syslog(LOG_DAEMON | LOG_ERR, "%s\n", "BUG: Could not set default values.");
notifyf(DBUS_CRITICAL, "nDPIsrvd-notifyd", 5000, "%s\n", "BUG: Could not set default values.");
return 1;
}
@@ -511,13 +583,13 @@ static int parse_options(int argc, char ** argv, struct nDPIsrvd_socket * const
if (nDPIsrvd_setup_address(&sock->address, serv_optarg) != 0)
{
syslog(LOG_DAEMON | LOG_ERR, "Could not parse address `%s'", serv_optarg);
notifyf(DBUS_CRITICAL, "nDPIsrvd-notifyd", 3000, "Could not parse address `%s'\n", serv_optarg);
return 1;
}
if (optind < argc)
{
syslog(LOG_DAEMON | LOG_ERR, "%s", "Unexpected argument after options");
notifyf(DBUS_CRITICAL, "nDPIsrvd-notifyd", 3000, "%s\n", "Unexpected argument after options");
return 1;
}
@@ -548,13 +620,11 @@ int main(int argc, char ** argv)
signal(SIGTERM, sighandler);
signal(SIGPIPE, SIG_IGN);
openlog("nDPIsrvd-notifyd", LOG_CONS, LOG_DAEMON);
struct nDPIsrvd_socket * sock =
nDPIsrvd_socket_init(0, 0, 0, sizeof(struct flow_user_data), notifyd_json_callback, NULL, NULL);
if (sock == NULL)
{
syslog(LOG_DAEMON | LOG_ERR, "%s", "nDPIsrvd socket memory allocation failed!");
notifyf(DBUS_CRITICAL, "nDPIsrvd-notifyd", 5000, "%s\n", "BUG: nDPIsrvd socket memory allocation failed!");
return 1;
}
@@ -579,8 +649,8 @@ int main(int argc, char ** argv)
}
if (previous_connect_succeeded != 0)
{
notifyf(DBUS_CRITICAL, "nDPIsrvd-notifyd", 3000, "nDPIsrvd socket connect to %s failed!", serv_optarg);
syslog(LOG_DAEMON | LOG_ERR, "nDPIsrvd socket connect to %s failed!", serv_optarg);
notifyf(
DBUS_CRITICAL, "nDPIsrvd-notifyd", 3000, "nDPIsrvd socket connect to %s failed!\n", serv_optarg);
previous_connect_succeeded = 0;
}
nDPIsrvd_socket_close(sock);
@@ -591,12 +661,11 @@ int main(int argc, char ** argv)
if (nDPIsrvd_set_read_timeout(sock, 3, 0) != 0)
{
syslog(LOG_DAEMON | LOG_ERR, "nDPIsrvd set read timeout failed: %s", strerror(errno));
notifyf(DBUS_CRITICAL, "nDPIsrvd-notifyd", 3000, "nDPIsrvd set read timeout failed: %s\n", strerror(errno));
goto failure;
}
notifyf(DBUS_NORMAL, "nDPIsrvd-notifyd", 3000, "Connected to '%s'", serv_optarg);
syslog(LOG_DAEMON | LOG_NOTICE, "%s", "Initialization succeeded.");
notifyf(DBUS_NORMAL, "nDPIsrvd-notifyd", 3000, "Connected to '%s'\n", serv_optarg);
while (main_thread_shutdown == 0)
{
@@ -611,25 +680,26 @@ int main(int argc, char ** argv)
}
if (read_ret != READ_OK)
{
notifyf(DBUS_CRITICAL, "nDPIsrvd-notifyd", 3000, "nDPIsrvd socket read from %s failed!", serv_optarg);
syslog(LOG_DAEMON | LOG_ERR, "nDPIsrvd socket read from %s failed!", serv_optarg);
notifyf(DBUS_CRITICAL, "nDPIsrvd-notifyd", 3000, "nDPIsrvd socket read from %s failed!\n", serv_optarg);
break;
}
enum nDPIsrvd_parse_return parse_ret = nDPIsrvd_parse_all(sock);
if (parse_ret != PARSE_NEED_MORE_DATA)
{
syslog(LOG_DAEMON | LOG_ERR,
"Could not parse json string %s: %.*s\n",
nDPIsrvd_enum_to_string(parse_ret),
nDPIsrvd_json_buffer_length(sock),
nDPIsrvd_json_buffer_string(sock));
notifyf(DBUS_CRITICAL,
"nDPIsrvd-notifyd",
3000,
"Could not parse JSON message %s: %.*s\n",
nDPIsrvd_enum_to_string(parse_ret),
nDPIsrvd_json_buffer_length(sock),
nDPIsrvd_json_buffer_string(sock));
break;
}
}
nDPIsrvd_socket_close(sock);
notifyf(DBUS_NORMAL, "nDPIsrvd-notifyd", 3000, "Disconnected from '%s'.", serv_optarg);
notifyf(DBUS_NORMAL, "nDPIsrvd-notifyd", 3000, "Disconnected from '%s'.\n", serv_optarg);
if (main_thread_shutdown == 0)
{
sleep(SLEEP_TIME_IN_S);
@@ -639,7 +709,6 @@ int main(int argc, char ** argv)
failure:
nDPIsrvd_socket_free(&sock);
daemonize_shutdown(pidfile);
closelog();
return 0;
}


@@ -253,7 +253,7 @@ int main(int argc, char ** argv)
enum nDPIsrvd_parse_return parse_ret = nDPIsrvd_parse_all(sock);
if (parse_ret != PARSE_NEED_MORE_DATA)
{
printf("Could not parse json string %s: %.*s\n",
printf("Could not parse JSON message %s: %.*s\n",
nDPIsrvd_enum_to_string(parse_ret),
nDPIsrvd_json_buffer_length(sock),
nDPIsrvd_json_buffer_string(sock));

(binary image file changed; new version: 62 KiB)


@@ -1,3 +0,0 @@
body {
background: black;
}


@@ -1,300 +0,0 @@
#!/usr/bin/env python3
import multiprocessing
import os
import sys
import time
sys.path.append(os.path.dirname(sys.argv[0]) + '/../../dependencies')
sys.path.append(os.path.dirname(sys.argv[0]) + '/../share/nDPId')
sys.path.append(os.path.dirname(sys.argv[0]))
sys.path.append(sys.base_prefix + '/share/nDPId')
import nDPIsrvd
from nDPIsrvd import nDPIsrvdSocket
import plotly_dash
FLOW_RISK_SEVERE = 4
FLOW_RISK_HIGH = 3
FLOW_RISK_MEDIUM = 2
FLOW_RISK_LOW = 1
def nDPIsrvd_worker_onFlowCleanup(instance, current_flow, global_user_data):
_, shared_flow_dict = global_user_data
flow_key = current_flow.flow_key
shared_flow_dict['current-flows'] -= 1
if flow_key not in shared_flow_dict:
return True
shared_flow_dict['total-l4-bytes'] += shared_flow_dict[flow_key]['total-l4-bytes']
if shared_flow_dict[flow_key]['is_detected'] is True:
shared_flow_dict['current-detected-flows'] -= 1
if shared_flow_dict[flow_key]['is_guessed'] is True:
shared_flow_dict['current-guessed-flows'] -= 1
if shared_flow_dict[flow_key]['is_not_detected'] is True:
shared_flow_dict['current-not-detected-flows'] -= 1
if shared_flow_dict[flow_key]['is_midstream'] is True:
shared_flow_dict['current-midstream-flows'] -= 1
if shared_flow_dict[flow_key]['is_risky'] > 0:
shared_flow_dict['current-risky-flows'] -= 1
if shared_flow_dict[flow_key]['is_risky'] == FLOW_RISK_LOW:
shared_flow_dict['current-risky-flows-low'] -= 1
elif shared_flow_dict[flow_key]['is_risky'] == FLOW_RISK_MEDIUM:
shared_flow_dict['current-risky-flows-medium'] -= 1
elif shared_flow_dict[flow_key]['is_risky'] == FLOW_RISK_HIGH:
shared_flow_dict['current-risky-flows-high'] -= 1
elif shared_flow_dict[flow_key]['is_risky'] == FLOW_RISK_SEVERE:
shared_flow_dict['current-risky-flows-severe'] -= 1
del shared_flow_dict[current_flow.flow_key]
return True
def nDPIsrvd_worker_onJsonLineRecvd(json_dict, instance, current_flow, global_user_data):
nsock, shared_flow_dict = global_user_data
shared_flow_dict['total-events'] += 1
shared_flow_dict['total-json-bytes'] = nsock.received_bytes
if 'error_event_name' in json_dict:
shared_flow_dict['total-base-events'] += 1
if 'daemon_event_name' in json_dict:
shared_flow_dict['total-daemon-events'] += 1
if 'packet_event_name' in json_dict and \
(json_dict['packet_event_name'] == 'packet' or \
json_dict['packet_event_name'] == 'packet-flow'):
shared_flow_dict['total-packet-events'] += 1
if 'flow_id' not in json_dict:
return True
else:
flow_key = json_dict['alias'] + '-' + json_dict['source'] + '-' + str(json_dict['flow_id'])
if flow_key not in shared_flow_dict:
current_flow.flow_key = flow_key
shared_flow_dict[flow_key] = mgr.dict()
shared_flow_dict[flow_key]['is_detected'] = False
shared_flow_dict[flow_key]['is_guessed'] = False
shared_flow_dict[flow_key]['is_not_detected'] = False
shared_flow_dict[flow_key]['is_midstream'] = False
shared_flow_dict[flow_key]['is_risky'] = 0
shared_flow_dict[flow_key]['total-l4-bytes'] = 0
shared_flow_dict[flow_key]['json'] = mgr.dict()
shared_flow_dict['total-flows'] += 1
shared_flow_dict['current-flows'] += 1
if current_flow.flow_key != flow_key:
return False
if 'flow_src_tot_l4_payload_len' in json_dict and 'flow_dst_tot_l4_payload_len' in json_dict:
shared_flow_dict[flow_key]['total-l4-bytes'] = json_dict['flow_src_tot_l4_payload_len'] + \
json_dict['flow_dst_tot_l4_payload_len']
if 'midstream' in json_dict and json_dict['midstream'] != 0:
if shared_flow_dict[flow_key]['is_midstream'] is False:
shared_flow_dict['total-midstream-flows'] += 1
shared_flow_dict['current-midstream-flows'] += 1
shared_flow_dict[flow_key]['is_midstream'] = True
if 'ndpi' in json_dict:
shared_flow_dict[flow_key]['json']['ndpi'] = json_dict['ndpi']
if 'flow_risk' in json_dict['ndpi']:
if shared_flow_dict[flow_key]['is_risky'] == 0:
shared_flow_dict['total-risky-flows'] += 1
shared_flow_dict['current-risky-flows'] += 1
severity = shared_flow_dict[flow_key]['is_risky']
if severity == FLOW_RISK_LOW:
shared_flow_dict['current-risky-flows-low'] -= 1
elif severity == FLOW_RISK_MEDIUM:
shared_flow_dict['current-risky-flows-medium'] -= 1
elif severity == FLOW_RISK_HIGH:
shared_flow_dict['current-risky-flows-high'] -= 1
elif severity == FLOW_RISK_SEVERE:
shared_flow_dict['current-risky-flows-severe'] -= 1
for key in json_dict['ndpi']['flow_risk']:
if json_dict['ndpi']['flow_risk'][key]['severity'] == 'Low':
severity = max(severity, FLOW_RISK_LOW)
elif json_dict['ndpi']['flow_risk'][key]['severity'] == 'Medium':
severity = max(severity, FLOW_RISK_MEDIUM)
elif json_dict['ndpi']['flow_risk'][key]['severity'] == 'High':
severity = max(severity, FLOW_RISK_HIGH)
elif json_dict['ndpi']['flow_risk'][key]['severity'] == 'Severe':
severity = max(severity, FLOW_RISK_SEVERE)
else:
raise RuntimeError('Invalid flow risk severity: {}'.format(
json_dict['ndpi']['flow_risk'][key]['severity']))
shared_flow_dict[flow_key]['is_risky'] = severity
if severity == FLOW_RISK_LOW:
shared_flow_dict['current-risky-flows-low'] += 1
elif severity == FLOW_RISK_MEDIUM:
shared_flow_dict['current-risky-flows-medium'] += 1
elif severity == FLOW_RISK_HIGH:
shared_flow_dict['current-risky-flows-high'] += 1
elif severity == FLOW_RISK_SEVERE:
shared_flow_dict['current-risky-flows-severe'] += 1
if 'flow_event_name' not in json_dict:
return True
if json_dict['flow_state'] == 'finished' and \
json_dict['ndpi']['proto'] != 'Unknown' and \
shared_flow_dict[flow_key]['is_detected'] is False:
shared_flow_dict['total-detected-flows'] += 1
shared_flow_dict['current-detected-flows'] += 1
shared_flow_dict[flow_key]['is_detected'] = True
if json_dict['flow_event_name'] == 'new':
shared_flow_dict['total-flow-new-events'] += 1
elif json_dict['flow_event_name'] == 'update':
shared_flow_dict['total-flow-update-events'] += 1
elif json_dict['flow_event_name'] == 'analyse':
shared_flow_dict['total-flow-analyse-events'] += 1
elif json_dict['flow_event_name'] == 'end':
shared_flow_dict['total-flow-end-events'] += 1
elif json_dict['flow_event_name'] == 'idle':
shared_flow_dict['total-flow-idle-events'] += 1
elif json_dict['flow_event_name'] == 'guessed':
shared_flow_dict['total-flow-guessed-events'] += 1
if shared_flow_dict[flow_key]['is_guessed'] is False:
shared_flow_dict['total-guessed-flows'] += 1
shared_flow_dict['current-guessed-flows'] += 1
shared_flow_dict[flow_key]['is_guessed'] = True
elif json_dict['flow_event_name'] == 'not-detected':
shared_flow_dict['total-flow-not-detected-events'] += 1
if shared_flow_dict[flow_key]['is_not_detected'] is False:
shared_flow_dict['total-not-detected-flows'] += 1
shared_flow_dict['current-not-detected-flows'] += 1
shared_flow_dict[flow_key]['is_not_detected'] = True
elif json_dict['flow_event_name'] == 'detected' or \
json_dict['flow_event_name'] == 'detection-update':
if json_dict['flow_event_name'] == 'detection-update':
shared_flow_dict['total-flow-detection-update-events'] += 1
else:
shared_flow_dict['total-flow-detected-events'] += 1
if shared_flow_dict[flow_key]['is_detected'] is False:
shared_flow_dict['total-detected-flows'] += 1
shared_flow_dict['current-detected-flows'] += 1
shared_flow_dict[flow_key]['is_detected'] = True
if shared_flow_dict[flow_key]['is_guessed'] is True:
shared_flow_dict['total-guessed-flows'] -= 1
shared_flow_dict['current-guessed-flows'] -= 1
shared_flow_dict[flow_key]['is_guessed'] = False
return True
def nDPIsrvd_worker(address, shared_flow_dict):
sys.stderr.write('Recv buffer size: {}\n'
.format(nDPIsrvd.NETWORK_BUFFER_MAX_SIZE))
sys.stderr.write('Connecting to {} ..\n'
.format(address[0]+':'+str(address[1])
if type(address) is tuple else address))
try:
while True:
try:
nsock = nDPIsrvdSocket()
nsock.connect(address)
nsock.loop(nDPIsrvd_worker_onJsonLineRecvd,
nDPIsrvd_worker_onFlowCleanup,
(nsock, shared_flow_dict))
except nDPIsrvd.SocketConnectionBroken:
sys.stderr.write('Lost connection to {} .. reconnecting\n'
.format(address[0]+':'+str(address[1])
if type(address) is tuple else address))
time.sleep(1.0)
except KeyboardInterrupt:
pass
if __name__ == '__main__':
argparser = nDPIsrvd.defaultArgumentParser()
argparser.add_argument('--listen-address', type=str, default='127.0.0.1', help='Plotly listen address')
argparser.add_argument('--listen-port', type=str, default=8050, help='Plotly listen port')
args = argparser.parse_args()
address = nDPIsrvd.validateAddress(args)
mgr = multiprocessing.Manager()
shared_flow_dict = mgr.dict()
shared_flow_dict['total-events'] = 0
shared_flow_dict['total-flow-new-events'] = 0
shared_flow_dict['total-flow-update-events'] = 0
shared_flow_dict['total-flow-analyse-events'] = 0
shared_flow_dict['total-flow-end-events'] = 0
shared_flow_dict['total-flow-idle-events'] = 0
shared_flow_dict['total-flow-detected-events'] = 0
shared_flow_dict['total-flow-detection-update-events'] = 0
shared_flow_dict['total-flow-guessed-events'] = 0
shared_flow_dict['total-flow-not-detected-events'] = 0
shared_flow_dict['total-packet-events'] = 0
shared_flow_dict['total-base-events'] = 0
shared_flow_dict['total-daemon-events'] = 0
shared_flow_dict['total-json-bytes'] = 0
shared_flow_dict['total-l4-bytes'] = 0
shared_flow_dict['total-flows'] = 0
shared_flow_dict['total-detected-flows'] = 0
shared_flow_dict['total-risky-flows'] = 0
shared_flow_dict['total-midstream-flows'] = 0
shared_flow_dict['total-guessed-flows'] = 0
shared_flow_dict['total-not-detected-flows'] = 0
shared_flow_dict['current-flows'] = 0
shared_flow_dict['current-detected-flows'] = 0
shared_flow_dict['current-midstream-flows'] = 0
shared_flow_dict['current-guessed-flows'] = 0
shared_flow_dict['current-not-detected-flows'] = 0
shared_flow_dict['current-risky-flows'] = 0
shared_flow_dict['current-risky-flows-severe'] = 0
shared_flow_dict['current-risky-flows-high'] = 0
shared_flow_dict['current-risky-flows-medium'] = 0
shared_flow_dict['current-risky-flows-low'] = 0
nDPIsrvd_job = multiprocessing.Process(target=nDPIsrvd_worker,
args=(address, shared_flow_dict))
nDPIsrvd_job.start()
web_job = multiprocessing.Process(target=plotly_dash.web_worker,
args=(shared_flow_dict, args.listen_address, args.listen_port))
web_job.start()
nDPIsrvd_job.join()
web_job.terminate()
web_job.join()
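The per-flow risk bookkeeping above reduces all risk entries of a flow to the highest severity seen so far, never lowering an already recorded value. Stripped of the shared-dict plumbing, that reduction is (constants and the `'severity'` field name as in the script above; the helper name is illustrative):

```python
FLOW_RISK_SEVERE = 4
FLOW_RISK_HIGH = 3
FLOW_RISK_MEDIUM = 2
FLOW_RISK_LOW = 1

SEVERITY_MAP = {'Low': FLOW_RISK_LOW, 'Medium': FLOW_RISK_MEDIUM,
                'High': FLOW_RISK_HIGH, 'Severe': FLOW_RISK_SEVERE}

def max_severity(flow_risk, current=0):
    """Return the highest severity across a flow's risk entries,
    starting from the severity already recorded for the flow."""
    for risk in flow_risk.values():
        severity = SEVERITY_MAP.get(risk['severity'])
        if severity is None:
            raise RuntimeError('Invalid flow risk severity: {}'.format(risk['severity']))
        current = max(current, severity)
    return current

max_severity({'39': {'severity': 'Low'}, '7': {'severity': 'High'}})  # 3
```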


@@ -1,415 +0,0 @@
import math
import dash
try:
from dash import dcc
except ImportError:
import dash_core_components as dcc
try:
from dash import html
except ImportError:
import dash_html_components as html
try:
from dash import dash_table as dt
except ImportError:
import dash_table as dt
from dash.dependencies import Input, Output, State
import dash_daq as daq
import plotly.graph_objects as go
global shared_flow_dict
app = dash.Dash(__name__)
def generate_box():
return {
'display': 'flex', 'flex-direction': 'row',
'background-color': '#082255'
}
def generate_led_display(div_id, label_name):
return daq.LEDDisplay(
id=div_id,
label={'label': label_name, 'style': {'color': '#C4CDD5'}},
labelPosition='bottom',
value='0',
backgroundColor='#082255',
color='#C4CDD5',
)
def generate_gauge(div_id, label_name, max_value=10):
return daq.Gauge(
id=div_id,
value=0,
label={'label': label_name, 'style': {'color': '#C4CDD5'}},
max=max_value,
min=0,
)
def build_gauge(key, max_value=100):
gauge_max = int(max(max_value,
shared_flow_dict[key]))
grad_green = [0, int(gauge_max * 1/3)]
grad_yellow = [int(gauge_max * 1/3), int(gauge_max * 2/3)]
grad_red = [int(gauge_max * 2/3), gauge_max]
grad_dict = {
"gradient":True,
"ranges":{
"green":grad_green,
"yellow":grad_yellow,
"red":grad_red
}
}
return shared_flow_dict[key], gauge_max, grad_dict
def build_piechart(labels, values, color_map=None):
lay = dict(
plot_bgcolor = '#082255',
paper_bgcolor = '#082255',
font={"color": "#fff"},
uirevision=True,
autosize=True,
height=250,
margin = {'autoexpand': True, 'b': 0, 'l': 0, 'r': 0, 't': 0, 'pad': 0},
width = 500,
uniformtext_minsize = 12,
uniformtext_mode = 'hide',
)
return go.Figure(layout=lay, data=[go.Pie(labels=labels, values=values, sort=False, marker_colors=color_map, textinfo='percent', textposition='inside')])
COLOR_MAP = {
'piechart-flows': ['rgb(153, 153, 255)', 'rgb(153, 204, 255)', 'rgb(255, 204, 153)', 'rgb(255, 255, 255)'],
'piechart-midstream-flows': ['rgb(255, 255, 153)', 'rgb(153, 153, 255)'],
'piechart-risky-flows': ['rgb(255, 0, 0)', 'rgb(255, 128, 0)', 'rgb(255, 255, 0)', 'rgb(128, 255, 0)', 'rgb(153, 153, 255)'],
'graph-flows': {'Current Active Flows': {'color': 'rgb(153, 153, 255)', 'width': 1},
'Current Risky Flows': {'color': 'rgb(255, 153, 153)', 'width': 3},
'Current Midstream Flows': {'color': 'rgb(255, 255, 153)', 'width': 3},
'Current Guessed Flows': {'color': 'rgb(153, 204, 255)', 'width': 1},
'Current Not-Detected Flows': {'color': 'rgb(255, 204, 153)', 'width': 1},
'Current Unclassified Flows': {'color': 'rgb(255, 255, 255)', 'width': 1},
},
}
def generate_tab_flow():
return html.Div([
html.Div(children=[
dcc.Interval(id="tab-flow-default-interval", interval=1 * 2000, n_intervals=0),
html.Div(children=[
dt.DataTable(
id='table-info',
columns=[{'id': c.lower(), 'name': c, 'editable': False}
for c in ['Name', 'Total']],
style_header={
'backgroundColor': '#082233',
'color': 'white'
},
style_data={
'backgroundColor': '#082244',
'color': 'white'
},
)
], style={'display': 'flex', 'flex-direction': 'row'}),
html.Div(children=[
dcc.Graph(
id='piechart-flows',
config={
'displayModeBar': False,
},
figure=build_piechart(['Detected', 'Guessed', 'Not-Detected', 'Unclassified'],
[0, 0, 0, 0], COLOR_MAP['piechart-flows']),
),
], style={'padding': 10, 'flex': 1}),
html.Div(children=[
dcc.Graph(
id='piechart-midstream-flows',
config={
'displayModeBar': False,
},
figure=build_piechart(['Midstream', 'Not Midstream'],
[0, 0], COLOR_MAP['piechart-midstream-flows']),
),
], style={'padding': 10, 'flex': 1}),
html.Div(children=[
dcc.Graph(
id='piechart-risky-flows',
config={
'displayModeBar': False,
},
figure=build_piechart(['Severe Risk', 'High Risk', 'Medium Risk', 'Low Risk', 'No Risk'],
[0, 0, 0, 0, 0], COLOR_MAP['piechart-risky-flows']),
),
], style={'padding': 10, 'flex': 1}),
], style=generate_box()),
html.Div(children=[
dcc.Interval(id="tab-flow-graph-interval", interval=4 * 1000, n_intervals=0),
dcc.Store(id="graph-traces"),
html.Div(children=[
dcc.Graph(
id="graph-flows",
config={
'displayModeBar': True,
'displaylogo': False,
},
style={'height':'60vh'},
),
], style={'padding': 10, 'flex': 1})
], style=generate_box())
])
def generate_tab_other():
return html.Div([
html.Div(children=[
dcc.Interval(id="tab-other-default-interval", interval=1 * 2000, n_intervals=0),
html.Div(children=[
dcc.Graph(
id='piechart-events',
config={
'displayModeBar': False,
},
),
], style={'padding': 10, 'flex': 1}),
], style=generate_box())
])
TABS_STYLES = {
'height': '34px'
}
TAB_STYLE = {
'borderBottom': '1px solid #d6d6d6',
'backgroundColor': '#385285',
'padding': '6px',
'fontWeight': 'bold',
}
TAB_SELECTED_STYLE = {
'borderTop': '1px solid #d6d6d6',
'borderBottom': '1px solid #d6d6d6',
'backgroundColor': '#119DFF',
'color': 'white',
'padding': '6px'
}
app.layout = html.Div([
dcc.Tabs(id="tabs-flow-dash", value="tab-flows", children=[
dcc.Tab(label="Flow", value="tab-flows", style=TAB_STYLE,
selected_style=TAB_SELECTED_STYLE,
children=generate_tab_flow()),
dcc.Tab(label="Other", value="tab-other", style=TAB_STYLE,
selected_style=TAB_SELECTED_STYLE,
children=generate_tab_other()),
], style=TABS_STYLES),
html.Div(id="tabs-content")
])
def prettifyBytes(bytes_received):
size_names = ['B', 'KB', 'MB', 'GB', 'TB']
if bytes_received == 0:
i = 0
else:
i = min(int(math.floor(math.log(bytes_received, 1024))), len(size_names) - 1)
p = math.pow(1024, i)
s = round(bytes_received / p, 2)
return '{:.2f} {}'.format(s, size_names[i])
@app.callback(output=[Output('table-info', 'data'),
Output('piechart-flows', 'figure'),
Output('piechart-midstream-flows', 'figure'),
Output('piechart-risky-flows', 'figure')],
inputs=[Input('tab-flow-default-interval', 'n_intervals')])
def tab_flow_update_components(n):
return [[{'name': 'JSON Events', 'total': shared_flow_dict['total-events']},
{'name': 'JSON Bytes', 'total': prettifyBytes(shared_flow_dict['total-json-bytes'])},
{'name': 'Layer4 Bytes', 'total': prettifyBytes(shared_flow_dict['total-l4-bytes'])},
{'name': 'Flows', 'total': shared_flow_dict['total-flows']},
{'name': 'Risky Flows', 'total': shared_flow_dict['total-risky-flows']},
{'name': 'Midstream Flows', 'total': shared_flow_dict['total-midstream-flows']},
{'name': 'Guessed Flows', 'total': shared_flow_dict['total-guessed-flows']},
{'name': 'Not Detected Flows', 'total': shared_flow_dict['total-not-detected-flows']}],
build_piechart(['Detected', 'Guessed', 'Not-Detected', 'Unclassified'],
[shared_flow_dict['current-detected-flows'],
shared_flow_dict['current-guessed-flows'],
shared_flow_dict['current-not-detected-flows'],
shared_flow_dict['current-flows']
- shared_flow_dict['current-detected-flows']
- shared_flow_dict['current-guessed-flows']
- shared_flow_dict['current-not-detected-flows']],
COLOR_MAP['piechart-flows']),
build_piechart(['Midstream', 'Not Midstream'],
[shared_flow_dict['current-midstream-flows'],
shared_flow_dict['current-flows'] -
shared_flow_dict['current-midstream-flows']],
COLOR_MAP['piechart-midstream-flows']),
build_piechart(['Severe', 'High', 'Medium', 'Low', 'No Risk'],
[shared_flow_dict['current-risky-flows-severe'],
shared_flow_dict['current-risky-flows-high'],
shared_flow_dict['current-risky-flows-medium'],
shared_flow_dict['current-risky-flows-low'],
shared_flow_dict['current-flows'] -
shared_flow_dict['current-risky-flows']],
COLOR_MAP['piechart-risky-flows'])]
@app.callback(output=[Output('graph-flows', 'figure'),
Output('graph-traces', 'data')],
inputs=[Input('tab-flow-graph-interval', 'n_intervals'),
Input('tab-flow-graph-interval', 'interval')],
state=[State('graph-traces', 'data')])
def tab_flow_update_graph(n, i, traces):
if traces is None:
traces = ([], [], [], [], [], [])
max_bins = 75
traces[0].append(shared_flow_dict['current-flows'])
traces[1].append(shared_flow_dict['current-risky-flows'])
traces[2].append(shared_flow_dict['current-midstream-flows'])
traces[3].append(shared_flow_dict['current-guessed-flows'])
traces[4].append(shared_flow_dict['current-not-detected-flows'])
traces[5].append(shared_flow_dict['current-flows']
- shared_flow_dict['current-detected-flows']
- shared_flow_dict['current-guessed-flows']
- shared_flow_dict['current-not-detected-flows'])
if len(traces[0]) > max_bins:
traces[0] = traces[0][1:]
traces[1] = traces[1][1:]
traces[2] = traces[2][1:]
traces[3] = traces[3][1:]
traces[4] = traces[4][1:]
traces[5] = traces[5][1:]
i /= 1000.0
x = list(range(max(n - max_bins, 0) * int(i), n * int(i), max(int(i), 0)))
if len(x) > 0 and x[0] > 60:
x = [round(t / 60, 2) for t in x]
x_div = 60
x_axis_title = 'Time (min)'
else:
x_div = 1
x_axis_title = 'Time (sec)'
min_x = max(0, x[0] if len(x) >= max_bins else 0)
max_x = max((max_bins * i) / x_div, x[max_bins - 1] if len(x) >= max_bins else 0)
lay = dict(
plot_bgcolor = '#082255',
paper_bgcolor = '#082255',
font={"color": "#fff"},
xaxis = {
'title': x_axis_title,
"showgrid": False,
"showline": False,
"fixedrange": True,
"tickmode": 'linear',
"tick0": round(max_bins / x_div, 2),
"dtick": round(max_bins / x_div, 2),
},
yaxis = {
'title': 'Flow Count',
"showgrid": False,
"showline": False,
"zeroline": False,
"fixedrange": True,
"tickmode": 'linear',
"dtick": 10,
},
uirevision=True,
autosize=True,
bargap=0.01,
bargroupgap=0,
hovermode="closest",
margin = {'b': 0, 'l': 0, 'r': 0, 't': 30, 'pad': 0},
legend = {'borderwidth': 0},
)
fig = go.Figure(layout=lay)
fig.update_xaxes(showgrid=True, gridwidth=1, gridcolor='#004D80', zeroline=True, zerolinewidth=1, range=[min_x, max_x])
fig.update_yaxes(showgrid=True, gridwidth=1, gridcolor='#004D80', zeroline=True, zerolinewidth=1)
fig.add_trace(go.Scatter(
x=x,
y=traces[0],
name='Current Active Flows',
mode='lines+markers',
line=COLOR_MAP['graph-flows']['Current Active Flows'],
))
fig.add_trace(go.Scatter(
x=x,
y=traces[1],
name='Current Risky Flows',
mode='lines+markers',
line=COLOR_MAP['graph-flows']['Current Risky Flows'],
))
fig.add_trace(go.Scatter(
x=x,
y=traces[2],
name='Current Midstream Flows',
mode='lines+markers',
line=COLOR_MAP['graph-flows']['Current Midstream Flows'],
))
fig.add_trace(go.Scatter(
x=x,
y=traces[3],
name='Current Guessed Flows',
mode='lines+markers',
line=COLOR_MAP['graph-flows']['Current Guessed Flows'],
))
fig.add_trace(go.Scatter(
x=x,
y=traces[4],
name='Current Not-Detected Flows',
mode='lines+markers',
line=COLOR_MAP['graph-flows']['Current Not-Detected Flows'],
))
fig.add_trace(go.Scatter(
x=x,
y=traces[5],
name='Current Unclassified Flows',
mode='lines+markers',
line=COLOR_MAP['graph-flows']['Current Unclassified Flows'],
))
return [fig, traces]
@app.callback(output=[Output('piechart-events', 'figure')],
inputs=[Input('tab-other-default-interval', 'n_intervals')])
def tab_other_update_components(n):
return [build_piechart(['Base', 'Daemon', 'Packet',
'Flow New', 'Flow Update', 'Flow Analyse', 'Flow End', 'Flow Idle',
'Flow Detection', 'Flow Detection-Updates', 'Flow Guessed', 'Flow Not-Detected'],
[shared_flow_dict['total-base-events'],
shared_flow_dict['total-daemon-events'],
shared_flow_dict['total-packet-events'],
shared_flow_dict['total-flow-new-events'],
shared_flow_dict['total-flow-update-events'],
shared_flow_dict['total-flow-analyse-events'],
shared_flow_dict['total-flow-end-events'],
shared_flow_dict['total-flow-idle-events'],
shared_flow_dict['total-flow-detected-events'],
shared_flow_dict['total-flow-detection-update-events'],
shared_flow_dict['total-flow-guessed-events'],
shared_flow_dict['total-flow-not-detected-events']])]
def web_worker(mp_shared_flow_dict, listen_host, listen_port):
global shared_flow_dict
shared_flow_dict = mp_shared_flow_dict
try:
app.run_server(debug=False, host=listen_host, port=listen_port)
except KeyboardInterrupt:
pass
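The graph callback above keeps a fixed-length history per trace by appending a sample and then slicing off the oldest bin once `max_bins` is exceeded. `collections.deque` expresses the same sliding window directly (a sketch; `MAX_BINS` matches the value used above):

```python
from collections import deque

MAX_BINS = 75  # same window length as max_bins above

traces = [deque(maxlen=MAX_BINS) for _ in range(6)]

def record_sample(values):
    """Append one value per trace; deque(maxlen=...) evicts the oldest
    bin automatically, replacing the manual list slicing."""
    for trace, value in zip(traces, values):
        trace.append(value)

for n in range(100):
    record_sample([n] * 6)
# each trace now holds only the 75 most recent samples (25 .. 99)
```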


@@ -1,3 +0,0 @@
dash
dash_daq
Werkzeug==3.0.1


@@ -380,7 +380,7 @@ def onJsonLineRecvd(json_dict, instance, current_flow, global_user_data):
return True
ndpi_proto_categ_breed += '[' + str(json_dict['ndpi']['breed']) + ']'
if 'flow_risk' in json_dict['ndpi']:
if 'flow_risk' in json_dict['ndpi'] and args.hide_risk_info == False:
severity = 0
cnt = 0
@@ -408,7 +408,10 @@ def onJsonLineRecvd(json_dict, instance, current_flow, global_user_data):
else:
color = ''
next_lines[0] = '{}{}{}: {}'.format(color, 'RISK', TermColor.END, next_lines[0][:-2])
if severity >= args.min_risk_severity:
next_lines[0] = '{}{}{}: {}'.format(color, 'RISK', TermColor.END, next_lines[0][:-2])
else:
del next_lines[0]
line_suffix = ''
flow_event_name = ''
@@ -498,6 +501,11 @@ def onJsonLineRecvd(json_dict, instance, current_flow, global_user_data):
if args.print_hostname is True:
line_suffix += '[{}]'.format(json_dict['ndpi']['hostname'])
if args.skip_empty is True:
if json_dict['flow_src_tot_l4_payload_len'] == 0 or json_dict['flow_dst_tot_l4_payload_len'] == 0:
stats.printStatus()
return True
if args.print_bytes is True:
src_color = ''
dst_color = ''
@@ -520,9 +528,11 @@ def onJsonLineRecvd(json_dict, instance, current_flow, global_user_data):
'[' + Stats.prettifyBytes(json_dict['flow_dst_packets_processed'], False) + ']'
if json_dict['l3_proto'] == 'ip4':
print('{}{}{}{}{}: [{:.>6}] [{}][{:.>5}] [{:.>15}]{} -> [{:.>15}]{}{}{}' \
print('{}{}{}{}{}: [{:.>6}]{} [{}][{:.>5}] [{:.>15}]{} -> [{:.>15}]{}{}{}' \
''.format(timestamp, first_seen, last_seen, instance_and_source, flow_event_name,
json_dict['flow_id'], json_dict['l3_proto'], json_dict['l4_proto'],
json_dict['flow_id'],
'[{:.>4}]'.format(json_dict['vlan_id']) if 'vlan_id' in json_dict else '',
json_dict['l3_proto'], json_dict['l4_proto'],
json_dict['src_ip'].lower(),
'[{:.>5}]'.format(json_dict['src_port']) if 'src_port' in json_dict else '',
json_dict['dst_ip'].lower(),
@@ -552,10 +562,14 @@ if __name__ == '__main__':
argparser = nDPIsrvd.defaultArgumentParser('Prettify and print events using the nDPIsrvd Python interface.', True)
argparser.add_argument('--no-color', action='store_true', default=False,
help='Disable all terminal colors.')
argparser.add_argument('--no-blink', action='store_true', default=False,
help='Disable all blink effects.')
argparser.add_argument('--no-statusbar', action='store_true', default=False,
help='Disable informational status bar.')
argparser.add_argument('--hide-instance-info', action='store_true', default=False,
help='Hide instance Alias/Source prefixed every line.')
argparser.add_argument('--hide-risk-info', action='store_true', default=False,
help='Skip printing risks.')
argparser.add_argument('--print-timestamp', action='store_true', default=False,
help='Print received event timestamps.')
argparser.add_argument('--print-first-seen', action='store_true', default=False,
@@ -566,6 +580,8 @@ if __name__ == '__main__':
help='Print received/transmitted source/dest bytes for every flow.')
argparser.add_argument('--print-packets', action='store_true', default=False,
help='Print received/transmitted source/dest packets for every flow.')
argparser.add_argument('--skip-empty', action='store_true', default=False,
help='Do not print flows that did not carry any layer7 payload.')
argparser.add_argument('--guessed', action='store_true', default=False, help='Print only guessed flow events.')
argparser.add_argument('--not-detected', action='store_true', default=False, help='Print only undetected flow events.')
argparser.add_argument('--detected', action='store_true', default=False, help='Print only detected flow events.')
@@ -587,11 +603,15 @@ if __name__ == '__main__':
argparser.add_argument('--ignore-category', action='append', help='Ignore printing lines with a certain category.')
argparser.add_argument('--ignore-breed', action='append', help='Ignore printing lines with a certain breed.')
argparser.add_argument('--ignore-hostname', action='append', help='Ignore printing lines with a certain hostname.')
argparser.add_argument('--min-risk-severity', action='store', type=int, default=0, help='Print only risks with a risk severity greater or equal to the given argument')
args = argparser.parse_args()
if args.no_color is True:
TermColor.disableColor()
if args.no_blink is True:
TermColor.disableBlink()
if args.ipwhois is True:
import dns, ipwhois
whois_db = dict()


@@ -29,12 +29,12 @@ import nDPIsrvd
from nDPIsrvd import nDPIsrvdSocket, TermColor
INPUT_SIZE = nDPIsrvd.nDPId_PACKETS_PLEN_MAX
LATENT_SIZE = 16
TRAINING_SIZE = 8192
EPOCH_COUNT = 50
BATCH_SIZE = 512
LEARNING_RATE = 0.0000001
ES_PATIENCE = 10
LATENT_SIZE = 8
TRAINING_SIZE = 512
EPOCH_COUNT = 3
BATCH_SIZE = 16
LEARNING_RATE = 0.000001
ES_PATIENCE = 3
PLOT = False
PLOT_HISTORY = 100
TENSORBOARD = False
@@ -164,8 +164,12 @@ def keras_worker(load_model, save_model, shared_shutdown_event, shared_training_
sys.stderr.flush()
encoder, _, autoencoder = get_autoencoder()
autoencoder.summary()
tensorboard = TensorBoard(log_dir=TB_LOGPATH, histogram_freq=1)
additional_callbacks = []
if TENSORBOARD is True:
tensorboard = TensorBoard(log_dir=TB_LOGPATH, histogram_freq=1)
additional_callbacks += [tensorboard]
early_stopping = EarlyStopping(monitor='val_loss', min_delta=0.0001, patience=ES_PATIENCE, restore_best_weights=True, start_from_epoch=0, verbose=0, mode='auto')
additional_callbacks += [early_stopping]
shared_training_event.clear()
try:
@@ -188,7 +192,7 @@ def keras_worker(load_model, save_model, shared_shutdown_event, shared_training_
tmp, tmp, epochs=EPOCH_COUNT, batch_size=BATCH_SIZE,
validation_split=0.2,
shuffle=True,
callbacks=[tensorboard, early_stopping]
callbacks=additional_callbacks
)
reconstructed_data = autoencoder.predict(tmp)
mse = np.mean(np.square(tmp - reconstructed_data))
@@ -295,15 +299,15 @@ if __name__ == '__main__':
help='Load a pre-trained model file.')
argparser.add_argument('--save-model', action='store',
help='Save the trained model to a file.')
argparser.add_argument('--training-size', action='store', default=TRAINING_SIZE,
argparser.add_argument('--training-size', action='store', type=int, default=TRAINING_SIZE,
help='Set the number of captured packets required to start the training phase.')
argparser.add_argument('--batch-size', action='store', default=BATCH_SIZE,
argparser.add_argument('--batch-size', action='store', type=int, default=BATCH_SIZE,
help='Set the batch size used for the training phase.')
argparser.add_argument('--learning-rate', action='store', default=LEARNING_RATE,
argparser.add_argument('--learning-rate', action='store', type=float, default=LEARNING_RATE,
help='Set the (initial) learning rate for the optimizer.')
argparser.add_argument('--plot', action='store_true', default=PLOT,
help='Show some model metrics using pyplot.')
argparser.add_argument('--plot-history', action='store', default=PLOT_HISTORY,
argparser.add_argument('--plot-history', action='store', type=int, default=PLOT_HISTORY,
help='Set the history size of Line plots. Requires --plot')
argparser.add_argument('--tensorboard', action='store_true', default=TENSORBOARD,
help='Enable TensorBoard compatible logging callback.')
@@ -313,7 +317,7 @@ if __name__ == '__main__':
help='Use SGD optimizer instead of Adam.')
argparser.add_argument('--use-kldiv', action='store_true', default=VAE_USE_KLDIV,
help='Use Kullback-Leibler loss function instead of Mean-Squared-Error.')
argparser.add_argument('--patience', action='store', default=ES_PATIENCE,
argparser.add_argument('--patience', action='store', type=int, default=ES_PATIENCE,
help='Patience (in epochs) for EarlyStopping: VAE fitting stops early if no improvement is achieved for this many epochs.')
args = argparser.parse_args()
address = nDPIsrvd.validateAddress(args)


@@ -86,8 +86,8 @@ def verifyFlows(nsock, instance):
l4_proto = 'n/a'
invalid_flows_str += '{} proto[{},{}] ts[{} + {} < {}] diff[{}], '.format(flow_id, l4_proto, flow.flow_idle_time,
flow.flow_last_seen, flow.flow_idle_time,
instance.most_recent_flow_time,
instance.most_recent_flow_time -
instance.getMostRecentFlowTime(flow.thread_id),
instance.getMostRecentFlowTime(flow.thread_id) -
(flow.flow_last_seen + flow.flow_idle_time))
raise SemanticValidationException(None, 'Flow Manager verification failed for: {}'.format(invalid_flows_str[:-2]))
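The hunk above reports how far a flow has drifted past its timeout by comparing flow_last_seen + flow_idle_time against the most recent flow time of the flow's thread. A minimal Python sketch of that idle check (function and parameter names are illustrative, not part of the nDPIsrvd API):

```python
def is_idle(flow_last_seen: int, flow_idle_time: int, most_recent_flow_time: int) -> bool:
    """A flow is idle once its last-seen time plus its idle budget falls behind
    the most recent flow time observed on the same thread."""
    return flow_last_seen + flow_idle_time < most_recent_flow_time

# e.g. a flow last seen at t=100 with a 50-tick idle budget times out after t=150
assert is_idle(100, 50, 151)
assert not is_idle(100, 50, 150)
```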
@@ -193,7 +193,7 @@ def onJsonLineRecvd(json_dict, instance, current_flow, global_user_data):
if (flow_last_seen is not None and 'flow_idle_time' not in json_dict) or \
(flow_last_seen is None and 'flow_idle_time' in json_dict):
raise SemanticValidationException(current_flow,
'Got a JSON string with only 2 of 3 keys, ' \
'Got a JSON message with only 2 of 3 keys, ' \
'required for timeout handling: flow_idle_time')
if 'thread_ts_usec' in json_dict:
@@ -213,7 +213,7 @@ def onJsonLineRecvd(json_dict, instance, current_flow, global_user_data):
try:
if current_flow.flow_ended == True:
raise SemanticValidationException(current_flow,
'Received JSON string for a flow that already ended/idled.')
'Received JSON message for a flow that already ended/idled.')
except AttributeError:
pass
@@ -256,10 +256,25 @@ def onJsonLineRecvd(json_dict, instance, current_flow, global_user_data):
except AttributeError:
pass
try:
if current_flow.flow_finished == True and \
json_dict['flow_event_name'] == 'detection-update':
raise SemanticValidationException(current_flow,
'Flow state already finished, but another detection-update received.')
except AttributeError:
pass
try:
if json_dict['flow_state'] == 'finished':
current_flow.flow_finished = True
elif json_dict['flow_state'] == 'info' and \
current_flow.flow_finished is True:
raise SemanticValidationException(current_flow,
'Flow state already finished, but switched back to info state.')
except AttributeError:
pass
try:
if current_flow.flow_finished == True and \
json_dict['flow_event_name'] != 'analyse' and \
json_dict['flow_event_name'] != 'update' and \


@@ -0,0 +1,21 @@
[package]
name = "rs-simple"
version = "0.1.0"
authors = ["Toni Uhlig <toni@impl.cc>"]
edition = "2024"
[dependencies]
argh = "0.1"
bytes = "1"
crossterm = "0.29.0"
io = "0.0.2"
moka = { version = "0.12.10", features = ["future"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1.0"
tokio = { version = "1", features = ["full"] }
tui = "0.19.0"
[profile.release]
strip = true
lto = true
codegen-units = 1


@@ -0,0 +1,860 @@
use argh::FromArgs;
use bytes::BytesMut;
use crossterm::{
cursor,
event::{self, KeyCode, KeyEvent},
ExecutableCommand,
terminal::{self, ClearType},
};
use moka::{future::Cache, Expiry};
use serde::{Deserialize, Serialize};
use serde_json::Value;
use std::{
collections::HashMap,
fmt,
hash::{Hash, Hasher},
io,
sync::Arc,
time::{Duration, Instant, SystemTime, UNIX_EPOCH},
};
use tokio::io::AsyncReadExt;
use tokio::sync::mpsc;
use tokio::sync::Mutex;
use tokio::sync::MutexGuard;
use tokio::net::TcpStream;
use tui::{
backend::CrosstermBackend,
layout::{Layout, Constraint, Direction},
style::{Style, Color, Modifier},
Terminal,
widgets::{Block, Borders, List, ListItem, Row, Table, TableState},
};
#[derive(FromArgs, Debug)]
/// Simple Rust nDPIsrvd Client Example
struct Args {
/// nDPIsrvd host(s) to connect to
#[argh(option)]
host: Vec<String>,
}
#[derive(Debug)]
enum ParseError {
Protocol(),
Json(),
Schema(),
}
impl From<serde_json::Error> for ParseError {
fn from(_: serde_json::Error) -> Self {
ParseError::Json()
}
}
#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[serde(rename_all = "lowercase")]
enum EventName {
Invalid, New, End, Idle, Update, Analyse,
Guessed, Detected,
#[serde(rename = "detection-update")]
DetectionUpdate,
#[serde(rename = "not-detected")]
NotDetected,
}
#[derive(Serialize, Deserialize, Copy, Clone, Debug)]
#[serde(rename_all = "lowercase")]
enum State {
Unknown, Info, Finished,
}
#[derive(Serialize, Deserialize, Debug)]
struct FlowEventNdpiFlowRisk {
#[serde(rename = "risk")]
risk: String,
}
#[derive(Serialize, Deserialize, Debug)]
struct FlowEventNdpi {
#[serde(rename = "proto")]
proto: String,
#[serde(rename = "flow_risk")]
risks: Option<HashMap<String, FlowEventNdpiFlowRisk>>,
}
#[derive(Serialize, Deserialize, Debug)]
struct FlowEvent {
#[serde(rename = "flow_event_name")]
name: EventName,
#[serde(rename = "flow_id")]
id: u64,
#[serde(rename = "alias")]
alias: String,
#[serde(rename = "source")]
source: String,
#[serde(rename = "thread_id")]
thread_id: u64,
#[serde(rename = "flow_state")]
state: State,
#[serde(rename = "flow_first_seen")]
first_seen: u64,
#[serde(rename = "flow_src_last_pkt_time")]
src_last_pkt_time: u64,
#[serde(rename = "flow_dst_last_pkt_time")]
dst_last_pkt_time: u64,
#[serde(rename = "flow_idle_time")]
idle_time: u64,
#[serde(rename = "flow_src_packets_processed")]
src_packets_processed: u64,
#[serde(rename = "flow_dst_packets_processed")]
dst_packets_processed: u64,
#[serde(rename = "flow_src_tot_l4_payload_len")]
src_tot_l4_payload_len: u64,
#[serde(rename = "flow_dst_tot_l4_payload_len")]
dst_tot_l4_payload_len: u64,
#[serde(rename = "l3_proto")]
l3_proto: String,
#[serde(rename = "l4_proto")]
l4_proto: String,
#[serde(rename = "ndpi")]
ndpi: Option<FlowEventNdpi>,
}
#[derive(Serialize, Deserialize, Debug)]
struct PacketEvent {
pkt_datalink: u16,
pkt_caplen: u64,
pkt_len: u64,
pkt_l4_len: u64,
}
#[derive(Serialize, Deserialize, Clone, Debug)]
struct DaemonEventStatus {
#[serde(rename = "alias")]
alias: String,
#[serde(rename = "source")]
source: String,
#[serde(rename = "thread_id")]
thread_id: u64,
#[serde(rename = "packets-captured")]
packets_captured: u64,
#[serde(rename = "packets-processed")]
packets_processed: u64,
#[serde(rename = "total-skipped-flows")]
total_skipped_flows: u64,
#[serde(rename = "total-l4-payload-len")]
total_l4_payload_len: u64,
#[serde(rename = "total-not-detected-flows")]
total_not_detected_flows: u64,
#[serde(rename = "total-guessed-flows")]
total_guessed_flows: u64,
#[serde(rename = "total-detected-flows")]
total_detected_flows: u64,
#[serde(rename = "total-detection-updates")]
total_detection_updates: u64,
#[serde(rename = "total-updates")]
total_updates: u64,
#[serde(rename = "current-active-flows")]
current_active_flows: u64,
#[serde(rename = "total-active-flows")]
total_active_flows: u64,
#[serde(rename = "total-idle-flows")]
total_idle_flows: u64,
#[serde(rename = "total-compressions")]
total_compressions: u64,
#[serde(rename = "total-compression-diff")]
total_compression_diff: u64,
#[serde(rename = "current-compression-diff")]
current_compression_diff: u64,
#[serde(rename = "global-alloc-bytes")]
global_alloc_bytes: u64,
#[serde(rename = "global-alloc-count")]
global_alloc_count: u64,
#[serde(rename = "global-free-bytes")]
global_free_bytes: u64,
#[serde(rename = "global-free-count")]
global_free_count: u64,
#[serde(rename = "total-events-serialized")]
total_events_serialized: u64,
}
#[derive(Debug)]
enum EventType {
Flow(FlowEvent),
Packet(PacketEvent),
DaemonStatus(DaemonEventStatus),
Other(),
}
#[derive(Default)]
struct Stats {
ui_updates: u64,
flow_count: u64,
parse_errors: u64,
events: u64,
flow_events: u64,
packet_events: u64,
daemon_events: u64,
packet_events_total_caplen: u64,
packet_events_total_len: u64,
packet_events_total_l4_len: u64,
packets_captured: u64,
packets_processed: u64,
flows_total_skipped: u64,
flows_total_l4_payload_len: u64,
flows_total_not_detected: u64,
flows_total_guessed: u64,
flows_current_active: u64,
flows_total_compressions: u64,
flows_total_compression_diff: u64,
flows_current_compression_diff: u64,
global_alloc_bytes: u64,
global_alloc_count: u64,
global_free_bytes: u64,
global_free_count: u64,
total_events_serialized: u64,
}
#[derive(Clone, Copy, Debug, Eq, PartialEq)]
enum FlowExpiration {
IdleTime(u64),
}
struct FlowExpiry;
#[derive(Clone, Eq, Default, Debug)]
struct FlowKey {
id: u64,
alias: String,
source: String,
thread_id: u64,
}
#[derive(Clone, Debug)]
struct FlowValue {
state: State,
total_src_packets: u64,
total_dst_packets: u64,
total_src_bytes: u64,
total_dst_bytes: u64,
first_seen: std::time::SystemTime,
last_seen: std::time::SystemTime,
timeout_in: std::time::SystemTime,
risks: usize,
proto: String,
app_proto: String,
}
#[derive(Clone, Eq, Default, Debug)]
struct DaemonKey {
alias: String,
source: String,
thread_id: u64,
}
impl Default for State {
fn default() -> State {
State::Unknown
}
}
impl fmt::Display for State {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
State::Unknown => write!(f, "N/A"),
State::Info => write!(f, "Info"),
State::Finished => write!(f, "Finished"),
}
}
}
impl FlowExpiration {
fn as_duration(&self) -> Option<Duration> {
match self {
FlowExpiration::IdleTime(value) => Some(Duration::from_micros(*value)),
}
}
}
impl fmt::Display for FlowExpiration {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self.as_duration() {
Some(duration) => {
let secs = duration.as_secs();
write!(f, "{} s", secs)
}
None => write!(f, "N/A"),
}
}
}
impl Expiry<FlowKey, (FlowExpiration, FlowValue)> for FlowExpiry {
fn expire_after_create(
&self,
_key: &FlowKey,
value: &(FlowExpiration, FlowValue),
_current_time: Instant,
) -> Option<Duration> {
value.0.as_duration()
}
}
impl Hash for FlowKey {
fn hash<H: Hasher>(&self, state: &mut H) {
self.id.hash(state);
self.alias.hash(state);
self.source.hash(state);
self.thread_id.hash(state);
}
}
impl PartialEq for FlowKey {
fn eq(&self, other: &Self) -> bool {
self.id == other.id &&
self.alias == other.alias &&
self.source == other.source &&
self.thread_id == other.thread_id
}
}
impl Hash for DaemonKey {
fn hash<H: Hasher>(&self, state: &mut H) {
self.alias.hash(state);
self.source.hash(state);
self.thread_id.hash(state);
}
}
impl PartialEq for DaemonKey {
fn eq(&self, other: &Self) -> bool {
self.alias == other.alias &&
self.source == other.source &&
self.thread_id == other.thread_id
}
}
#[tokio::main]
async fn main() {
let args: Args = argh::from_env();
if args.host.is_empty() {
eprintln!("At least one --host required");
return;
}
let mut connections: Vec<TcpStream> = Vec::new();
for host in args.host {
match TcpStream::connect(host.clone()).await {
Ok(stream) => {
connections.push(stream);
}
Err(e) => {
eprintln!("Error connecting to {}: {}", host, e);
}
}
}
if let Err(e) = terminal::enable_raw_mode() {
eprintln!("Could not enable terminal raw mode: {}", e);
return;
}
let mut stdout = io::stdout();
if let Err(e) = stdout.execute(terminal::Clear(ClearType::All)) {
eprintln!("Could not clear your terminal: {}", e);
return;
}
if let Err(e) = stdout.execute(cursor::Hide) {
eprintln!("Could not hide your cursor: {}", e);
return;
}
let backend = CrosstermBackend::new(stdout);
let mut terminal = Terminal::new(backend);
let (tx, mut rx): (mpsc::Sender<String>, mpsc::Receiver<String>) = mpsc::channel(1024);
let data = Arc::new(Mutex::new(Stats::default()));
let data_tx = Arc::clone(&data);
let data_rx = Arc::clone(&data);
let flow_cache: Arc<Cache<FlowKey, (FlowExpiration, FlowValue)>> = Arc::new(Cache::builder()
.expire_after(FlowExpiry)
.build());
let flow_cache_rx = Arc::clone(&flow_cache);
let daemon_cache: Arc<Cache<DaemonKey, DaemonEventStatus>> = Arc::new(Cache::builder()
.time_to_live(Duration::from_secs(1800))
.build());
tokio::spawn(async move {
while let Some(msg) = rx.recv().await {
match parse_json(&msg) {
Ok(message) => {
let mut data_lock = data_tx.lock().await;
data_lock.events += 1;
update_stats(&message, &mut data_lock, &flow_cache, &daemon_cache).await;
}
Err(_message) => {
let mut data_lock = data_tx.lock().await;
data_lock.parse_errors += 1;
}
}
}
});
for mut stream in connections {
let cloned_tx = tx.clone();
tokio::spawn(async move {
let mut buffer = BytesMut::with_capacity(33792usize);
loop {
let n = match stream.read_buf(&mut buffer).await {
Ok(len) => len,
Err(_) => {
continue; // retry on read error
}
};
if n == 0 {
break;
}
while let Some(message) = parse_message(&mut buffer) {
match cloned_tx.send(message).await {
Ok(_) => (),
Err(_) => return
}
}
}
});
}
let mut table_state = TableState::default();
let mut old_selected: Option<FlowKey> = None;
loop {
let flows: Vec<(FlowKey, (FlowExpiration, FlowValue))> = flow_cache_rx.iter().map(|(k, v)| (k.as_ref().clone(), v.clone()))
.take(128)
.collect();
let mut table_selected = match table_state.selected() {
Some(mut table_index) => {
if table_index >= flows.len() {
flows.len().saturating_sub(1)
} else {
if let Some(ref old_flow_key_selected) = old_selected {
if let Some(old_index) = flows.iter().position(|x| x.0 == *old_flow_key_selected) {
if old_index != table_index {
table_index = old_index;
}
} else {
old_selected = Some(flows.get(table_index).unwrap().0.clone());
}
}
table_index
}
}
None => 0,
};
match read_keypress() {
Some(KeyCode::Esc) => break,
Some(KeyCode::Char('q')) => break,
Some(KeyCode::Up) => {
table_selected = match table_selected {
i if i == 0 => flows.len().saturating_sub(1),
i => i - 1,
};
if let Some(new_selected) = flows.get(table_selected) {
old_selected = Some(new_selected.0.clone());
}
},
Some(KeyCode::Down) => {
table_selected = match table_selected {
i if i >= flows.len().saturating_sub(1) => 0,
i => i + 1,
};
if let Some(new_selected) = flows.get(table_selected) {
old_selected = Some(new_selected.0.clone());
}
},
Some(KeyCode::PageUp) => {
table_selected = match table_selected {
i if i == 0 => flows.len().saturating_sub(1),
i if i < 25 => 0,
i => i - 25,
};
if let Some(new_selected) = flows.get(table_selected) {
old_selected = Some(new_selected.0.clone());
}
},
Some(KeyCode::PageDown) => {
table_selected = match table_selected {
i if i >= flows.len().saturating_sub(1) => 0,
i if i >= flows.len().saturating_sub(25) => flows.len().saturating_sub(1),
i => i + 25,
};
if let Some(new_selected) = flows.get(table_selected) {
old_selected = Some(new_selected.0.clone());
}
},
Some(KeyCode::Home) => {
table_selected = 0;
if let Some(new_selected) = flows.get(table_selected) {
old_selected = Some(new_selected.0.clone());
}
},
Some(KeyCode::End) => {
table_selected = flows.len().saturating_sub(1);
if let Some(new_selected) = flows.get(table_selected) {
old_selected = Some(new_selected.0.clone());
}
},
Some(_) => (),
None => ()
};
let mut data_lock = data_rx.lock().await;
data_lock.ui_updates += 1;
draw_ui(terminal.as_mut().unwrap(), &mut table_state, table_selected, &data_lock, &flows);
}
if let Err(e) = terminal.unwrap().backend_mut().execute(cursor::Show) {
eprintln!("Could not show your cursor: {}", e);
return;
}
let mut stdout = io::stdout();
if let Err(e) = stdout.execute(terminal::Clear(ClearType::All)) {
eprintln!("Could not clear your terminal: {}", e);
return;
}
if let Err(e) = terminal::disable_raw_mode() {
eprintln!("Could not disable raw mode: {}", e);
return;
}
println!("\nDone.");
}
fn read_keypress() -> Option<KeyCode> {
if event::poll(Duration::from_millis(1000)).unwrap() {
if let event::Event::Key(KeyEvent { code, .. }) = event::read().unwrap() {
return Some(code);
}
}
None
}
fn parse_message(buffer: &mut BytesMut) -> Option<String> {
if let Some(pos) = buffer.iter().position(|&b| b == b'\n') {
let message = buffer.split_to(pos + 1);
return Some(String::from_utf8_lossy(&message).to_string());
}
None
}
fn parse_json(data: &str) -> Result<EventType, ParseError> {
let first_non_digit = data.find(|c: char| !c.is_ascii_digit()).unwrap_or(0);
let length_str = &data[0..first_non_digit];
let length: usize = length_str.parse().unwrap_or(0);
if length == 0 {
return Err(ParseError::Protocol());
}
let json_str = &data[first_non_digit..first_non_digit + length];
let value: Value = serde_json::from_str(json_str).map_err(|_| ParseError::Json())?;
if value.get("flow_event_name").is_some() {
let flow_event: FlowEvent = serde_json::from_value(value)?;
return Ok(EventType::Flow(flow_event));
} else if value.get("packet_event_name").is_some() {
let packet_event: PacketEvent = serde_json::from_value(value)?;
return Ok(EventType::Packet(packet_event));
} else if value.get("daemon_event_name").is_some() {
if value.get("daemon_event_name").unwrap() == "status" ||
value.get("daemon_event_name").unwrap() == "shutdown"
{
let daemon_status_event: DaemonEventStatus = serde_json::from_value(value)?;
return Ok(EventType::DaemonStatus(daemon_status_event));
}
return Ok(EventType::Other());
} else if value.get("error_event_name").is_some() {
return Ok(EventType::Other());
}
Err(ParseError::Schema())
}
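parse_json() above routes each decoded JSON object by which *_event_name key it carries. The same dispatch can be sketched in a few lines of Python (key names are taken from the events shown here; the function name is illustrative):

```python
import json

def classify(line: str) -> str:
    """Route a decoded nDPIsrvd JSON object by its event-name key."""
    obj = json.loads(line)
    for key, kind in (("flow_event_name", "flow"),
                      ("packet_event_name", "packet"),
                      ("daemon_event_name", "daemon"),
                      ("error_event_name", "error")):
        if key in obj:
            return kind
    return "unknown"

print(classify('{"flow_event_name": "new", "flow_id": 1}'))  # -> flow
```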
async fn update_stats(event: &EventType, stats: &mut MutexGuard<'_, Stats>, cache: &Cache<FlowKey, (FlowExpiration, FlowValue)>, daemon_cache: &Cache<DaemonKey, DaemonEventStatus>) {
match &event {
EventType::Flow(flow_event) => {
stats.flow_events += 1;
stats.flow_count = cache.entry_count();
let key = FlowKey { id: flow_event.id, alias: flow_event.alias.to_string(),
source: flow_event.source.to_string(), thread_id: flow_event.thread_id };
if flow_event.name == EventName::End ||
flow_event.name == EventName::Idle
{
cache.remove(&key).await;
return;
}
let first_seen_seconds = flow_event.first_seen / 1_000_000;
let first_seen_nanos = (flow_event.first_seen % 1_000_000) * 1_000;
let first_seen_epoch = std::time::Duration::new(first_seen_seconds, first_seen_nanos as u32);
let first_seen_system = UNIX_EPOCH + first_seen_epoch;
let last_seen = std::cmp::max(flow_event.src_last_pkt_time,
flow_event.dst_last_pkt_time);
let last_seen_seconds = last_seen / 1_000_000;
let last_seen_nanos = (last_seen % 1_000_000) * 1_000;
let last_seen_epoch = std::time::Duration::new(last_seen_seconds, last_seen_nanos as u32);
let last_seen_system = UNIX_EPOCH + last_seen_epoch;
let timeout_seconds = (last_seen + flow_event.idle_time) / 1_000_000;
let timeout_nanos = ((last_seen + flow_event.idle_time) % 1_000_000) * 1_000;
let timeout_epoch = std::time::Duration::new(timeout_seconds, timeout_nanos as u32);
let timeout_system = UNIX_EPOCH + timeout_epoch;
let risks = match &flow_event.ndpi {
None => 0,
Some(ndpi) => match &ndpi.risks {
None => 0,
Some(risks) => risks.len(),
},
};
let app_proto = match &flow_event.ndpi {
None => "-",
Some(ndpi) => &ndpi.proto,
};
let value = FlowValue {
state: flow_event.state,
total_src_packets: flow_event.src_packets_processed,
total_dst_packets: flow_event.dst_packets_processed,
total_src_bytes: flow_event.src_tot_l4_payload_len,
total_dst_bytes: flow_event.dst_tot_l4_payload_len,
first_seen: first_seen_system,
last_seen: last_seen_system,
timeout_in: timeout_system,
risks: risks,
proto: flow_event.l3_proto.to_string() + "/" + &flow_event.l4_proto,
app_proto: app_proto.to_string(),
};
cache.insert(key, (FlowExpiration::IdleTime(flow_event.idle_time), value)).await;
}
EventType::Packet(packet_event) => {
stats.packet_events += 1;
stats.packet_events_total_caplen += packet_event.pkt_caplen;
stats.packet_events_total_len += packet_event.pkt_len;
stats.packet_events_total_l4_len += packet_event.pkt_l4_len;
}
EventType::DaemonStatus(daemon_status_event) => {
let key = DaemonKey { alias: daemon_status_event.alias.to_string(),
source: daemon_status_event.source.to_string(),
thread_id: daemon_status_event.thread_id };
stats.daemon_events += 1;
daemon_cache.insert(key, daemon_status_event.clone()).await;
stats.packets_captured = 0;
stats.packets_processed = 0;
stats.flows_total_skipped = 0;
stats.flows_total_l4_payload_len = 0;
stats.flows_total_not_detected = 0;
stats.flows_total_guessed = 0;
stats.flows_current_active = 0;
stats.flows_total_compressions = 0;
stats.flows_total_compression_diff = 0;
stats.flows_current_compression_diff = 0;
stats.global_alloc_bytes = 0;
stats.global_alloc_count = 0;
stats.global_free_bytes = 0;
stats.global_free_count = 0;
stats.total_events_serialized = 0;
let daemons: Vec<DaemonEventStatus> = daemon_cache.iter().map(|(_, v)| (v.clone())).collect();
for daemon in daemons {
stats.packets_captured += daemon.packets_captured;
stats.packets_processed += daemon.packets_processed;
stats.flows_total_skipped += daemon.total_skipped_flows;
stats.flows_total_l4_payload_len += daemon.total_l4_payload_len;
stats.flows_total_not_detected += daemon.total_not_detected_flows;
stats.flows_total_guessed += daemon.total_guessed_flows;
stats.flows_current_active += daemon.current_active_flows;
stats.flows_total_compressions += daemon.total_compressions;
stats.flows_total_compression_diff += daemon.total_compression_diff;
stats.flows_current_compression_diff += daemon.current_compression_diff;
stats.global_alloc_bytes += daemon.global_alloc_bytes;
stats.global_alloc_count += daemon.global_alloc_count;
stats.global_free_bytes += daemon.global_free_bytes;
stats.global_free_count += daemon.global_free_count;
stats.total_events_serialized += daemon.total_events_serialized;
}
}
EventType::Other() => {}
}
}
fn format_bytes(bytes: u64) -> String {
const KB: u64 = 1024;
const MB: u64 = KB * 1024;
const GB: u64 = MB * 1024;
if bytes >= GB {
format!("{} GB", bytes / GB)
} else if bytes >= MB {
format!("{} MB", bytes / MB)
} else if bytes >= KB {
format!("{} kB", bytes / KB)
} else {
format!("{} B", bytes)
}
}
fn draw_ui<B: tui::backend::Backend>(terminal: &mut Terminal<B>, table_state: &mut TableState, table_selected: usize, data: &MutexGuard<Stats>, flows: &Vec<(FlowKey, (FlowExpiration, FlowValue))>) {
let general_items = vec![
ListItem::new("TUI Updates..: ".to_owned() + &data.ui_updates.to_string()),
ListItem::new("Flows Cached.: ".to_owned() + &data.flow_count.to_string()),
ListItem::new("Total Events.: ".to_owned() + &data.events.to_string()),
ListItem::new("Parse Errors.: ".to_owned() + &data.parse_errors.to_string()),
ListItem::new("Flow Events..: ".to_owned() + &data.flow_events.to_string()),
];
let packet_items = vec![
ListItem::new("Total Events........: ".to_owned() + &data.packet_events.to_string()),
ListItem::new("Total Capture Length: ".to_owned() + &format_bytes(data.packet_events_total_caplen)),
ListItem::new("Total Length........: ".to_owned() + &format_bytes(data.packet_events_total_len)),
ListItem::new("Total L4 Length.....: ".to_owned() + &format_bytes(data.packet_events_total_l4_len)),
];
let daemon_items = vec![
ListItem::new("Total Events.............: ".to_owned() + &data.daemon_events.to_string()),
ListItem::new("Total Packets Captured...: ".to_owned() + &data.packets_captured.to_string()),
ListItem::new("Total Packets Processed..: ".to_owned() + &data.packets_processed.to_string()),
ListItem::new("Total Flows Skipped......: ".to_owned() + &data.flows_total_skipped.to_string()),
ListItem::new("Total Flows Not-Detected.: ".to_owned() + &data.flows_total_not_detected.to_string()),
ListItem::new("Total Compressions/Memory: ".to_owned() + &data.flows_total_compressions.to_string()
+ " / " + &format_bytes(data.flows_total_compression_diff) + " deflate"),
ListItem::new("Total Memory in Use......: ".to_owned() + &format_bytes(data.global_alloc_bytes - data.global_free_bytes)
+ " (" + &format_bytes(data.flows_current_compression_diff) + " deflate)"),
ListItem::new("Total Events Serialized..: ".to_owned() + &data.total_events_serialized.to_string()),
ListItem::new("Current Flows Active.....: ".to_owned() + &data.flows_current_active.to_string()),
];
let table_rows: Vec<Row> = flows
.into_iter()
.map(|(key, (_exp, val))| {
let first_seen_display = match val.first_seen.elapsed() {
Ok(elapsed) => {
match elapsed.as_secs() {
t if t > (3_600 * 24) => format!("{} d ago", t / (3_600 * 24)),
t if t > 3_600 => format!("{} h ago", t / 3_600),
t if t > 60 => format!("{} min ago", t / 60),
t if t > 0 => format!("{} s ago", t),
t if t == 0 => "< 1 s ago".to_string(),
t => format!("INVALID: {}", t),
}
}
Err(err) => format!("ERROR: {}", err)
};
let last_seen_display = match val.last_seen.elapsed() {
Ok(elapsed) => {
match elapsed.as_secs() {
t if t > (3_600 * 24) => format!("{} d ago", t / (3_600 * 24)),
t if t > 3_600 => format!("{} h ago", t / 3_600),
t if t > 60 => format!("{} min ago", t / 60),
t if t > 0 => format!("{} s ago", t),
t if t == 0 => "< 1 s ago".to_string(),
t => format!("INVALID: {}", t),
}
}
Err(_err) => "ERROR".to_string()
};
let timeout_display = match val.timeout_in.duration_since(SystemTime::now()) {
Ok(elapsed) => {
match elapsed.as_secs() {
t if t > (3_600 * 24) => format!("in {} d", t / (3_600 * 24)),
t if t > 3_600 => format!("in {} h", t / 3_600),
t if t > 60 => format!("in {} min", t / 60),
t if t > 0 => format!("in {} s", t),
t if t == 0 => "in < 1 s".to_string(),
t => format!("INVALID: {}", t),
}
}
Err(_err) => "EXPIRED".to_string()
};
Row::new(vec![
key.id.to_string(),
val.state.to_string(),
first_seen_display,
last_seen_display,
timeout_display,
(val.total_src_packets + val.total_dst_packets).to_string(),
format_bytes(val.total_src_bytes + val.total_dst_bytes),
val.risks.to_string(),
val.proto.to_string(),
val.app_proto.to_string(),
])
})
.collect();
terminal.draw(|f| {
let size = f.size();
let chunks = Layout::default()
.direction(Direction::Vertical)
.constraints(
[
Constraint::Length(11),
Constraint::Percentage(100),
].as_ref()
)
.split(size);
let top_chunks = Layout::default()
.direction(Direction::Horizontal)
.constraints(
[
Constraint::Percentage(25),
Constraint::Percentage(30),
Constraint::Percentage(55),
].as_ref()
)
.split(chunks[0]);
let table_selected_abs = match table_selected {
_ if flows.len() == 0 => 0,
i => i + 1,
};
let table = Table::new(table_rows)
.header(Row::new(vec!["Flow ID", "State", "First Seen", "Last Seen", "Timeout", "Total Packets", "Total Bytes", "Risks", "L3/L4", "L7"])
.style(Style::default().fg(Color::Yellow).add_modifier(Modifier::BOLD)))
.block(Block::default().title("Flow Table (selected: ".to_string() +
&table_selected_abs.to_string() +
"): " +
&flows.len().to_string() +
" item(s)").borders(Borders::ALL))
.highlight_style(Style::default().bg(Color::Blue))
.widths(&[
Constraint::Length(10),
Constraint::Length(10),
Constraint::Length(12),
Constraint::Length(12),
Constraint::Length(10),
Constraint::Length(13),
Constraint::Length(12),
Constraint::Length(6),
Constraint::Length(12),
Constraint::Length(15),
]);
let general_list = List::new(general_items)
.block(Block::default().title("General").borders(Borders::ALL));
let packet_list = List::new(packet_items)
.block(Block::default().title("Packet Events").borders(Borders::ALL));
let daemon_list = List::new(daemon_items)
.block(Block::default().title("Daemon Events").borders(Borders::ALL));
table_state.select(Some(table_selected));
f.render_widget(general_list, top_chunks[0]);
f.render_widget(packet_list, top_chunks[1]);
f.render_widget(daemon_list, top_chunks[2]);
f.render_stateful_widget(table, chunks[1], table_state);
}).unwrap();
}


@@ -0,0 +1,28 @@
filebeat.inputs:
- type: unix
id: "NDPId-logs" # replace this id to your preference
max_message_size: 100MiB
index: "index-name" # Replace this with your desired index name in Elasticsearch
enabled: true
path: "/var/run/nDPId.sock" # point nDPId to this Unix Socket (Collector)
processors:
- script: # run JavaScript to strip the leading 5-digit length prefix and the trailing newline
lang: javascript
id: trim
source: >
function process(event) {
event.Put("message", event.Get("message").trim().slice(5));
}
- decode_json_fields: # Decode the Json output
fields: ["message"]
process_array: true
max_depth: 10
target: ""
overwrite_keys: true
add_error_key: false
- drop_fields: # delete the message field holding the undecoded JSON (comment this out if you need the original message)
fields: ["message"]
- rename:
fields:
- from: "source" # Prevents a conflict in Elasticsearch and renames the field
to: "Source_Interface"
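The script processor above strips nDPId's wire framing: each message is a digit length prefix, the JSON payload, and a terminating newline. A minimal Python sketch of the same trimming (strip_framing is an illustrative helper; it drops the leading digit run generically rather than assuming exactly five digits, and the sample input is hypothetical):

```python
def strip_framing(message: str) -> str:
    """Remove the leading length prefix and trailing newline from one nDPId message."""
    message = message.strip()  # drop the trailing newline, like .trim() in the JS processor
    # The prefix is the run of leading digits encoding the JSON payload length.
    i = 0
    while i < len(message) and message[i].isdigit():
        i += 1
    return message[i:]

framed = '25{"flow_event_name":"new"}\n'
print(strip_framing(framed))  # -> {"flow_event_name":"new"}
```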

Submodule libnDPI updated: b08c787fe2...75db1a8a66


@@ -3,11 +3,11 @@
#include <stdarg.h>
#include <unistd.h>
static void nDPIsrvd_memprof_log(char const * const format, ...);
static void nDPIsrvd_memprof_log_alloc(size_t alloc_size);
static void nDPIsrvd_memprof_log_free(size_t free_size);
extern void nDPIsrvd_memprof_log(char const * const format, ...);
extern void nDPIsrvd_memprof_log_alloc(size_t alloc_size);
extern void nDPIsrvd_memprof_log_free(size_t free_size);
//#define VERBOSE_MEMORY_PROFILING 1
// #define VERBOSE_MEMORY_PROFILING 1
#define NO_MAIN 1
#include "utils.c"
#include "nio.c"
@@ -44,6 +44,8 @@ struct nDPId_return_value
{
struct thread_return_value thread_return_value;
size_t thread_index;
unsigned long long int packets_captured;
unsigned long long int packets_processed;
unsigned long long int total_skipped_flows;
@@ -102,9 +104,9 @@ struct distributor_global_user_data
unsigned long long int shutdown_events;
unsigned long long int json_string_len_min;
unsigned long long int json_string_len_max;
double json_string_len_avg;
unsigned long long int json_message_len_min;
unsigned long long int json_message_len_max;
double json_message_len_avg;
unsigned long long int cur_active_flows;
unsigned long long int cur_idle_flows;
@@ -136,10 +138,10 @@ struct distributor_return_value
};
#define TC_INIT(initial, wanted) \
{ \
.mutex = PTHREAD_MUTEX_INITIALIZER, .condition = PTHREAD_COND_INITIALIZER, .value = initial, \
.wanted_value = wanted \
}
{.mutex = PTHREAD_MUTEX_INITIALIZER, \
.condition = PTHREAD_COND_INITIALIZER, \
.value = initial, \
.wanted_value = wanted}
struct thread_condition
{
pthread_mutex_t mutex;
@@ -176,7 +178,7 @@ static unsigned long long int nDPIsrvd_free_bytes = 0;
goto error; \
} while (0)
static void nDPIsrvd_memprof_log(char const * const format, ...)
void nDPIsrvd_memprof_log(char const * const format, ...)
{
#ifdef VERBOSE_MEMORY_PROFILING
int logbuf_used, logbuf_used_tmp;
@@ -551,7 +553,7 @@ static enum nDPIsrvd_callback_return distributor_json_callback(struct nDPIsrvd_s
struct distributor_flow_user_data * flow_stats = NULL;
#if 0
printf("Distributor: %.*s\n", (int)sock->buffer.json_string_length, sock->buffer.json_string);
printf("Distributor: %.*s\n", (int)sock->buffer.json_message_length, sock->buffer.json_message);
#endif
if (thread_data != NULL)
@@ -563,17 +565,18 @@ static enum nDPIsrvd_callback_return distributor_json_callback(struct nDPIsrvd_s
flow_stats = (struct distributor_flow_user_data *)flow->flow_user_data;
}
if (sock->buffer.json_string_length < global_stats->json_string_len_min)
if (sock->buffer.json_message_length < global_stats->json_message_len_min)
{
global_stats->json_string_len_min = sock->buffer.json_string_length;
global_stats->json_message_len_min = sock->buffer.json_message_length;
}
if (sock->buffer.json_string_length > global_stats->json_string_len_max)
if (sock->buffer.json_message_length > global_stats->json_message_len_max)
{
global_stats->json_string_len_max = sock->buffer.json_string_length;
global_stats->json_message_len_max = sock->buffer.json_message_length;
}
global_stats->json_string_len_avg = (global_stats->json_string_len_avg +
(global_stats->json_string_len_max + global_stats->json_string_len_min) / 2) /
2;
global_stats->json_message_len_avg =
(global_stats->json_message_len_avg +
(global_stats->json_message_len_max + global_stats->json_message_len_min) / 2) /
2;
global_stats->total_events_deserialized++;
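The average updated above is a cheap O(1) smoothing, not a true mean: each callback halves the distance between the previous value and the integer midpoint of the observed min/max. In isolation (function name is illustrative):

```c
#include <assert.h>

/* Sketch of the smoothing used for json_message_len_avg: a running blend
 * of the previous value with the integer midpoint of min/max. */
static double blend_message_len_avg(double prev_avg,
                                    unsigned long long int len_min,
                                    unsigned long long int len_max)
{
    return (prev_avg + (double)((len_max + len_min) / 2)) / 2.0;
}
```

Repeated updates converge toward the current midpoint rather than tracking an exact arithmetic mean.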
@@ -910,7 +913,7 @@ static enum nDPIsrvd_callback_return distributor_json_printer(struct nDPIsrvd_so
}
printf("%0" NETWORK_BUFFER_LENGTH_DIGITS_STR "llu%.*s",
sock->buffer.json_string_length - NETWORK_BUFFER_LENGTH_DIGITS,
sock->buffer.json_message_length - NETWORK_BUFFER_LENGTH_DIGITS,
nDPIsrvd_json_buffer_length(sock),
nDPIsrvd_json_buffer_string(sock));
return CALLBACK_OK;
@@ -1008,10 +1011,10 @@ static void * distributor_client_mainloop_thread(void * const arg)
}
sock_stats = (struct distributor_global_user_data *)mock_sock->global_user_data;
sock_stats->json_string_len_min = (unsigned long long int)-1;
sock_stats->json_message_len_min = (unsigned long long int)-1;
sock_stats->options.do_hash_checks = 1;
buff_stats = (struct distributor_global_user_data *)mock_buff->global_user_data;
buff_stats->json_string_len_min = (unsigned long long int)-1;
buff_stats->json_message_len_min = (unsigned long long int)-1;
buff_stats->options.do_hash_checks = 0;
mock_null_shutdown_events = (int *)mock_null->global_user_data;
*mock_null_shutdown_events = 0;
@@ -1065,12 +1068,12 @@ static void * distributor_client_mainloop_thread(void * const arg)
{
logger(1, "JSON parsing failed: %s", nDPIsrvd_enum_to_string(parse_ret));
logger(1,
"Problematic JSON string (mock sock, start: %zu, length: %llu, buffer usage: %zu): %.*s",
mock_sock->buffer.json_string_start,
mock_sock->buffer.json_string_length,
"Problematic JSON message (mock sock, start: %zu, length: %llu, buffer usage: %zu): %.*s",
mock_sock->buffer.json_message_start,
mock_sock->buffer.json_message_length,
mock_sock->buffer.buf.used,
(int)mock_sock->buffer.json_string_length,
mock_sock->buffer.json_string);
(int)mock_sock->buffer.json_message_length,
mock_sock->buffer.json_message);
THREAD_ERROR_GOTO(trv);
}
@@ -1100,12 +1103,12 @@ static void * distributor_client_mainloop_thread(void * const arg)
{
logger(1, "JSON parsing failed: %s", nDPIsrvd_enum_to_string(parse_ret));
logger(1,
"Problematic JSON string (buff sock, start: %zu, length: %llu, buffer usage: %zu): %.*s",
mock_buff->buffer.json_string_start,
mock_buff->buffer.json_string_length,
"Problematic JSON message (buff sock, start: %zu, length: %llu, buffer usage: %zu): %.*s",
mock_buff->buffer.json_message_start,
mock_buff->buffer.json_message_length,
mock_buff->buffer.buf.used,
(int)mock_buff->buffer.json_string_length,
mock_buff->buffer.json_string);
(int)mock_buff->buffer.json_message_length,
mock_buff->buffer.json_message);
THREAD_ERROR_GOTO(trv);
}
@@ -1135,12 +1138,12 @@ static void * distributor_client_mainloop_thread(void * const arg)
{
logger(1, "JSON parsing failed: %s", nDPIsrvd_enum_to_string(parse_ret));
logger(1,
"Problematic JSON string (buff sock, start: %zu, length: %llu, buffer usage: %zu): %.*s",
mock_null->buffer.json_string_start,
mock_null->buffer.json_string_length,
"Problematic JSON message (null sock, start: %zu, length: %llu, buffer usage: %zu): %.*s",
mock_null->buffer.json_message_start,
mock_null->buffer.json_message_length,
mock_null->buffer.buf.used,
(int)mock_null->buffer.json_string_length,
mock_null->buffer.json_string);
(int)mock_null->buffer.json_message_length,
mock_null->buffer.json_message);
THREAD_ERROR_GOTO(trv);
}
}
@@ -1286,64 +1289,66 @@ static void * nDPId_mainloop_thread(void * const arg)
if (thread_block_signals() != 0)
{
logger(1, "nDPId block signals failed: %s", strerror(errno));
THREAD_ERROR_GOTO(trr);
}
if (setup_reader_threads() != 0)
{
THREAD_ERROR(trr);
goto error;
logger(1, "%s", "nDPId setup reader threads failed");
THREAD_ERROR_GOTO(trr);
}
/* Replace the nDPId JSON socket fd with the one from our pipe and hope that no socket-specific code path is triggered. */
reader_threads[0].collector_sockfd = mock_pipefds[PIPE_nDPId];
reader_threads[0].collector_sock_last_errno = 0;
if (set_collector_block(&reader_threads[0]) != 0)
reader_threads[nrv->thread_index].collector_sockfd = mock_pipefds[PIPE_nDPId];
reader_threads[nrv->thread_index].collector_sock_last_errno = 0;
if (set_collector_block(&reader_threads[nrv->thread_index]) != 0)
{
goto error;
THREAD_ERROR_GOTO(trr);
}
logger(0, "nDPId thread initialize done");
thread_signal(&start_condition);
thread_wait(&start_condition);
jsonize_daemon(&reader_threads[0], DAEMON_EVENT_INIT);
jsonize_daemon(&reader_threads[nrv->thread_index], DAEMON_EVENT_INIT);
/* restore SIGPIPE to the default handler (Termination) */
if (signal(SIGPIPE, SIG_DFL) == SIG_ERR)
{
goto error;
logger(1, "nDPId restore SIGPIPE to the default handler: %s", strerror(errno));
THREAD_ERROR_GOTO(trr);
}
run_pcap_loop(&reader_threads[0]);
run_capture_loop(&reader_threads[nrv->thread_index]);
process_remaining_flows();
for (size_t i = 0; i < nDPId_options.reader_thread_count; ++i)
{
nrv->packets_captured += reader_threads[i].workflow->packets_captured;
nrv->packets_processed += reader_threads[i].workflow->packets_processed;
nrv->total_skipped_flows += reader_threads[i].workflow->total_skipped_flows;
nrv->total_l4_payload_len += reader_threads[i].workflow->total_l4_payload_len;
nrv->not_detected_flow_protocols += reader_threads[i].workflow->total_not_detected_flows;
nrv->guessed_flow_protocols += reader_threads[i].workflow->total_guessed_flows;
nrv->detected_flow_protocols += reader_threads[i].workflow->total_detected_flows;
nrv->flow_detection_updates += reader_threads[i].workflow->total_flow_detection_updates;
nrv->flow_updates += reader_threads[i].workflow->total_flow_updates;
nrv->packets_captured += reader_threads[nrv->thread_index].workflow->packets_captured;
nrv->packets_processed += reader_threads[nrv->thread_index].workflow->packets_processed;
nrv->total_skipped_flows += reader_threads[nrv->thread_index].workflow->total_skipped_flows;
nrv->total_l4_payload_len += reader_threads[nrv->thread_index].workflow->total_l4_payload_len;
nrv->total_active_flows += reader_threads[i].workflow->total_active_flows;
nrv->total_idle_flows += reader_threads[i].workflow->total_idle_flows;
nrv->cur_active_flows += reader_threads[i].workflow->cur_active_flows;
nrv->cur_idle_flows += reader_threads[i].workflow->cur_idle_flows;
nrv->not_detected_flow_protocols += reader_threads[nrv->thread_index].workflow->total_not_detected_flows;
nrv->guessed_flow_protocols += reader_threads[nrv->thread_index].workflow->total_guessed_flows;
nrv->detected_flow_protocols += reader_threads[nrv->thread_index].workflow->total_detected_flows;
nrv->flow_detection_updates += reader_threads[nrv->thread_index].workflow->total_flow_detection_updates;
nrv->flow_updates += reader_threads[nrv->thread_index].workflow->total_flow_updates;
nrv->total_active_flows += reader_threads[nrv->thread_index].workflow->total_active_flows;
nrv->total_idle_flows += reader_threads[nrv->thread_index].workflow->total_idle_flows;
nrv->cur_active_flows += reader_threads[nrv->thread_index].workflow->cur_active_flows;
nrv->cur_idle_flows += reader_threads[nrv->thread_index].workflow->cur_idle_flows;
#ifdef ENABLE_ZLIB
nrv->total_compressions += reader_threads[i].workflow->total_compressions;
nrv->total_compression_diff += reader_threads[i].workflow->total_compression_diff;
nrv->current_compression_diff += reader_threads[i].workflow->current_compression_diff;
nrv->total_compressions += reader_threads[nrv->thread_index].workflow->total_compressions;
nrv->total_compression_diff += reader_threads[nrv->thread_index].workflow->total_compression_diff;
nrv->current_compression_diff += reader_threads[nrv->thread_index].workflow->current_compression_diff;
#endif
nrv->total_events_serialized += reader_threads[i].workflow->total_events_serialized;
}
nrv->total_events_serialized += reader_threads[nrv->thread_index].workflow->total_events_serialized;
error:
free_reader_threads();
thread_signal(&start_condition);
close(mock_pipefds[PIPE_nDPId]);
mock_pipefds[PIPE_nDPId] = -1;
logger(0, "%s", "nDPId worker thread exits..");
return NULL;
@@ -1351,10 +1356,10 @@ error:
static void usage(char const * const arg0)
{
fprintf(stderr, "usage: %s [path-to-pcap-file]\n", arg0);
fprintf(stderr, "usage: %s [path-to-pcap-file] [optional-nDPId-config-file]\n", arg0);
}
static int thread_wait_for_termination(pthread_t thread, time_t wait_time_secs, struct thread_return_value * const trv)
static int thread_wait_for_termination(pthread_t thread, time_t wait_time_secs, struct thread_return_value * trv)
{
#if !defined(__FreeBSD__) && !defined(__APPLE__)
struct timespec ts;
@@ -1524,6 +1529,8 @@ static int nio_selftest()
logger(0, "%s", "Using poll for nio");
#endif
int pipefds[2] = {-1, -1};
#ifdef ENABLE_EPOLL
if (nio_use_epoll(&io, 32) != NIO_SUCCESS)
#else
@@ -1534,7 +1541,6 @@ static int nio_selftest()
goto error;
}
int pipefds[2];
int rv = pipe(pipefds);
if (rv < 0)
{
@@ -1557,7 +1563,11 @@ static int nio_selftest()
char const wbuf[] = "AAAA";
size_t const wlen = strnlen(wbuf, sizeof(wbuf));
write(pipefds[1], wbuf, wlen);
if (write(pipefds[1], wbuf, wlen) < 0)
{
logger(1, "Write '%s' (%zu bytes) to pipe failed with: %s", wbuf, wlen, strerror(errno));
goto error;
}
if (nio_run(&io, 1000) != NIO_SUCCESS)
{
@@ -1623,9 +1633,17 @@ static int nio_selftest()
}
nio_free(&io);
close(pipefds[0]);
close(pipefds[1]);
return 0;
error:
nio_free(&io);
if (pipefds[0] >= 0)
{
close(pipefds[0]);
}
if (pipefds[1] >= 0)
{
close(pipefds[1]);
}
return 1;
}
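The changed error path initializes `pipefds` to `{-1, -1}` and closes a descriptor only when it was actually opened. The same guard in isolation (helper name is illustrative):

```c
#include <assert.h>
#include <unistd.h>

/* Close a descriptor only if it is open; reset it to -1 afterwards so
 * repeated cleanup calls stay harmless. */
static int close_if_open(int * const fd)
{
    if (*fd < 0)
    {
        return 0; /* never opened (or already closed): nothing to do */
    }
    int const rv = close(*fd);
    *fd = -1;
    return rv;
}
```

Resetting to -1 makes the cleanup idempotent, which matters when the same error label is reachable both before and after the `pipe()` call.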
@@ -1634,12 +1652,15 @@ error:
distributor_return.thread_return_value.val != 0)
int main(int argc, char ** argv)
{
if (argc != 1 && argc != 2)
if (argc != 1 && argc != 2 && argc != 3)
{
usage(argv[0]);
return 1;
}
ndpi_set_memory_alloction_functions(ndpi_malloc_wrapper, ndpi_free_wrapper, ndpi_calloc_wrapper,
ndpi_realloc_wrapper, NULL, NULL, NULL, NULL);
init_logging("nDPId-test");
log_app_info();
@@ -1658,31 +1679,69 @@ int main(int argc, char ** argv)
retval += base64_selftest();
retval += nio_selftest();
logger(1, "Selftest returned: %d", retval);
logger(1, "Selftest returned: %d%s", retval, (retval == 0 ? " (OK)" : ""));
return retval;
}
nDPIsrvd_options.max_write_buffers = 32;
nDPId_options.enable_data_analysis = 1;
nDPId_options.max_packets_per_flow_to_send = 5;
if (access(argv[1], R_OK) != 0)
{
logger(1, "%s: pcap file `%s' does not exist or is not readable", argv[0], argv[1]);
return 1;
}
set_config_defaults(&config_map[0], nDPIsrvd_ARRAY_LENGTH(config_map));
set_config_defaults(&general_config_map[0], nDPIsrvd_ARRAY_LENGTH(general_config_map));
set_config_defaults(&tuning_config_map[0], nDPIsrvd_ARRAY_LENGTH(tuning_config_map));
{
int ret;
if (argc == 3)
{
set_cmdarg_string(&nDPId_options.config_file, argv[2]);
}
if (IS_CMDARG_SET(nDPId_options.config_file) != 0 &&
(ret = parse_config_file(GET_CMDARG_STR(nDPId_options.config_file), nDPId_parsed_config_line, NULL)) != 0)
{
if (ret > 0)
{
logger(1, "Config file `%s' is malformed", GET_CMDARG_STR(nDPId_options.config_file));
}
else if (ret == -ENOENT)
{
logger(1, "Path `%s' is not a regular file", GET_CMDARG_STR(nDPId_options.config_file));
}
else
{
logger(1,
"Could not open file `%s' for reading: %s",
GET_CMDARG_STR(nDPId_options.config_file),
strerror(errno));
}
return 1;
}
}
set_cmdarg_ull(&nDPIsrvd_options.max_write_buffers, 32);
set_cmdarg_string(&nDPId_options.pcap_file_or_interface, argv[1]);
set_cmdarg_boolean(&nDPId_options.decode_tunnel, 1);
set_cmdarg_boolean(&nDPId_options.enable_data_analysis, 1);
set_cmdarg_ull(&nDPId_options.max_packets_per_flow_to_send, 5);
#ifdef ENABLE_ZLIB
/*
* zLib compression is force-enabled for testing.
* Remember to compile nDPId with zlib enabled.
* There will be diffs while running `test/run_tests.sh' otherwise.
*/
nDPId_options.enable_zlib_compression = 1;
set_cmdarg_boolean(&nDPId_options.enable_zlib_compression, 1);
#endif
nDPId_options.memory_profiling_log_interval = (unsigned long long int)-1;
nDPId_options.reader_thread_count = 1; /* Please do not change this! Generating meaningful pcap diff's relies on a
single reader thread! */
set_cmdarg(&nDPId_options.instance_alias, "nDPId-test");
if (access(argv[1], R_OK) != 0)
{
logger(1, "%s: pcap file `%s' does not exist or is not readable", argv[0], argv[1]);
return 1;
}
set_cmdarg(&nDPId_options.pcap_file_or_interface, argv[1]);
#ifdef ENABLE_MEMORY_PROFILING
set_cmdarg_ull(&nDPId_options.memory_profiling_log_interval, (unsigned long long int)-1);
#endif
set_cmdarg_ull(&nDPId_options.reader_thread_count, 1); /* Please do not change this! Generating meaningful pcap
diffs relies on a single reader thread! */
set_cmdarg_string(&nDPId_options.instance_alias, "nDPId-test");
if (validate_options() != 0)
{
return 1;
@@ -1699,13 +1758,19 @@ int main(int argc, char ** argv)
distributor_un_sockfd = -1;
distributor_in_sockfd = -1;
global_context = ndpi_global_init();
if (global_context == NULL)
{
logger_early(1, "Could not initialize libnDPI global context.");
}
if (setup_remote_descriptors(MAX_REMOTE_DESCRIPTORS) != 0)
{
return 1;
}
pthread_t nDPId_thread;
struct nDPId_return_value nDPId_return = {};
struct nDPId_return_value nDPId_return = {.thread_index = 0};
if (pthread_create(&nDPId_thread, NULL, nDPId_mainloop_thread, &nDPId_return) != 0)
{
return 1;
@@ -1751,6 +1816,13 @@ int main(int argc, char ** argv)
}
logger(0, "%s", "All worker threads terminated..");
free_reader_threads();
if (global_context != NULL)
{
ndpi_global_deinit(global_context);
}
global_context = NULL;
if (THREADS_RETURNED_ERROR() != 0)
{
@@ -1840,20 +1912,22 @@ int main(int argc, char ** argv)
"~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n",
total_alloc_bytes -
sizeof(struct nDPId_workflow) *
nDPId_options.reader_thread_count /* We do not want to take the workflow into account. */,
GET_CMDARG_ULL(
nDPId_options.reader_thread_count) /* We do not want to take the workflow into account. */,
total_free_bytes -
sizeof(struct nDPId_workflow) *
nDPId_options.reader_thread_count /* We do not want to take the workflow into account. */,
GET_CMDARG_ULL(
nDPId_options.reader_thread_count) /* We do not want to take the workflow into account. */,
total_alloc_count,
total_free_count);
printf(
"~~ json string min len.......: %llu chars\n"
"~~ json string max len.......: %llu chars\n"
"~~ json string avg len.......: %llu chars\n",
distributor_return.stats.json_string_len_min,
distributor_return.stats.json_string_len_max,
(unsigned long long int)distributor_return.stats.json_string_len_avg);
"~~ json message min len.......: %llu chars\n"
"~~ json message max len.......: %llu chars\n"
"~~ json message avg len.......: %llu chars\n",
distributor_return.stats.json_message_len_min,
distributor_return.stats.json_message_len_max,
(unsigned long long int)distributor_return.stats.json_message_len_avg);
}
if (MT_GET_AND_ADD(ndpi_memory_alloc_bytes, 0) != MT_GET_AND_ADD(ndpi_memory_free_bytes, 0) ||
@@ -2031,9 +2105,9 @@ int main(int argc, char ** argv)
return 1;
}
if (nDPId_return.total_active_flows > distributor_return.stats.flow_detected_count +
distributor_return.stats.flow_guessed_count +
distributor_return.stats.flow_not_detected_count)
if (nDPId_return.total_active_flows != distributor_return.stats.flow_detected_count +
distributor_return.stats.flow_guessed_count +
distributor_return.stats.flow_not_detected_count)
{
logger(1,
"%s: Amount of total active flows not equal to the amount of received 'detected', 'guessed and "

nDPId.c (3056 changed lines)

File diff suppressed because it is too large


@@ -1,3 +1,6 @@
#if defined(__FreeBSD__) || defined(__APPLE__)
#include <sys/stat.h>
#endif
#include <arpa/inet.h>
#include <errno.h>
#include <fcntl.h>
@@ -90,27 +93,54 @@ static struct nDPIsrvd_address distributor_in_address = {
static struct
{
struct cmdarg config_file;
struct cmdarg pidfile;
struct cmdarg collector_un_sockpath;
struct cmdarg distributor_un_sockpath;
struct cmdarg distributor_in_address;
struct cmdarg user;
struct cmdarg group;
nDPIsrvd_ull max_remote_descriptors;
nDPIsrvd_ull max_write_buffers;
uint8_t bufferbloat_fallback_to_blocking;
struct cmdarg collector_group;
struct cmdarg distributor_group;
struct cmdarg max_remote_descriptors;
struct cmdarg max_write_buffers;
struct cmdarg bufferbloat_fallback_to_blocking;
#ifdef ENABLE_EPOLL
uint8_t use_poll;
struct cmdarg use_poll;
#endif
} nDPIsrvd_options = {.pidfile = CMDARG(nDPIsrvd_PIDFILE),
.collector_un_sockpath = CMDARG(COLLECTOR_UNIX_SOCKET),
.distributor_un_sockpath = CMDARG(DISTRIBUTOR_UNIX_SOCKET),
.distributor_in_address = CMDARG(NULL),
.user = CMDARG(DEFAULT_CHUSER),
.group = CMDARG(NULL),
.max_remote_descriptors = nDPIsrvd_MAX_REMOTE_DESCRIPTORS,
.max_write_buffers = nDPIsrvd_MAX_WRITE_BUFFERS,
.bufferbloat_fallback_to_blocking = 1};
} nDPIsrvd_options = {.config_file = CMDARG_STR(NULL),
.pidfile = CMDARG_STR(nDPIsrvd_PIDFILE),
.collector_un_sockpath = CMDARG_STR(COLLECTOR_UNIX_SOCKET),
.distributor_un_sockpath = CMDARG_STR(DISTRIBUTOR_UNIX_SOCKET),
.distributor_in_address = CMDARG_STR(NULL),
.user = CMDARG_STR(DEFAULT_CHUSER),
.group = CMDARG_STR(NULL),
.collector_group = CMDARG_STR(NULL),
.distributor_group = CMDARG_STR(NULL),
.max_remote_descriptors = CMDARG_ULL(nDPIsrvd_MAX_REMOTE_DESCRIPTORS),
.max_write_buffers = CMDARG_ULL(nDPIsrvd_MAX_WRITE_BUFFERS),
.bufferbloat_fallback_to_blocking = CMDARG_BOOL(1)
#ifdef ENABLE_EPOLL
,
.use_poll = CMDARG_BOOL(0)
#endif
};
struct confopt config_map[] = {CONFOPT("pidfile", &nDPIsrvd_options.pidfile),
CONFOPT("collector", &nDPIsrvd_options.collector_un_sockpath),
CONFOPT("distributor-unix", &nDPIsrvd_options.distributor_un_sockpath),
CONFOPT("distributor-in", &nDPIsrvd_options.distributor_in_address),
CONFOPT("user", &nDPIsrvd_options.user),
CONFOPT("group", &nDPIsrvd_options.group),
CONFOPT("collector-group", &nDPIsrvd_options.collector_group),
CONFOPT("distributor-group", &nDPIsrvd_options.distributor_group),
CONFOPT("max-remote-descriptors", &nDPIsrvd_options.max_remote_descriptors),
CONFOPT("max-write-buffers", &nDPIsrvd_options.max_write_buffers),
CONFOPT("blocking-io-fallback", &nDPIsrvd_options.bufferbloat_fallback_to_blocking)
#ifdef ENABLE_EPOLL
,
CONFOPT("poll", &nDPIsrvd_options.use_poll)
#endif
};
static void logger_nDPIsrvd(struct remote_desc const * const remote,
char const * const prefix,
@@ -128,7 +158,7 @@ static int drain_write_buffers_blocking(struct remote_desc * const remote);
static void nDPIsrvd_buffer_array_copy(void * dst, const void * src)
{
struct nDPIsrvd_write_buffer * const buf_dst = (struct nDPIsrvd_write_buffer *)dst;
struct nDPIsrvd_write_buffer const * const buf_src = (struct nDPIsrvd_write_buffer *)src;
struct nDPIsrvd_write_buffer const * const buf_src = (struct nDPIsrvd_write_buffer const *)src;
buf_dst->buf.ptr.raw = NULL;
if (nDPIsrvd_buffer_init(&buf_dst->buf, buf_src->buf.used) != 0)
@@ -229,7 +259,7 @@ static UT_array * get_additional_write_buffers(struct remote_desc * const remote
static int add_to_additional_write_buffers(struct remote_desc * const remote,
uint8_t * const buf,
nDPIsrvd_ull json_string_length)
nDPIsrvd_ull json_message_length)
{
struct nDPIsrvd_write_buffer buf_src = {};
UT_array * const additional_write_buffers = get_additional_write_buffers(remote);
@@ -239,9 +269,9 @@ static int add_to_additional_write_buffers(struct remote_desc * const remote,
return -1;
}
if (utarray_len(additional_write_buffers) >= nDPIsrvd_options.max_write_buffers)
if (utarray_len(additional_write_buffers) >= GET_CMDARG_ULL(nDPIsrvd_options.max_write_buffers))
{
if (nDPIsrvd_options.bufferbloat_fallback_to_blocking == 0)
if (GET_CMDARG_BOOL(nDPIsrvd_options.bufferbloat_fallback_to_blocking) == 0)
{
logger_nDPIsrvd(remote,
"Buffer limit for",
@@ -264,16 +294,16 @@ static int add_to_additional_write_buffers(struct remote_desc * const remote,
}
buf_src.buf.ptr.raw = buf;
buf_src.buf.used = buf_src.buf.max = json_string_length;
buf_src.buf.used = buf_src.buf.max = json_message_length;
utarray_push_back(additional_write_buffers, &buf_src);
return 0;
}
static void logger_nDPIsrvd(struct remote_desc const * const remote,
char const * const prefix,
char const * const format,
...)
static __attribute__((format(printf, 3, 4))) void logger_nDPIsrvd(struct remote_desc const * const remote,
char const * const prefix,
char const * const format,
...)
{
char logbuf[512];
va_list ap;
@@ -498,7 +528,7 @@ static int create_listen_sockets(void)
return 1;
}
if (is_cmdarg_set(&nDPIsrvd_options.distributor_in_address) != 0)
if (IS_CMDARG_SET(nDPIsrvd_options.distributor_in_address) != 0)
{
distributor_in_sockfd = socket(distributor_in_address.raw.sa_family, SOCK_STREAM, 0);
if (distributor_in_sockfd < 0 || set_fd_cloexec(distributor_in_sockfd) < 0)
@@ -528,7 +558,7 @@ static int create_listen_sockets(void)
int written = snprintf(collector_addr.sun_path,
sizeof(collector_addr.sun_path),
"%s",
get_cmdarg(&nDPIsrvd_options.collector_un_sockpath));
GET_CMDARG_STR(nDPIsrvd_options.collector_un_sockpath));
if (written < 0)
{
logger(1, "snprintf failed: %s", strerror(errno));
@@ -536,10 +566,7 @@ static int create_listen_sockets(void)
}
else if (written == sizeof(collector_addr.sun_path))
{
logger(1,
"Collector UNIX socket path too long, current/max: %zu/%zu",
strlen(get_cmdarg(&nDPIsrvd_options.collector_un_sockpath)),
sizeof(collector_addr.sun_path) - 1);
logger(1, "Collector UNIX socket path too long, max: %zu characters", sizeof(collector_addr.sun_path) - 1);
return 1;
}
@@ -547,7 +574,7 @@ static int create_listen_sockets(void)
{
logger(1,
"Error binding Collector UNIX socket to `%s': %s",
get_cmdarg(&nDPIsrvd_options.collector_un_sockpath),
GET_CMDARG_STR(nDPIsrvd_options.collector_un_sockpath),
strerror(errno));
return 1;
}
@@ -559,7 +586,7 @@ static int create_listen_sockets(void)
int written = snprintf(distributor_addr.sun_path,
sizeof(distributor_addr.sun_path),
"%s",
get_cmdarg(&nDPIsrvd_options.distributor_un_sockpath));
GET_CMDARG_STR(nDPIsrvd_options.distributor_un_sockpath));
if (written < 0)
{
logger(1, "snprintf failed: %s", strerror(errno));
@@ -568,8 +595,7 @@ static int create_listen_sockets(void)
else if (written == sizeof(distributor_addr.sun_path))
{
logger(1,
"Distributor UNIX socket path too long, current/max: %zu/%zu",
strlen(get_cmdarg(&nDPIsrvd_options.distributor_un_sockpath)),
"Distributor UNIX socket path too long, max: %zu characters",
sizeof(distributor_addr.sun_path) - 1);
return 2;
}
@@ -578,19 +604,19 @@ static int create_listen_sockets(void)
{
logger(1,
"Error binding Distributor socket to `%s': %s",
get_cmdarg(&nDPIsrvd_options.distributor_un_sockpath),
GET_CMDARG_STR(nDPIsrvd_options.distributor_un_sockpath),
strerror(errno));
return 2;
}
}
if (is_cmdarg_set(&nDPIsrvd_options.distributor_in_address) != 0)
if (IS_CMDARG_SET(nDPIsrvd_options.distributor_in_address) != 0)
{
if (bind(distributor_in_sockfd, &distributor_in_address.raw, distributor_in_address.size) < 0)
{
logger(1,
"Error binding Distributor TCP/IP socket to %s: %s",
get_cmdarg(&nDPIsrvd_options.distributor_in_address),
GET_CMDARG_STR(nDPIsrvd_options.distributor_in_address),
strerror(errno));
return 3;
}
@@ -598,7 +624,7 @@ static int create_listen_sockets(void)
{
logger(1,
"Error listening Distributor TCP/IP socket to %s: %s",
get_cmdarg(&nDPIsrvd_options.distributor_in_address),
GET_CMDARG_STR(nDPIsrvd_options.distributor_in_address),
strerror(errno));
return 3;
}
@@ -606,7 +632,7 @@ static int create_listen_sockets(void)
{
logger(1,
"Error setting Distributor TCP/IP socket %s to non-blocking mode: %s",
get_cmdarg(&nDPIsrvd_options.distributor_in_address),
GET_CMDARG_STR(nDPIsrvd_options.distributor_in_address),
strerror(errno));
return 3;
}
@@ -622,7 +648,7 @@ static int create_listen_sockets(void)
{
logger(1,
"Error setting Collector UNIX socket `%s' to non-blocking mode: %s",
get_cmdarg(&nDPIsrvd_options.collector_un_sockpath),
GET_CMDARG_STR(nDPIsrvd_options.collector_un_sockpath),
strerror(errno));
return 3;
}
@@ -631,7 +657,7 @@ static int create_listen_sockets(void)
{
logger(1,
"Error setting Distributor UNIX socket `%s' to non-blocking mode: %s",
get_cmdarg(&nDPIsrvd_options.distributor_un_sockpath),
GET_CMDARG_STR(nDPIsrvd_options.distributor_un_sockpath),
strerror(errno));
return 3;
}
@@ -804,10 +830,13 @@ static int nDPIsrvd_parse_options(int argc, char ** argv)
{
int opt;
while ((opt = getopt(argc, argv, "lL:c:dp:s:S:m:u:g:C:Dvh")) != -1)
while ((opt = getopt(argc, argv, "f:lL:c:dp:s:S:G:m:u:g:C:Dvh")) != -1)
{
switch (opt)
{
case 'f':
set_cmdarg_string(&nDPIsrvd_options.config_file, optarg);
break;
case 'l':
enable_console_logger();
break;
@@ -818,11 +847,11 @@ static int nDPIsrvd_parse_options(int argc, char ** argv)
}
break;
case 'c':
set_cmdarg(&nDPIsrvd_options.collector_un_sockpath, optarg);
set_cmdarg_string(&nDPIsrvd_options.collector_un_sockpath, optarg);
break;
case 'e':
#ifdef ENABLE_EPOLL
nDPIsrvd_options.use_poll = 1;
set_cmdarg_boolean(&nDPIsrvd_options.use_poll, 1);
#else
logger_early(1, "%s", "nDPIsrvd was built w/o epoll() support, poll() is already the default");
#endif
@@ -831,36 +860,67 @@ static int nDPIsrvd_parse_options(int argc, char ** argv)
daemonize_enable();
break;
case 'p':
set_cmdarg(&nDPIsrvd_options.pidfile, optarg);
set_cmdarg_string(&nDPIsrvd_options.pidfile, optarg);
break;
case 's':
set_cmdarg(&nDPIsrvd_options.distributor_un_sockpath, optarg);
set_cmdarg_string(&nDPIsrvd_options.distributor_un_sockpath, optarg);
break;
case 'S':
set_cmdarg(&nDPIsrvd_options.distributor_in_address, optarg);
set_cmdarg_string(&nDPIsrvd_options.distributor_in_address, optarg);
break;
case 'G':
{
char const * const sep = strchr(optarg, ':');
char group[256];
if (sep == NULL)
{
fprintf(stderr, "%s: Argument for `-G' is not in the format group:group\n", argv[0]);
return 1;
}
if (snprintf(group, sizeof(group), "%.*s", (int)(sep - optarg), optarg) > 0)
{
set_cmdarg_string(&nDPIsrvd_options.collector_group, group);
}
if (snprintf(group, sizeof(group), "%s", sep + 1) > 0)
{
set_cmdarg_string(&nDPIsrvd_options.distributor_group, group);
}
break;
}
case 'm':
if (str_value_to_ull(optarg, &nDPIsrvd_options.max_remote_descriptors) != CONVERSION_OK)
{
nDPIsrvd_ull tmp;
if (str_value_to_ull(optarg, &tmp) != CONVERSION_OK)
{
fprintf(stderr, "%s: Argument for `-m' is not a number: %s\n", argv[0], optarg);
return 1;
}
set_cmdarg_ull(&nDPIsrvd_options.max_remote_descriptors, tmp);
break;
}
case 'u':
set_cmdarg(&nDPIsrvd_options.user, optarg);
set_cmdarg_string(&nDPIsrvd_options.user, optarg);
break;
case 'g':
set_cmdarg(&nDPIsrvd_options.group, optarg);
set_cmdarg_string(&nDPIsrvd_options.group, optarg);
break;
case 'C':
if (str_value_to_ull(optarg, &nDPIsrvd_options.max_write_buffers) != CONVERSION_OK)
{
nDPIsrvd_ull tmp;
if (str_value_to_ull(optarg, &tmp) != CONVERSION_OK)
{
fprintf(stderr, "%s: Argument for `-C' is not a number: %s\n", argv[0], optarg);
return 1;
}
set_cmdarg_ull(&nDPIsrvd_options.max_write_buffers, tmp);
break;
}
case 'D':
nDPIsrvd_options.bufferbloat_fallback_to_blocking = 0;
set_cmdarg_boolean(&nDPIsrvd_options.bufferbloat_fallback_to_blocking, 0);
break;
case 'v':
fprintf(stderr, "%s", get_nDPId_version());
@@ -869,11 +929,14 @@ static int nDPIsrvd_parse_options(int argc, char ** argv)
default:
fprintf(stderr, "%s\n", get_nDPId_version());
fprintf(stderr,
"Usage: %s [-l] [-L logfile] [-c path-to-unix-sock] [-e] [-d] [-p pidfile]\n"
"Usage: %s [-f config-file] [-l] [-L logfile]\n"
"\t[-c path-to-unix-sock] [-e] [-d] [-p pidfile]\n"
"\t[-s path-to-distributor-unix-socket] [-S distributor-host:port]\n"
"\t[-G collector-unix-socket-group:distributor-unix-socket-group]\n"
"\t[-m max-remote-descriptors] [-u user] [-g group]\n"
"\t[-C max-buffered-json-lines] [-D]\n"
"\t[-v] [-h]\n\n"
"\t-f\tLoad nDPIsrvd options from a configuration file.\n"
"\t-l\tLog all messages to stderr.\n"
"\t-L\tLog all messages to a log file.\n"
"\t-c\tPath to a listening UNIX socket (nDPIsrvd Collector).\n"
@@ -892,40 +955,45 @@ static int nDPIsrvd_parse_options(int argc, char ** argv)
"\t-s\tPath to a listening UNIX socket (nDPIsrvd Distributor).\n"
"\t \tDefault: %s\n"
"\t-S\tAddress:Port of the listening TCP/IP socket (nDPIsrvd Distributor).\n"
"\t-G\tGroup owner of the UNIX collector/distributor socket.\n"
"\t \tDefault: Either the group set via `-g', otherwise the primary group of `-u'\n"
"\t-v\tversion\n"
"\t-h\tthis\n\n",
argv[0],
get_cmdarg(&nDPIsrvd_options.collector_un_sockpath),
get_cmdarg(&nDPIsrvd_options.pidfile),
get_cmdarg(&nDPIsrvd_options.user),
get_cmdarg(&nDPIsrvd_options.distributor_un_sockpath));
nDPIsrvd_options.collector_un_sockpath.string.default_value,
nDPIsrvd_options.pidfile.string.default_value,
nDPIsrvd_options.user.string.default_value,
nDPIsrvd_options.distributor_un_sockpath.string.default_value);
return 1;
}
}
if (is_path_absolute("Pidfile", get_cmdarg(&nDPIsrvd_options.pidfile)) != 0)
set_config_defaults(&config_map[0], nDPIsrvd_ARRAY_LENGTH(config_map));
if (is_path_absolute("Pidfile", GET_CMDARG_STR(nDPIsrvd_options.pidfile)) != 0)
{
return 1;
}
if (is_path_absolute("Collector UNIX socket", get_cmdarg(&nDPIsrvd_options.collector_un_sockpath)) != 0)
if (is_path_absolute("Collector UNIX socket", GET_CMDARG_STR(nDPIsrvd_options.collector_un_sockpath)) != 0)
{
return 1;
}
if (is_path_absolute("Distributor UNIX socket", get_cmdarg(&nDPIsrvd_options.distributor_un_sockpath)) != 0)
if (is_path_absolute("Distributor UNIX socket", GET_CMDARG_STR(nDPIsrvd_options.distributor_un_sockpath)) != 0)
{
return 1;
}
if (is_cmdarg_set(&nDPIsrvd_options.distributor_in_address) != 0)
if (IS_CMDARG_SET(nDPIsrvd_options.distributor_in_address) != 0)
{
if (nDPIsrvd_setup_address(&distributor_in_address, get_cmdarg(&nDPIsrvd_options.distributor_in_address)) != 0)
if (nDPIsrvd_setup_address(&distributor_in_address, GET_CMDARG_STR(nDPIsrvd_options.distributor_in_address)) !=
0)
{
logger_early(1,
"%s: Could not parse address %s",
argv[0],
get_cmdarg(&nDPIsrvd_options.distributor_in_address));
GET_CMDARG_STR(nDPIsrvd_options.distributor_in_address));
return 1;
}
if (distributor_in_address.raw.sa_family == AF_UNIX)
@@ -933,8 +1001,8 @@ static int nDPIsrvd_parse_options(int argc, char ** argv)
logger_early(1,
"%s: You've requested to setup another UNIX socket `%s', but there is already one at `%s'",
argv[0],
get_cmdarg(&nDPIsrvd_options.distributor_in_address),
get_cmdarg(&nDPIsrvd_options.distributor_un_sockpath));
GET_CMDARG_STR(nDPIsrvd_options.distributor_in_address),
GET_CMDARG_STR(nDPIsrvd_options.distributor_un_sockpath));
return 1;
}
}
@@ -1037,7 +1105,7 @@ static int new_connection(struct nio * const io, int eventfd)
current->event_collector_un.pid = ucred.pid;
#endif
logger_nDPIsrvd(current, "New collector connection from", "");
logger_nDPIsrvd(current, "New collector connection from", "%s", "");
break;
case DISTRIBUTOR_UN:
case DISTRIBUTOR_IN:
@@ -1108,7 +1176,7 @@ static int new_connection(struct nio * const io, int eventfd)
}
}
logger_nDPIsrvd(current, "New distributor connection from", "");
logger_nDPIsrvd(current, "New distributor connection from", "%s", "");
break;
}
@@ -1142,7 +1210,7 @@ static int new_connection(struct nio * const io, int eventfd)
static int handle_collector_protocol(struct nio * const io, struct remote_desc * const current)
{
struct nDPIsrvd_json_buffer * const json_read_buffer = get_read_buffer(current);
char * json_str_start = NULL;
char * json_msg_start = NULL;
if (json_read_buffer == NULL)
{
@@ -1160,28 +1228,28 @@ static int handle_collector_protocol(struct nio * const io, struct remote_desc *
}
errno = 0;
current->event_collector_un.json_bytes = strtoull(json_read_buffer->buf.ptr.text, &json_str_start, 10);
current->event_collector_un.json_bytes += json_str_start - json_read_buffer->buf.ptr.text;
current->event_collector_un.json_bytes = strtoull(json_read_buffer->buf.ptr.text, &json_msg_start, 10);
current->event_collector_un.json_bytes += json_msg_start - json_read_buffer->buf.ptr.text;
if (errno == ERANGE)
{
logger_nDPIsrvd(current, "BUG: Collector connection", "JSON string length exceeds numeric limits");
logger_nDPIsrvd(current, "BUG: Collector connection", "JSON message length exceeds numeric limits");
disconnect_client(io, current);
return 1;
}
if (json_str_start == json_read_buffer->buf.ptr.text)
if (json_msg_start == json_read_buffer->buf.ptr.text)
{
logger_nDPIsrvd(current,
"BUG: Collector connection",
"missing JSON string length in protocol preamble: \"%.*s\"",
"missing JSON message length in protocol preamble: \"%.*s\"",
NETWORK_BUFFER_LENGTH_DIGITS,
json_read_buffer->buf.ptr.text);
disconnect_client(io, current);
return 1;
}
if (json_str_start - json_read_buffer->buf.ptr.text != NETWORK_BUFFER_LENGTH_DIGITS)
if (json_msg_start - json_read_buffer->buf.ptr.text != NETWORK_BUFFER_LENGTH_DIGITS)
{
logger_nDPIsrvd(current,
"BUG: Collector connection",
@@ -1189,14 +1257,14 @@ static int handle_collector_protocol(struct nio * const io, struct remote_desc *
"%ld "
"bytes",
NETWORK_BUFFER_LENGTH_DIGITS,
(long int)(json_str_start - json_read_buffer->buf.ptr.text));
(long int)(json_msg_start - json_read_buffer->buf.ptr.text));
}
if (current->event_collector_un.json_bytes > json_read_buffer->buf.max)
{
logger_nDPIsrvd(current,
"BUG: Collector connection",
"JSON string too big: %llu > %zu",
"JSON message too big: %llu > %zu",
current->event_collector_un.json_bytes,
json_read_buffer->buf.max);
disconnect_client(io, current);
@@ -1213,7 +1281,7 @@ static int handle_collector_protocol(struct nio * const io, struct remote_desc *
{
logger_nDPIsrvd(current,
"BUG: Collector connection",
"invalid JSON string: %.*s...",
"invalid JSON message: %.*s...",
(int)current->event_collector_un.json_bytes > 512 ? 512
: (int)current->event_collector_un.json_bytes,
json_read_buffer->buf.ptr.text);
@@ -1244,7 +1312,7 @@ static int handle_incoming_data(struct nio * const io, struct remote_desc * cons
return 1;
}
/* read JSON strings (or parts) from the UNIX socket (collecting) */
/* read JSON messages (or parts) from the UNIX socket (collecting) */
if (json_read_buffer->buf.used == json_read_buffer->buf.max)
{
logger_nDPIsrvd(current,
@@ -1516,8 +1584,9 @@ static int mainloop(struct nio * const io)
static int setup_event_queue(struct nio * const io)
{
#ifdef ENABLE_EPOLL
if ((nDPIsrvd_options.use_poll == 0 && nio_use_epoll(io, 32) != NIO_SUCCESS) ||
(nDPIsrvd_options.use_poll != 0 && nio_use_poll(io, nDPIsrvd_MAX_REMOTE_DESCRIPTORS) != NIO_SUCCESS))
if ((GET_CMDARG_BOOL(nDPIsrvd_options.use_poll) == 0 && nio_use_epoll(io, 32) != NIO_SUCCESS) ||
(GET_CMDARG_BOOL(nDPIsrvd_options.use_poll) != 0 &&
nio_use_poll(io, nDPIsrvd_MAX_REMOTE_DESCRIPTORS) != NIO_SUCCESS))
#else
if (nio_use_poll(io, nDPIsrvd_MAX_REMOTE_DESCRIPTORS) != NIO_SUCCESS)
#endif
@@ -1576,6 +1645,49 @@ static int setup_remote_descriptors(nDPIsrvd_ull max_remote_descriptors)
return 0;
}
static int nDPIsrvd_parsed_config_line(
int lineno, char const * const section, char const * const name, char const * const value, void * const user_data)
{
(void)user_data;
if (strnlen(section, INI_MAX_SECTION) == nDPIsrvd_STRLEN_SZ("general") &&
strncmp(section, "general", INI_MAX_SECTION) == 0)
{
size_t i;
for (i = 0; i < nDPIsrvd_ARRAY_LENGTH(config_map); ++i)
{
if (strnlen(name, INI_MAX_NAME) == strnlen(config_map[i].key, INI_MAX_NAME) &&
strncmp(name, config_map[i].key, INI_MAX_NAME) == 0)
{
if (IS_CMDARG_SET(*config_map[i].opt) != 0)
{
logger_early(1, "General config key `%s' already set, ignoring value `%s'", name, value);
}
else
{
if (set_config_from(&config_map[i], value) != 0)
{
return 0;
}
}
break;
}
}
if (i == nDPIsrvd_ARRAY_LENGTH(config_map))
{
logger_early(1, "Invalid general config key `%s' at line %d", name, lineno);
}
}
else
{
logger_early(
1, "Invalid config section `%s' at line %d with key `%s' and value `%s'", section, lineno, name, value);
}
return 1;
}
#ifndef NO_MAIN
int main(int argc, char ** argv)
{
@@ -1594,6 +1706,32 @@ int main(int argc, char ** argv)
{
return 1;
}
{
int ret;
if (IS_CMDARG_SET(nDPIsrvd_options.config_file) != 0 &&
(ret =
parse_config_file(GET_CMDARG_STR(nDPIsrvd_options.config_file), nDPIsrvd_parsed_config_line, NULL)) !=
0)
{
if (ret > 0)
{
logger_early(1, "Config file `%s' is malformed", GET_CMDARG_STR(nDPIsrvd_options.config_file));
}
else if (ret == -ENOENT)
{
logger_early(1, "Path `%s' is not a regular file", GET_CMDARG_STR(nDPIsrvd_options.config_file));
}
else
{
logger_early(1,
"Could not open file `%s' for reading: %s",
GET_CMDARG_STR(nDPIsrvd_options.config_file),
strerror(errno));
}
return 1;
}
}
if (is_daemonize_enabled() != 0 && is_console_logger_enabled() != 0)
{
@@ -1604,32 +1742,32 @@ int main(int argc, char ** argv)
return 1;
}
if (access(get_cmdarg(&nDPIsrvd_options.collector_un_sockpath), F_OK) == 0)
if (access(GET_CMDARG_STR(nDPIsrvd_options.collector_un_sockpath), F_OK) == 0)
{
logger_early(1,
"UNIX socket `%s' exists; nDPIsrvd already running? "
"Please remove the socket manually or change socket path.",
get_cmdarg(&nDPIsrvd_options.collector_un_sockpath));
GET_CMDARG_STR(nDPIsrvd_options.collector_un_sockpath));
return 1;
}
if (access(get_cmdarg(&nDPIsrvd_options.distributor_un_sockpath), F_OK) == 0)
if (access(GET_CMDARG_STR(nDPIsrvd_options.distributor_un_sockpath), F_OK) == 0)
{
logger_early(1,
"UNIX socket `%s' exists; nDPIsrvd already running? "
"Please remove the socket manually or change socket path.",
get_cmdarg(&nDPIsrvd_options.distributor_un_sockpath));
GET_CMDARG_STR(nDPIsrvd_options.distributor_un_sockpath));
return 1;
}
log_app_info();
if (daemonize_with_pidfile(get_cmdarg(&nDPIsrvd_options.pidfile)) != 0)
if (daemonize_with_pidfile(GET_CMDARG_STR(nDPIsrvd_options.pidfile)) != 0)
{
goto error;
}
if (setup_remote_descriptors(nDPIsrvd_options.max_remote_descriptors) != 0)
if (setup_remote_descriptors(GET_CMDARG_ULL(nDPIsrvd_options.max_remote_descriptors)) != 0)
{
goto error;
}
@@ -1641,11 +1779,11 @@ int main(int argc, char ** argv)
case 1:
goto error;
case 2:
if (unlink(get_cmdarg(&nDPIsrvd_options.collector_un_sockpath)) != 0)
if (unlink(GET_CMDARG_STR(nDPIsrvd_options.collector_un_sockpath)) != 0)
{
logger(1,
"Could not unlink `%s': %s",
get_cmdarg(&nDPIsrvd_options.collector_un_sockpath),
GET_CMDARG_STR(nDPIsrvd_options.collector_un_sockpath),
strerror(errno));
}
goto error;
@@ -1655,8 +1793,8 @@ int main(int argc, char ** argv)
goto error;
}
logger(0, "collector UNIX socket listen on `%s'", get_cmdarg(&nDPIsrvd_options.collector_un_sockpath));
logger(0, "distributor UNIX listen on `%s'", get_cmdarg(&nDPIsrvd_options.distributor_un_sockpath));
logger(0, "collector UNIX socket listen on `%s'", GET_CMDARG_STR(nDPIsrvd_options.collector_un_sockpath));
logger(0, "distributor UNIX listen on `%s'", GET_CMDARG_STR(nDPIsrvd_options.distributor_un_sockpath));
switch (distributor_in_address.raw.sa_family)
{
default:
@@ -1672,36 +1810,98 @@ int main(int argc, char ** argv)
break;
}
errno = 0;
if (change_user_group(get_cmdarg(&nDPIsrvd_options.user),
get_cmdarg(&nDPIsrvd_options.group),
get_cmdarg(&nDPIsrvd_options.pidfile),
get_cmdarg(&nDPIsrvd_options.collector_un_sockpath),
get_cmdarg(&nDPIsrvd_options.distributor_un_sockpath)) != 0 &&
errno != EPERM)
int ret = chmod_chown(GET_CMDARG_STR(nDPIsrvd_options.collector_un_sockpath),
S_IRUSR | S_IWUSR | S_IWGRP,
GET_CMDARG_STR(nDPIsrvd_options.user),
IS_CMDARG_SET(nDPIsrvd_options.collector_group) != 0
? GET_CMDARG_STR(nDPIsrvd_options.collector_group)
: GET_CMDARG_STR(nDPIsrvd_options.group));
if (ret != 0)
{
if (errno != 0)
if (IS_CMDARG_SET(nDPIsrvd_options.collector_group) != 0 || IS_CMDARG_SET(nDPIsrvd_options.group) != 0)
{
logger(1,
"Change user/group to %s/%s failed: %s",
get_cmdarg(&nDPIsrvd_options.user),
(is_cmdarg_set(&nDPIsrvd_options.group) != 0 ? get_cmdarg(&nDPIsrvd_options.group) : "-"),
strerror(errno));
"Could not chmod/chown `%s' to user `%s' and group `%s': %s",
GET_CMDARG_STR(nDPIsrvd_options.collector_un_sockpath),
GET_CMDARG_STR(nDPIsrvd_options.user),
IS_CMDARG_SET(nDPIsrvd_options.collector_group) != 0
? GET_CMDARG_STR(nDPIsrvd_options.collector_group)
: GET_CMDARG_STR(nDPIsrvd_options.group),
strerror(ret));
}
else
{
logger(1,
"Change user/group to %s/%s failed.",
get_cmdarg(&nDPIsrvd_options.user),
(is_cmdarg_set(&nDPIsrvd_options.group) != 0 ? get_cmdarg(&nDPIsrvd_options.group) : "-"));
"Could not chmod/chown `%s' to user `%s': %s",
GET_CMDARG_STR(nDPIsrvd_options.collector_un_sockpath),
GET_CMDARG_STR(nDPIsrvd_options.user),
strerror(ret));
}
if (ret != EPERM)
{
goto error_unlink_sockets;
}
}
ret = chmod_chown(GET_CMDARG_STR(nDPIsrvd_options.distributor_un_sockpath),
S_IRUSR | S_IWUSR | S_IWGRP,
GET_CMDARG_STR(nDPIsrvd_options.user),
IS_CMDARG_SET(nDPIsrvd_options.distributor_group) != 0
? GET_CMDARG_STR(nDPIsrvd_options.distributor_group)
: GET_CMDARG_STR(nDPIsrvd_options.group));
if (ret != 0)
{
if (IS_CMDARG_SET(nDPIsrvd_options.distributor_group) != 0 || IS_CMDARG_SET(nDPIsrvd_options.group) != 0)
{
logger(1,
"Could not chmod/chown `%s' to user `%s' and group `%s': %s",
GET_CMDARG_STR(nDPIsrvd_options.distributor_un_sockpath),
GET_CMDARG_STR(nDPIsrvd_options.user),
IS_CMDARG_SET(nDPIsrvd_options.distributor_group) != 0
? GET_CMDARG_STR(nDPIsrvd_options.distributor_group)
: GET_CMDARG_STR(nDPIsrvd_options.group),
strerror(ret));
}
else
{
logger(1,
"Could not chmod/chown `%s' to user `%s': %s",
GET_CMDARG_STR(nDPIsrvd_options.distributor_un_sockpath),
GET_CMDARG_STR(nDPIsrvd_options.user),
strerror(ret));
}
if (ret != EPERM)
{
goto error_unlink_sockets;
}
}
ret = change_user_group(GET_CMDARG_STR(nDPIsrvd_options.user),
GET_CMDARG_STR(nDPIsrvd_options.group),
GET_CMDARG_STR(nDPIsrvd_options.pidfile));
if (ret != 0 && ret != -EPERM)
{
if (GET_CMDARG_STR(nDPIsrvd_options.group) != NULL)
{
logger(1,
"Change user/group to %s/%s failed: %s",
GET_CMDARG_STR(nDPIsrvd_options.user),
GET_CMDARG_STR(nDPIsrvd_options.group),
strerror(-ret));
}
else
{
logger(1, "Change user to %s failed: %s", GET_CMDARG_STR(nDPIsrvd_options.user), strerror(-ret));
}
goto error_unlink_sockets;
}
signal(SIGPIPE, SIG_IGN);
#if !defined(__FreeBSD__) && !defined(__APPLE__)
signal(SIGINT, SIG_IGN);
signal(SIGTERM, SIG_IGN);
signal(SIGQUIT, SIG_IGN);
#endif
if (setup_event_queue(&io) != 0)
{
@@ -1711,20 +1911,23 @@ int main(int argc, char ** argv)
retval = mainloop(&io);
error_unlink_sockets:
if (unlink(get_cmdarg(&nDPIsrvd_options.collector_un_sockpath)) != 0)
if (unlink(GET_CMDARG_STR(nDPIsrvd_options.collector_un_sockpath)) != 0)
{
logger(1, "Could not unlink `%s': %s", get_cmdarg(&nDPIsrvd_options.collector_un_sockpath), strerror(errno));
logger(1, "Could not unlink `%s': %s", GET_CMDARG_STR(nDPIsrvd_options.collector_un_sockpath), strerror(errno));
}
if (unlink(get_cmdarg(&nDPIsrvd_options.distributor_un_sockpath)) != 0)
if (unlink(GET_CMDARG_STR(nDPIsrvd_options.distributor_un_sockpath)) != 0)
{
logger(1, "Could not unlink `%s': %s", get_cmdarg(&nDPIsrvd_options.distributor_un_sockpath), strerror(errno));
logger(1,
"Could not unlink `%s': %s",
GET_CMDARG_STR(nDPIsrvd_options.distributor_un_sockpath),
strerror(errno));
}
error:
close(collector_un_sockfd);
close(distributor_un_sockfd);
close(distributor_in_sockfd);
daemonize_shutdown(get_cmdarg(&nDPIsrvd_options.pidfile));
daemonize_shutdown(GET_CMDARG_STR(nDPIsrvd_options.pidfile));
logger(0, "Bye.");
shutdown_logging();

ncrypt.c Normal file

@@ -0,0 +1,207 @@
#include "ncrypt.h"
#include <endian.h>
#include <openssl/conf.h>
#include <openssl/core_names.h>
#include <openssl/err.h>
#include <openssl/pem.h>
#include <openssl/ssl.h>
#include <unistd.h>
int ncrypt_init(void)
{
SSL_load_error_strings();
OpenSSL_add_all_algorithms();
return NCRYPT_SUCCESS;
}
static int ncrypt_init_ctx(struct ncrypt_ctx * const ctx, SSL_METHOD const * const meth)
{
if (meth == NULL)
{
return NCRYPT_NULL_PTR;
}
if (ctx->ssl_ctx != NULL)
{
return NCRYPT_ALREADY_INITIALIZED;
}
ctx->ssl_ctx = SSL_CTX_new(meth);
if (ctx->ssl_ctx == NULL)
{
return NCRYPT_NOT_INITIALIZED;
}
SSL_CTX_set_min_proto_version(ctx->ssl_ctx, TLS1_3_VERSION);
SSL_CTX_set_max_proto_version(ctx->ssl_ctx, TLS1_3_VERSION);
SSL_CTX_set_ciphersuites(ctx->ssl_ctx, "TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256");
return NCRYPT_SUCCESS;
}
static int ncrypt_load_pems(struct ncrypt_ctx * const ctx,
char const * const ca_path,
char const * const privkey_pem_path,
char const * const pubkey_pem_path)
{
if (SSL_CTX_use_certificate_file(ctx->ssl_ctx, pubkey_pem_path, SSL_FILETYPE_PEM) <= 0 ||
SSL_CTX_use_PrivateKey_file(ctx->ssl_ctx, privkey_pem_path, SSL_FILETYPE_PEM) <= 0 ||
SSL_CTX_load_verify_locations(ctx->ssl_ctx, ca_path, NULL) <= 0)
{
return NCRYPT_PEM_LOAD_FAILED;
}
SSL_CTX_set_verify(ctx->ssl_ctx, SSL_VERIFY_PEER | SSL_VERIFY_FAIL_IF_NO_PEER_CERT, NULL);
SSL_CTX_set_verify_depth(ctx->ssl_ctx, 4);
return NCRYPT_SUCCESS;
}
int ncrypt_init_client(struct ncrypt_ctx * const ctx,
char const * const ca_path,
char const * const privkey_pem_path,
char const * const pubkey_pem_path)
{
if (ca_path == NULL || privkey_pem_path == NULL || pubkey_pem_path == NULL)
{
return NCRYPT_NULL_PTR;
}
int rv = ncrypt_init_ctx(ctx, TLS_client_method());
if (rv != NCRYPT_SUCCESS)
{
return rv;
}
return ncrypt_load_pems(ctx, ca_path, privkey_pem_path, pubkey_pem_path);
}
int ncrypt_init_server(struct ncrypt_ctx * const ctx,
char const * const ca_path,
char const * const privkey_pem_path,
char const * const pubkey_pem_path)
{
if (ca_path == NULL || privkey_pem_path == NULL || pubkey_pem_path == NULL)
{
return NCRYPT_NULL_PTR;
}
int rv = ncrypt_init_ctx(ctx, TLS_server_method());
if (rv != NCRYPT_SUCCESS)
{
return rv;
}
return ncrypt_load_pems(ctx, ca_path, privkey_pem_path, pubkey_pem_path);
}
int ncrypt_on_connect(struct ncrypt_ctx * const ctx, int connect_fd, struct ncrypt_entity * const ent)
{
if (ent->ssl == NULL)
{
ent->ssl = SSL_new(ctx->ssl_ctx);
if (ent->ssl == NULL)
{
return NCRYPT_NOT_INITIALIZED;
}
SSL_set_fd(ent->ssl, connect_fd);
SSL_set_connect_state(ent->ssl);
}
int rv = SSL_do_handshake(ent->ssl);
if (rv != 1)
{
return SSL_get_error(ent->ssl, rv);
}
return NCRYPT_SUCCESS;
}
int ncrypt_on_accept(struct ncrypt_ctx * const ctx, int accept_fd, struct ncrypt_entity * const ent)
{
if (ent->ssl == NULL)
{
ent->ssl = SSL_new(ctx->ssl_ctx);
if (ent->ssl == NULL)
{
return NCRYPT_NOT_INITIALIZED;
}
SSL_set_fd(ent->ssl, accept_fd);
SSL_set_accept_state(ent->ssl);
}
int rv = SSL_accept(ent->ssl);
if (rv != 1)
{
return SSL_get_error(ent->ssl, rv);
}
return NCRYPT_SUCCESS;
}
ssize_t ncrypt_read(struct ncrypt_entity * const ent, char * const json_msg, size_t json_msg_len)
{
if (ent->ssl == NULL)
{
errno = EPROTO;
return -1;
}
int rv = SSL_read(ent->ssl, json_msg, json_msg_len);
if (rv <= 0)
{
int err = SSL_get_error(ent->ssl, rv);
if (err == SSL_ERROR_WANT_WRITE || err == SSL_ERROR_WANT_READ)
{
errno = EAGAIN;
}
else if (err != SSL_ERROR_SYSCALL)
{
errno = EPROTO;
}
return -1;
}
return rv;
}
ssize_t ncrypt_write(struct ncrypt_entity * const ent, char const * const json_msg, size_t json_msg_len)
{
if (ent->ssl == NULL)
{
errno = EPROTO;
return -1;
}
int rv = SSL_write(ent->ssl, json_msg, json_msg_len);
if (rv <= 0)
{
int err = SSL_get_error(ent->ssl, rv);
if (err == SSL_ERROR_WANT_WRITE || err == SSL_ERROR_WANT_READ)
{
errno = EAGAIN;
}
else if (err != SSL_ERROR_SYSCALL)
{
errno = EPROTO;
}
return -1;
}
return rv;
}
void ncrypt_free_entity(struct ncrypt_entity * const ent)
{
SSL_free(ent->ssl);
ent->ssl = NULL;
}
void ncrypt_free_ctx(struct ncrypt_ctx * const ctx)
{
SSL_CTX_free(ctx->ssl_ctx);
ctx->ssl_ctx = NULL;
EVP_cleanup();
}

ncrypt.h Normal file

@@ -0,0 +1,73 @@
#ifndef NCRYPT_H
#define NCRYPT_H 1
#include <stdlib.h>
#define ncrypt_ctx(x) \
do \
{ \
(x)->ssl_ctx = NULL; \
} while (0)
#define ncrypt_entity(x) \
do \
{ \
(x)->ssl = NULL; \
(x)->handshake_done = 0; \
} while (0)
#define ncrypt_handshake_done(x) ((x)->handshake_done)
#define ncrypt_set_handshake(x) \
do \
{ \
(x)->handshake_done = 1; \
} while (0)
#define ncrypt_clear_handshake(x) \
do \
{ \
(x)->handshake_done = 0; \
} while (0)
enum
{
NCRYPT_SUCCESS = 0,
NCRYPT_NOT_INITIALIZED = -1,
NCRYPT_ALREADY_INITIALIZED = -2,
NCRYPT_NULL_PTR = -3,
NCRYPT_PEM_LOAD_FAILED = -4
};
struct ncrypt_ctx
{
void * ssl_ctx;
};
struct ncrypt_entity
{
void * ssl;
int handshake_done;
};
int ncrypt_init(void);
int ncrypt_init_client(struct ncrypt_ctx * const ctx,
char const * const ca_path,
char const * const privkey_pem_path,
char const * const pubkey_pem_path);
int ncrypt_init_server(struct ncrypt_ctx * const ctx,
char const * const ca_path,
char const * const privkey_pem_path,
char const * const pubkey_pem_path);
int ncrypt_on_connect(struct ncrypt_ctx * const ctx, int connect_fd, struct ncrypt_entity * const ent);
int ncrypt_on_accept(struct ncrypt_ctx * const ctx, int accept_fd, struct ncrypt_entity * const ent);
ssize_t ncrypt_read(struct ncrypt_entity * const ent, char * const json_msg, size_t json_msg_len);
ssize_t ncrypt_write(struct ncrypt_entity * const ent, char const * const json_msg, size_t json_msg_len);
void ncrypt_free_entity(struct ncrypt_entity * const ent);
void ncrypt_free_ctx(struct ncrypt_ctx * const ctx);
#endif

ndpid.conf.example Normal file

@@ -0,0 +1,102 @@
[general]
# Set the network interface from which packets are captured and processed.
# Leave it empty to let nDPId choose the default network interface.
#netif = eth0
# Set a Berkeley Packet Filter.
# This will work for libpcap as well as with PF_RING.
#bpf = udp or tcp
# Decapsulate Layer4 tunnel protocols.
# Supported protocols: GRE
#decode-tunnel = true
#pidfile = /tmp/ndpid.pid
#user = nobody
#group = daemon
#riskdomains = /path/to/libnDPI/example/risky_domains.txt
#protocols = /path/to/libnDPI/example/protos.txt
#categories = /path/to/libnDPI/example/categories.txt
#ja4 = /path/to/libnDPI/example/ja4_fingerprints.csv
#sha1 = /path/to/libnDPI/example/sha1_fingerprints.csv
# Collector endpoint as UNIX socket (usually nDPIsrvd)
#collector = /run/nDPIsrvd/collector
# Collector endpoint as UDP socket (usually a custom application)
#collector = 127.0.0.1:7777
# Set a name for this nDPId instance
#alias = myhostname
# Set an optional UUID for this instance
# If the value starts with a '/' or '.', it is interpreted as a path
# from which the uuid is read.
#uuid = 00000000-dead-c0de-0000-123456789abc
#uuid = ./path/to/some/file
#uuid = /proc/sys/kernel/random/uuid
#uuid = /sys/class/dmi/id/product_uuid
# Process only internal initial connections (src->dst)
#internal = true
# Process only external initial connections (dst->src)
#external = true
# Enable zLib compression of flow memory for long lasting flows
compression = true
# Enable "analyse" events, which can be used for machine learning.
# The daemon will generate some statistical values for every single flow.
# An "analyse" event is thrown after "max-packets-per-flow-to-analyse".
# Please note that the daemon will require a lot more heap memory for every flow.
#analysis = true
# Force poll() on systems that support epoll() as well
#poll = false
# Enable PF_RING packet capture instead of libpcap
#pfring = false
[tuning]
max-flows-per-thread = 2048
max-idle-flows-per-thread = 64
max-reader-threads = 10
daemon-status-interval = 600000000
#memory-profiling-log-interval = 5
compression-scan-interval = 20000000
compression-flow-inactivity = 30000000
flow-scan-interval = 10000000
generic-max-idle-time = 600000000
icmp-max-idle-time = 120000000
tcp-max-idle-time = 180000000
udp-max-idle-time = 7440000000
tcp-max-post-end-flow-time = 120000000
max-packets-per-flow-to-send = 15
max-packets-per-flow-to-process = 32
max-packets-per-flow-to-analyse = 32
error-event-threshold-n = 16
error-event-threshold-time = 10000000
# Please note that the following options are libnDPI related and can only be set via config file,
# not as command line parameters.
# See libnDPI/doc/configuration_parameters.md for detailed information.
[ndpi]
packets_limit_per_flow = 32
flow.direction_detection = enable
flow.track_payload = disable
tcp_ack_payload_heuristic = disable
fully_encrypted_heuristic = enable
libgcrypt.init = 1
dpi.compute_entropy = 1
fpc = disable
dpi.guess_on_giveup = 0x03
flow_risk_lists.load = 1
# Currently broken (upstream)
#flow_risk.crawler_bot.list.load = 1
log.level = 0
[protos]
tls.certificate_expiration_threshold = 7
tls.application_blocks_tracking = enable
stun.max_packets_extra_dissection = 8

ndpisrvd.conf.example Normal file

@@ -0,0 +1,31 @@
[general]
#pidfile = /tmp/ndpisrvd.pid
#user = nobody
#group = nogroup
# Collector listener as UNIX socket
#collector = /run/nDPIsrvd/collector
# Distributor listener as UNIX socket
#distributor-unix = /run/nDPIsrvd/distributor
# Distributor listener as IP socket
#distributor-in = 127.0.0.1:7000
# Change group of the collector socket
#collector-group = daemon
# Change group of the distributor socket
#distributor-group = staff
# Max (distributor) clients allowed to connect to nDPIsrvd
max-remote-descriptors = 128
# Additional output buffers, useful if a distributor sink's speed is unstable
max-write-buffers = 1024
# Fall back to blocking I/O if the output buffers are full
blocking-io-fallback = true
# Force poll() on systems that support epoll() as well
#poll = false

nio.h

@@ -3,6 +3,8 @@
#include <poll.h>
#define WARN_UNUSED __attribute__((__warn_unused_result__))
enum
{
NIO_SUCCESS = 0,
@@ -34,41 +36,55 @@ struct nio
void nio_init(struct nio * io);
WARN_UNUSED
int nio_use_poll(struct nio * io, nfds_t max_fds);
WARN_UNUSED
int nio_use_epoll(struct nio * io, int max_events);
WARN_UNUSED
int nio_add_fd(struct nio * io, int fd, int event_flags, void * ptr);
WARN_UNUSED
int nio_mod_fd(struct nio * io, int fd, int event_flags, void * ptr);
WARN_UNUSED
int nio_del_fd(struct nio * io, int fd);
WARN_UNUSED
int nio_run(struct nio * io, int timeout);
WARN_UNUSED
static inline int nio_get_nready(struct nio const * const io)
{
return io->nready;
}
WARN_UNUSED
int nio_check(struct nio * io, int index, int events);
WARN_UNUSED
int nio_is_valid(struct nio const * const io, int index);
WARN_UNUSED
int nio_get_fd(struct nio * io, int index);
WARN_UNUSED
void * nio_get_ptr(struct nio * io, int index);
WARN_UNUSED
static inline int nio_has_input(struct nio * io, int index)
{
return nio_check(io, index, NIO_EVENT_INPUT);
}
WARN_UNUSED
static inline int nio_can_output(struct nio * io, int index)
{
return nio_check(io, index, NIO_EVENT_OUTPUT);
}
WARN_UNUSED
static inline int nio_has_error(struct nio * io, int index)
{
return nio_check(io, index, NIO_EVENT_ERROR);

npfring.c Normal file

@@ -0,0 +1,149 @@
#include <pfring.h>
#include <sched.h>
#include "npfring.h"
#include "utils.h"
void npfring_print_version(FILE * const out)
{
uint32_t pfring_version;
pfring_version_noring(&pfring_version);
fprintf(out,
"PF_RING version: %d.%d.%d\n",
(pfring_version & 0xFFFF0000) >> 16,
(pfring_version & 0x0000FF00) >> 8,
(pfring_version & 0x000000FF));
}
int npfring_init(char const * device_name, uint32_t caplen, struct npfring * result)
{
pfring * pd = pfring_open(device_name, caplen, PF_RING_REENTRANT | PF_RING_PROMISC);
if (pd == NULL)
{
return -1;
}
pfring_set_application_name(pd, "nDPId");
logger_early(0, "PF_RING RX channels: %d", pfring_get_num_rx_channels(pd));
result->pfring_desc = pd;
int rc;
if ((rc = pfring_set_socket_mode(pd, recv_only_mode)) != 0)
{
logger_early(1, "pfring_set_socket_mode returned: %d", rc);
return -1;
}
return 0;
}
void npfring_close(struct npfring * npf)
{
if (npf->pfring_desc != NULL)
{
pfring_close(npf->pfring_desc);
npf->pfring_desc = NULL;
}
}
int npfring_set_bpf(struct npfring * npf, char const * bpf_filter)
{
char buf[BUFSIZ];
if (npf->pfring_desc == NULL)
{
return -1;
}
// pfring_set_bpf_filter expects a char*
snprintf(buf, sizeof(buf), "%s", bpf_filter);
return pfring_set_bpf_filter(npf->pfring_desc, buf);
}
int npfring_datalink(struct npfring * npf)
{
if (npf->pfring_desc != NULL)
{
return pfring_get_link_type(npf->pfring_desc);
}
return -1;
}
int npfring_enable(struct npfring * npf)
{
if (npf->pfring_desc == NULL)
{
return -1;
}
return pfring_enable_ring(npf->pfring_desc);
}
int npfring_get_selectable_fd(struct npfring * npf)
{
if (npf->pfring_desc == NULL)
{
return -1;
}
return pfring_get_selectable_fd(npf->pfring_desc);
}
int npfring_recv(struct npfring * npf, struct pcap_pkthdr * pcap_hdr)
{
int rc;
if (npf->pfring_desc == NULL || pcap_hdr == NULL)
{
return -1;
}
unsigned char * buf = &npf->pfring_buffer[0];
struct pfring_pkthdr pfring_pkthdr;
rc = pfring_recv(npf->pfring_desc, &buf, PFRING_BUFFER_SIZE, &pfring_pkthdr, 0);
if (rc > 0)
{
pcap_hdr->ts = pfring_pkthdr.ts;
pcap_hdr->caplen = pfring_pkthdr.caplen;
pcap_hdr->len = pfring_pkthdr.len;
}
else
{
pcap_hdr->ts.tv_sec = 0;
pcap_hdr->ts.tv_usec = 0;
pcap_hdr->caplen = 0;
pcap_hdr->len = 0;
}
return rc;
}
int npfring_stats(struct npfring * npf, struct npfring_stats * stats)
{
int rc;
if (npf->pfring_desc == NULL)
{
return -1;
}
pfring_stat pstats;
rc = pfring_stats(npf->pfring_desc, &pstats);
if (rc == 0)
{
stats->recv = pstats.recv;
stats->drop = pstats.drop;
stats->shunt = pstats.shunt;
}
else
{
stats->drop = 0;
stats->recv = 0;
stats->shunt = 0;
}
return rc;
}

npfring.h Normal file

@@ -0,0 +1,41 @@
#ifndef PFRING_H
#define PFRING_H 1
#include <pcap/pcap.h>
#include <stdint.h>
#include <stdio.h>
#include "config.h"
struct npfring
{
void * pfring_desc;
uint8_t pfring_buffer[PFRING_BUFFER_SIZE];
};
struct npfring_stats
{
uint64_t recv;
uint64_t drop;
uint64_t shunt;
};
void npfring_print_version(FILE * const out);
int npfring_init(char const * device_name, uint32_t caplen, struct npfring * result);
void npfring_close(struct npfring * npf);
int npfring_set_bpf(struct npfring * npf, char const * bpf_filter);
int npfring_datalink(struct npfring * npf);
int npfring_enable(struct npfring * npf);
int npfring_get_selectable_fd(struct npfring * npf);
int npfring_recv(struct npfring * npf, struct pcap_pkthdr * pf_hdr);
int npfring_stats(struct npfring * npf, struct npfring_stats * stats);
#endif

packages/debian/postrm Executable file

@@ -0,0 +1,11 @@
#!/bin/sh
if [ "$1" = "remove" -o "$1" = "purge" ]; then
rm -rf /run/nDPId /run/nDPIsrvd
if [ "$1" = "purge" ]; then
deluser ndpid || true
deluser ndpisrvd || true
delgroup ndpisrvd-distributor || true
fi
fi

packages/debian/preinst Executable file

@@ -0,0 +1,17 @@
#!/bin/sh
addgroup --system ndpisrvd-distributor
adduser --system --no-create-home --shell=/bin/false --group ndpisrvd
adduser --system --no-create-home --shell=/bin/false --group ndpid
cat <<EOF
****************************************************************************
* The user who wants to access DPI data needs access to: *
* /run/nDPIsrvd/distributor *
* *
* To make it accessible to [USER], type: *
* sudo usermod --append --groups ndpisrvd-distributor [USER] *
* *
* Please note that you might need to re-login to make changes take effect. *
****************************************************************************
EOF

packages/debian/prerm Executable file

@@ -0,0 +1,5 @@
#!/bin/sh
if [ "$1" = "remove" -o "$1" = "purge" ]; then
systemctl stop ndpisrvd.service
fi


@@ -1,18 +1,18 @@
From a9e21c707e5edaf1db14b3dd78d5cc397ebc624c Mon Sep 17 00:00:00 2001
From dd4b50f34008f636e31e0a0c6c05df6791767231 Mon Sep 17 00:00:00 2001
From: Toni Uhlig <matzeton@googlemail.com>
Date: Mon, 12 Jun 2023 19:54:31 +0200
Date: Mon, 9 Dec 2024 16:26:09 +0100
Subject: [PATCH] Allow in-source builds required for OpenWrt toolchain.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
---
CMakeLists.txt | 7 -------
1 file changed, 7 deletions(-)
CMakeLists.txt | 8 --------
1 file changed, 8 deletions(-)
diff --git a/CMakeLists.txt b/CMakeLists.txt
index 43f8f31b..b88f0f0c 100644
index 14a0ec829..5d7d45073 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -10,13 +10,6 @@ if(CMAKE_COMPILER_IS_GNUCXX)
@@ -10,14 +10,6 @@ if(CMAKE_COMPILER_IS_GNUCXX)
endif(CMAKE_COMPILER_IS_GNUCXX)
set(CMAKE_C_STANDARD 11)
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -std=c11 -D_DEFAULT_SOURCE=1 -D_GNU_SOURCE=1")
@@ -21,11 +21,12 @@ index 43f8f31b..b88f0f0c 100644
- "Please remove ${PROJECT_SOURCE_DIR}/CMakeCache.txt\n"
- "and\n"
- "${PROJECT_SOURCE_DIR}/CMakeFiles\n"
- "Create a build directory somewhere and run CMake again.")
- "Create a build directory somewhere and run CMake again.\n"
- "Or run: 'cmake -S ${PROJECT_SOURCE_DIR} -B ./your-custom-build-dir [CMAKE-OPTIONS]'")
-endif()
set(CMAKE_MODULE_PATH ${CMAKE_SOURCE_DIR}/cmake)
find_package(PkgConfig REQUIRED)
--
2.30.2
2.39.5


@@ -2,7 +2,7 @@ include $(TOPDIR)/rules.mk
PKG_NAME:=nDPId-testing
PKG_VERSION:=1.0
PKG_RELEASE:=$(AUTORELEASE)
PKG_RELEASE:=1
ifneq ($(wildcard /artifacts),)
PKG_DIRECTORY:=/artifacts
@@ -18,6 +18,7 @@ PKG_LICENSE_FILES:=COPYING
CMAKE_INSTALL:=1
include $(INCLUDE_DIR)/kernel.mk
include $(INCLUDE_DIR)/package.mk
include $(INCLUDE_DIR)/cmake.mk
@@ -25,13 +26,13 @@ define Package/nDPId-testing
TITLE:=nDPId is a tiny nDPI based daemon / toolkit (nDPId source repository)
SECTION:=net
CATEGORY:=Network
DEPENDS:=@!SMALL_FLASH @!LOW_MEMORY_FOOTPRINT +libpcap +zlib +LIBNDPI_GCRYPT:libgcrypt
DEPENDS:=@!SMALL_FLASH @!LOW_MEMORY_FOOTPRINT +libpcap +zlib +LIBNDPI_GCRYPT:libgcrypt +NDPID_TESTING_INFLUXDB:libcurl +NDPID_TESTING_PFRING:libpfring
URL:=http://github.com/lnslbrty/nDPId
endef
define Package/nDPId-testing/description
nDPId is a set of daemons and tools to capture, process and classify network flows.
Its only dependencies (besides a halfway modern C library and POSIX threads) are libnDPI (>= 3.6.0 or current github dev branch) and libpcap.
Its only dependencies (besides a halfway modern C library and POSIX threads) are libnDPI and libpcap.
endef
define Package/nDPId-testing/config
@@ -51,6 +52,21 @@ config NDPID_TESTING_LIBNDPI_COMMIT_HASH
Set the desired libnDPI git commit hash you want to link nDPId against.
Leave empty to use the dev branch.
Disabled by default.
config NDPID_TESTING_INFLUXDB
bool "nDPIsrvd-influxdb"
depends on PACKAGE_nDPId-testing
default n
help
An InfluxDB push daemon. It aggregates various statistics gathered from nDPId.
The results are sent to a specified InfluxDB endpoint.
config NDPID_TESTING_PFRING
bool "PF_RING support"
depends on PACKAGE_nDPId-testing
default n
help
Enable PF_RING support for faster packet capture.
endef
CMAKE_OPTIONS += -DBUILD_EXAMPLES=ON
@@ -65,6 +81,18 @@ CMAKE_OPTIONS += -DSTATIC_LIBNDPI_INSTALLDIR="$(PKG_BUILD_DIR)/libnDPI/install"
TARGET_CFLAGS += -DLIBNDPI_STATIC=1
TARGET_CFLAGS += -Werror
ifneq ($(CONFIG_NDPID_TESTING_PFRING),)
# FIXME: PFRING kernel include directory is hardcoded (not installed to linux header directory).
CMAKE_OPTIONS += -DENABLE_PFRING=ON \
-DPFRING_KERNEL_INC="$(KERNEL_BUILD_DIR)/PF_RING-8.4.0/kernel" \
-DPFRING_INSTALLDIR="$(STAGING_DIR)/usr" \
-DPFRING_LINK_STATIC=OFF
endif
ifneq ($(CONFIG_NDPID_TESTING_INFLUXDB),)
CMAKE_OPTIONS += -DENABLE_CURL=ON
endif
ifneq ($(CONFIG_LIBNDPI_GCRYPT),)
CMAKE_OPTIONS += -DNDPI_WITH_GCRYPT=ON
endif
@@ -105,6 +133,8 @@ endef
endif
define Build/Prepare
@rm -f '$(DL_DIR)/$(PKG_SOURCE)'
@rm -rf '$(PKG_BUILD_DIR)'/*
@echo 'tar: $(DL_DIR)/$(PKG_SOURCE)'
@echo 'pwd: $(shell pwd)'
@echo 'PKG_DIRECTORY=$(PKG_DIRECTORY)'
@@ -126,7 +156,6 @@ define Package/nDPId-testing/install
$(INSTALL_BIN) $(PKG_INSTALL_DIR)/usr/bin/nDPIsrvd-analysed $(1)/usr/bin/nDPIsrvd-testing-analysed
$(INSTALL_BIN) $(PKG_INSTALL_DIR)/usr/bin/nDPIsrvd-captured $(1)/usr/bin/nDPIsrvd-testing-captured
$(INSTALL_BIN) $(PKG_INSTALL_DIR)/usr/bin/nDPIsrvd-collectd $(1)/usr/bin/nDPIsrvd-testing-collectd
$(INSTALL_BIN) $(PKG_INSTALL_DIR)/usr/bin/nDPIsrvd-json-dump $(1)/usr/bin/nDPIsrvd-testing-json-dump
$(INSTALL_DIR) $(1)/etc/init.d/
$(INSTALL_BIN) $(PKG_NAME).init $(1)/etc/init.d/$(PKG_NAME)


@@ -33,7 +33,7 @@ config nDPId
#option udp_connect '127.0.0.1:31337'
#option proto_file ''
#option cat_file ''
#option ja3_file ''
#option ja4_file ''
#option ssl_file ''
#option alias ''
#option analysis 0


@@ -64,12 +64,14 @@ start_ndpid_instance() {
fi
args="$(print_arg_str "$cfg" 'interface' '-i')"
args="$args$(print_arg_bool "$cfg" 'use_pfring' '-r')"
args="$args$(print_arg_bool "$cfg" 'internal_only' '-I')"
args="$args$(print_arg_bool "$cfg" 'external_only' '-E')"
args="$args$(print_arg_str "$cfg" 'bpf_filter' '-B')"
args="$args$(print_arg_bool "$cfg" 'use_poll' '-e')"
args="$args$(print_arg_str "$cfg" 'proto_file' '-P')"
args="$args$(print_arg_str "$cfg" 'cat_file' '-C')"
args="$args$(print_arg_str "$cfg" 'ja3_file' '-J')"
args="$args$(print_arg_str "$cfg" 'ja4_file' '-J')"
args="$args$(print_arg_str "$cfg" 'ssl_file' '-S')"
args="$args$(print_arg_str "$cfg" 'alias' '-a')"
args="$args$(print_arg_bool "$cfg" 'analysis' '-A')"
@@ -116,7 +118,7 @@ validate_ndpid_section() {
'udp_connect:string' \
'proto_file:string' \
'cat_file:string' \
'ja3_file:string' \
'ja4_file:string' \
'ssl_file:string' \
'alias:string' \
'analysis:bool:0' \

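The `print_arg_str` / `print_arg_bool` helpers used in the init script above come from its OpenWrt glue code; a self-contained sketch of the pattern follows, with plain shell values standing in for the UCI lookups (names and values are illustrative):

```shell
#!/bin/sh
# Sketch of the argument-building pattern used by the init script.
# The real helpers read values via UCI; here the value is passed directly.
print_arg_str() {
    # $1 = option value, $2 = flag; emit " FLAG VALUE" only if a value is set
    if [ -n "$1" ]; then printf ' %s %s' "$2" "$1"; fi
}
print_arg_bool() {
    # $1 = 0/1 value, $2 = flag; emit the bare flag only when enabled
    if [ "$1" = "1" ]; then printf ' %s' "$2"; fi
}
args=""
args="$args$(print_arg_str "eth0" '-i')"
args="$args$(print_arg_bool "1" '-A')"
args="$args$(print_arg_str "" '-B')"   # unset option contributes nothing
printf '%s\n' "$args"
```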

@@ -0,0 +1,8 @@
#!/bin/sh
if [ "$1" = "0" ]; then
rm -rf /run/nDPId /run/nDPIsrvd
userdel ndpid || true
userdel ndpisrvd || true
groupdel ndpisrvd-distributor || true
fi
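These scriptlets follow RPM's convention: the first argument is the number of package instances that will remain after the transaction, so `0` means final removal and `1` means install/upgrade keeps an instance around. Note also that `==` inside `[ ]` is a bashism; under `#!/bin/sh` the portable operator is `=`. A minimal sketch of that check:

```shell
#!/bin/sh
# RPM scriptlet argument convention: $1 = instances remaining afterwards.
# 0 -> final erase, >= 1 -> an instance stays installed (e.g. upgrade).
is_final_removal() {
    [ "$1" = "0" ]   # portable sh equality, not the bashism '=='
}
if is_final_removal 0; then
    echo "cleanup: remove runtime dirs and users"
fi
```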


@@ -0,0 +1,19 @@
#!/bin/sh
if [ "$1" = "1" ]; then
groupadd --system ndpisrvd-distributor
adduser --system --no-create-home --shell=/bin/false --user-group ndpisrvd
adduser --system --no-create-home --shell=/bin/false --user-group ndpid
cat <<EOF
****************************************************************************
* The user who may want to access DPI data needs access to:  *
* /run/nDPIsrvd/distributor *
* *
* To make it accessible to [USER], type: *
* sudo usermod --append --groups ndpisrvd-distributor [USER] *
* *
* Please note that you might need to re-login to make changes take effect. *
****************************************************************************
EOF
fi


@@ -0,0 +1,5 @@
#!/bin/sh
if [ "$1" = "0" ]; then
systemctl stop ndpisrvd.service
fi


@@ -1,2 +0,0 @@
COLLECTOR_PATH=/var/run/ndpisrvd-collector
NDPID_ARGS="-A -z"


@@ -5,10 +5,10 @@ Requires=ndpisrvd.service
[Service]
Type=simple
ExecStart=@CMAKE_INSTALL_PREFIX@/sbin/nDPId $NDPID_ARGS -i %i -c ${COLLECTOR_PATH}
ExecStartPre=/bin/sh -c 'test -r "@CMAKE_INSTALL_PREFIX@/etc/nDPId/%i.conf" || cp -v "@CMAKE_INSTALL_PREFIX@/share/nDPId/ndpid.conf.example" "@CMAKE_INSTALL_PREFIX@/etc/nDPId/%i.conf"'
ExecStart=@CMAKE_INSTALL_PREFIX@/sbin/nDPId -f @CMAKE_INSTALL_PREFIX@/etc/nDPId/%i.conf -i %i -u ndpid -c /run/nDPIsrvd/collector
Restart=on-failure
Environment=COLLECTOR_PATH=/var/run/ndpisrvd-collector NDPID_ARGS="-A -z"
EnvironmentFile=@CMAKE_INSTALL_PREFIX@/etc/default/ndpid
Environment="NDPID_STARTED_BY_SYSTEMD="
[Install]
WantedBy=multi-user.target


@@ -4,11 +4,12 @@ After=network.target
[Service]
Type=simple
ExecStart=@CMAKE_INSTALL_PREFIX@/bin/nDPIsrvd -c ${COLLECTOR_PATH}
ExecStopPost=/bin/rm -f /var/run/ndpisrvd-collector
ExecStartPre=/bin/sh -c 'test -r "@CMAKE_INSTALL_PREFIX@/etc/nDPId/nDPIsrvd.conf" || cp -v "@CMAKE_INSTALL_PREFIX@/share/nDPId/ndpisrvd.conf.example" "@CMAKE_INSTALL_PREFIX@/etc/nDPId/nDPIsrvd.conf"'
ExecStartPre=/bin/sh -c 'mkdir -p /run/nDPIsrvd && chown root:root /run/nDPIsrvd && chmod 0775 /run/nDPIsrvd'
ExecStopPost=/bin/sh -c 'rm -f /run/nDPIsrvd/collector /run/nDPIsrvd/distributor'
ExecStart=@CMAKE_INSTALL_PREFIX@/bin/nDPIsrvd -f @CMAKE_INSTALL_PREFIX@/etc/nDPId/nDPIsrvd.conf -u ndpisrvd -c /run/nDPIsrvd/collector -s /run/nDPIsrvd/distributor -G ndpid:ndpisrvd-distributor
Restart=on-failure
Environment=COLLECTOR_PATH=/var/run/ndpisrvd-collector
EnvironmentFile=@CMAKE_INSTALL_PREFIX@/etc/default/ndpid
Environment="NDPID_STARTED_BY_SYSTEMD="
[Install]
WantedBy=multi-user.target


@@ -1,5 +1,5 @@
# schema
All schemas placed in here are nDPId exclusive, meaning that they do not necessarily represent a "real-world" JSON string received by e.g. `./example/py-json-stdout`.
All schemas placed in here are nDPId exclusive, meaning that they do not necessarily represent a "real-world" JSON message received by e.g. `./example/py-json-stdout`.
This is because libnDPI itself adds some JSON information to the serializer over which we have no control.
IMHO it makes no sense to include stuff here that is part of libnDPI.


@@ -9,7 +9,9 @@
"daemon_event_name",
"global_ts_usec",
"version",
"ndpi_version"
"ndpi_version",
"ndpi_api_version",
"size_per_flow"
],
"if": {
"properties": { "daemon_event_name": { "enum": [ "init", "reconnect" ] } }
@@ -21,12 +23,15 @@
"properties": { "daemon_event_name": { "enum": [ "status", "shutdown" ] } }
},
"then": {
"required": [ "packets-captured", "packets-processed", "total-skipped-flows", "total-l4-payload-len", "total-not-detected-flows", "total-guessed-flows", "total-detected-flows", "total-detection-updates", "total-updates", "current-active-flows", "total-active-flows", "total-idle-flows", "total-compressions", "total-compression-diff", "current-compression-diff", "total-events-serialized" ]
"required": [ "packets-captured", "packets-processed", "pfring_active", "pfring_recv", "pfring_drop", "pfring_shunt", "total-skipped-flows", "total-l4-payload-len", "total-not-detected-flows", "total-guessed-flows", "total-detected-flows", "total-detection-updates", "total-updates", "current-active-flows", "total-active-flows", "total-idle-flows", "total-compressions", "total-compression-diff", "current-compression-diff", "global-alloc-bytes", "global-alloc-count", "global-free-bytes", "global-free-count", "total-events-serialized" ]
},
"properties": {
"alias": {
"type": "string"
},
"uuid": {
"type": "string"
},
"source": {
"type": "string"
},
@@ -60,6 +65,15 @@
"ndpi_version": {
"type": "string"
},
"ndpi_api_version": {
"type": "number",
"minimum": 0
},
"size_per_flow": {
"type": "number",
"minimum": 1384,
"maximum": 1500
},
"max-flows-per-thread": {
"type": "number"
@@ -103,6 +117,21 @@
"type": "number",
"minimum": 0
},
"pfring_active": {
"type": "boolean"
},
"pfring_recv": {
"type": "number",
"minimum": 0
},
"pfring_drop": {
"type": "number",
"minimum": 0
},
"pfring_shunt": {
"type": "number",
"minimum": 0
},
"total-skipped-flows": {
"type": "number",
"minimum": 0
@@ -155,6 +184,22 @@
"type": "number",
"minimum": 0
},
"global-alloc-bytes": {
"type": "number",
"minimum": 0
},
"global-alloc-count": {
"type": "number",
"minimum": 0
},
"global-free-bytes": {
"type": "number",
"minimum": 0
},
"global-free-count": {
"type": "number",
"minimum": 0
},
"total-events-serialized": {
"type": "number",
"minimum": 1


@@ -74,6 +74,9 @@
"alias": {
"type": "string"
},
"uuid": {
"type": "string"
},
"source": {
"type": "string"
},
@@ -99,11 +102,10 @@
"Unknown packet type",
"Packet header invalid",
"IP4 packet too short",
"Packet smaller than IP4 header",
"nDPI IPv4/L4 payload detection failed",
"IP6 packet too short",
"Packet smaller than IP6 header",
"nDPI IPv6/L4 payload detection failed",
"Tunnel decoding failed",
"TCP packet smaller than expected",
"UDP packet smaller than expected",
"Captured packet size is smaller than expected packet size",

File diff suppressed because it is too large.


@@ -27,13 +27,16 @@
"required": [ "thread_id", "flow_id", "flow_packet_id", "flow_src_last_pkt_time", "flow_dst_last_pkt_time", "flow_idle_time" ]
},
"else": {
"not": { "required": [ "thread_id", "flow_id", "flow_packet_id", "flow_src_last_pkt_time", "flow_dst_last_pkt_time", "flow_idle_time" ] }
"not": { "required": [ "thread_id", "vlan_id", "flow_id", "flow_packet_id", "flow_src_last_pkt_time", "flow_dst_last_pkt_time", "flow_idle_time" ] }
},
"properties": {
"alias": {
"type": "string"
},
"uuid": {
"type": "string"
},
"source": {
"type": "string"
},
@@ -57,6 +60,11 @@
"packet-flow"
]
},
"vlan_id": {
"type": "number",
"minimum": 0,
"maximum": 4095
},
"flow_id": {
"type": "number",
"minimum": 1

scripts/build-sonarcloud.sh Executable file

@@ -0,0 +1,20 @@
#!/usr/bin/env bash
set -e
set -x
cd "$(dirname "${0}")/.."
BUILD_DIR=./build-sonarcloud
NUMBER_OF_PROCESSORS=$(nproc --all)
mkdir "${BUILD_DIR}"
cmake -S . -B "${BUILD_DIR}" \
-DCMAKE_EXPORT_COMPILE_COMMANDS=ON \
-DENABLE_COVERAGE=ON \
-DBUILD_NDPI=ON \
-DBUILD_EXAMPLES=ON \
-DENABLE_CURL=ON \
-DENABLE_ZLIB=ON \
-DNDPI_WITH_GCRYPT=OFF
cmake --build "${BUILD_DIR}" -j ${NUMBER_OF_PROCESSORS} \
--config Release

scripts/gen-cacerts.sh Executable file

@@ -0,0 +1,70 @@
#!/usr/bin/env bash
printf 'usage: %s [out-dir] [client-cname] [server-cname]\n' "${0}"
if [ -z "${1}" ]; then
OUT_DIR="$(dirname "${0}")/pki"
else
OUT_DIR="${1}"
fi
if [ -z "${2}" ]; then
CLIENT_CN="unknown"
else
CLIENT_CN="${2}"
fi
if [ -z "${3}" ]; then
SERVER_CN="unknown"
else
SERVER_CN="${3}"
fi
printf 'PKI Directory: %s\n' "${OUT_DIR}"
printf 'Client CName.: %s\n' "${CLIENT_CN}"
printf 'Server CName.: %s\n' "${SERVER_CN}"
set -e
set -x
OLDPWD="$(pwd)"
mkdir -p "${OUT_DIR}"
cd "${OUT_DIR}"
if [ ! -r ./ca.key -o ! -r ./ca.crt ]; then
printf '%s\n' '[*] Create CA...'
openssl genrsa -out ./ca.key 4096
openssl req -x509 -new -nodes -key ./ca.key -sha256 -days 3650 -out ./ca.crt -subj "/CN=nDPId Root CA"
fi
if [ ! -r ./server_${SERVER_CN}.key -o ! -r ./server_${SERVER_CN}.crt ]; then
printf '[*] Create Server Cert: %s\n' "${SERVER_CN}"
openssl genrsa -out ./server_${SERVER_CN}.key 2048
openssl req -new -key ./server_${SERVER_CN}.key -out ./server_${SERVER_CN}.csr -subj "/CN=${SERVER_CN}"
openssl x509 -req -in ./server_${SERVER_CN}.csr -CA ./ca.crt -CAkey ./ca.key -CAcreateserial \
-out ./server_${SERVER_CN}.crt -days 825 -sha256
fi
if [ ! -r ./client_${CLIENT_CN}.key -o ! -r ./client_${CLIENT_CN}.crt ]; then
printf '[*] Create Client Cert: %s\n' "${CLIENT_CN}"
openssl genrsa -out ./client_${CLIENT_CN}.key 2048
openssl req -new -key ./client_${CLIENT_CN}.key -out ./client_${CLIENT_CN}.csr -subj "/CN=${CLIENT_CN}"
openssl x509 -req -in ./client_${CLIENT_CN}.csr -CA ./ca.crt -CAkey ./ca.key -CAcreateserial \
-out ./client_${CLIENT_CN}.crt -days 825 -sha256
fi
printf '%s\n' '[*] Done'
cd "${OLDPWD}"
set +x
printf '%s\n' 'To test the certs you may use OpenSSL and start a client/server with:'
printf 'openssl s_server -accept %s -cert %s -key %s -CAfile %s -Verify 1 -verify_return_error -tls1_3\n' \
"7777" \
"${OUT_DIR}/server_${SERVER_CN}.crt" "${OUT_DIR}/server_${SERVER_CN}.key" \
"${OUT_DIR}/ca.crt"
printf 'openssl s_client -connect 127.0.0.1:%s -cert %s -key %s -CAfile %s -verify_return_error -tls1_3\n' \
"7777" \
"${OUT_DIR}/client_${CLIENT_CN}.crt" "${OUT_DIR}/client_${CLIENT_CN}.key" \
"${OUT_DIR}/ca.crt"


@@ -6,6 +6,7 @@ GIT_EXEC="$(command -v git || printf '%s' "")"
WGET_EXEC="$(command -v wget || printf '%s' "")"
UNZIP_EXEC="$(command -v unzip || printf '%s' "")"
MAKE_EXEC="$(command -v make || printf '%s' "")"
FLOCK_EXEC="$(command -v flock || printf '%s' "")"
if [ -z "${NDPI_COMMIT_HASH}" ]; then
NDPI_COMMIT_HASH="dev"
@@ -14,15 +15,15 @@ else
GITHUB_FALLBACK_URL="https://github.com/ntop/nDPI/archive/${NDPI_COMMIT_HASH}.zip"
fi
if [ -z "${GIT_EXEC}" -o -z "${WGET_EXEC}" -o -z "${UNZIP_EXEC}" -o -z "${MAKE_EXEC}" ]; then
printf '%s\n' "Required Executables missing: git, wget, unzip, make" >&2
if [ -z "${GIT_EXEC}" -o -z "${WGET_EXEC}" -o -z "${UNZIP_EXEC}" -o -z "${MAKE_EXEC}" -o -z "${FLOCK_EXEC}" ]; then
printf '%s\n' "Required Executables missing: git, wget, unzip, make, flock" >&2
exit 1
fi
LOCKFILE="$(realpath "${0}").lock"
touch "${LOCKFILE}"
exec 42< "${LOCKFILE}"
flock -x -n 42 || {
${FLOCK_EXEC} -x -n 42 || {
printf '%s\n' "Could not aquire file lock for ${0}. Already running instance?" >&2;
exit 1;
}
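The locking above binds descriptor 42 to the lock file and then asks `flock(1)` for a non-blocking exclusive lock on that descriptor, failing fast if another instance already holds it. A standalone sketch of the same idiom, assuming util-linux `flock` is installed (the fd number and lock path here are illustrative):

```shell
#!/bin/sh
# Bind a descriptor to a lock file, then try a non-blocking exclusive lock.
LOCKFILE="${TMPDIR:-/tmp}/flock-demo.$$.lock"   # illustrative path
touch "${LOCKFILE}"
exec 9< "${LOCKFILE}"
if flock -x -n 9; then
    echo "lock acquired"
else
    echo "could not acquire file lock, already running instance?" >&2
    exit 1
fi
# While fd 9 stays open, no other process can take the exclusive lock.
```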
@@ -31,11 +32,7 @@ if [ ! -z "${CC}" ]; then
HOST_TRIPLET="$(${CC} ${CFLAGS} -dumpmachine)"
fi
if [ ! -z "${MAKEFLAGS}" ]; then
case "$(uname -s)" in
Linux*) MAKEFLAGS="-${MAKEFLAGS}" ;;
esac
fi
MAKEFLAGS="-${MAKEFLAGS}"
cat <<EOF
------ environment variables ------
@@ -45,6 +42,8 @@ CXX=${CXX:-}
AR=${AR:-}
RANLIB=${RANLIB:-}
PKG_CONFIG=${PKG_CONFIG:-}
NDPI_CFLAGS=${NDPI_CFLAGS:-}
NDPI_LDFLAGS=${NDPI_LDFLAGS:-}
CFLAGS=${CFLAGS:-}
LDFLAGS=${LDFLAGS:-}
ADDITIONAL_ARGS=${ADDITIONAL_ARGS:-}
@@ -102,7 +101,8 @@ test ! -r Makefile || { make distclean || true; }
DEST_INSTALL="${DEST_INSTALL:-$(realpath ./install)}"
MAKE_PROGRAM="${MAKE_PROGRAM:-make -j4}"
HOST_ARG="--host=${HOST_TRIPLET}"
./autogen.sh --enable-option-checking=fatal \
./autogen.sh
./configure --enable-option-checking=fatal \
--prefix="/" \
--with-only-libndpi ${HOST_ARG} ${ADDITIONAL_ARGS} || { cat config.log | grep -v '^|'; false; }
${MAKE_PROGRAM} ${MAKEFLAGS} install DESTDIR="${DEST_INSTALL}"

Some files were not shown because too many files have changed in this diff.