210 Commits

Author SHA1 Message Date
lns
9f2bf9fdc3 Removed TLS proxy capabilities as they complicate the code and make no sense.
* nDPIsrvd-cached will do the same job in the future

Signed-off-by: lns <matzeton@googlemail.com>
2023-01-17 22:03:03 +01:00
Toni Uhlig
ac4c7390a3 Added TLS proxy support.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2023-01-17 22:03:00 +01:00
Toni Uhlig
5e313f43f9 Small CI/CD/nDPIsrvd.py improvements.
* Updated examples/js-rt-analyzer and examples/js-rt-analyzer-frontend

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2023-01-17 21:01:47 +01:00
Toni Uhlig
a3d20c17d1 Improved collectd risk processing to be in sync with libnDPI risks.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2023-01-11 06:28:10 +01:00
Toni Uhlig
c0717c7e6c Gitlab-CI: Upload coverage report.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2023-01-11 06:14:44 +01:00
Toni Uhlig
470ed99eaf Added https://gitlab.com/verzulli/ndpid-rt-analyzer-frontend.git example.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2023-01-11 06:13:51 +01:00
Toni Uhlig
ac3757a367 Merge branch 'main' of github.com:utoni/nDPId 2023-01-10 10:13:57 +01:00
Toni Uhlig
07efb1efd4 Added distclean-libnDPI target to CMake.
* Gitlab-CI: Additional job for debian packages
 * Install Python examples iff BUILD_EXAMPLES=ON

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2023-01-10 01:03:34 +01:00
Macauley Cheng
afe873c0de Delete docker-compose.yml 2023-01-09 21:13:53 +01:00
macauley_cheng
3dcc13b052 Add Docker-related file 2023-01-09 21:13:53 +01:00
Toni Uhlig
464450486b bump libnDPI to a944514ddec73f79704f55aab1423e39f4ce7a03
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2023-01-09 11:24:40 +01:00
Toni Uhlig
655393e953 nDPId: Fixed base64encode bug which led to invalid base64 strings.
* py-semantic-validation: Decode base64 raw packet data as well
 * nDPIsrvd.py: Added PACKETS_PLEN_MAX
 * nDPIsrvd.py: Improved JSON parse error/exception handling

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2023-01-09 01:43:24 +01:00
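The "improved JSON parse error/exception handling" mentioned above can be sketched roughly as follows. This is an illustrative Python sketch only — the function name and exact behavior are assumptions, not the actual nDPIsrvd.py code:

```python
import json

def parse_json_line(line: bytes):
    """Parse one newline-delimited JSON record from the nDPIsrvd stream.

    Returns the decoded dict, or None on malformed input instead of raising,
    so one bad record cannot take down the whole consumer.
    (Hypothetical helper for illustration.)"""
    try:
        obj = json.loads(line.decode('utf-8', errors='strict'))
    except (UnicodeDecodeError, json.JSONDecodeError):
        return None
    # nDPId events are JSON objects; anything else is treated as invalid here.
    return obj if isinstance(obj, dict) else None
```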
Toni Uhlig
e9443d7618 Fix libnDPI build script.
* added ntop Webinar 2022 reference

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-12-26 19:35:12 +01:00
Toni Uhlig
4e19ab929c py-machine-learning / sklearn-random-forest: Quality of Life improvements
* fixed libnDPI submodule build on some platforms

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-12-22 22:13:08 +01:00
Toni Uhlig
c5930e3510 Add collectd statistics diff test.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-12-06 19:51:53 +01:00
Toni Uhlig
d21a38cf02 Limit the size of base64 serialized raw packet data (8192 bytes per packet).
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-12-06 12:52:52 +01:00
Toni Uhlig
ced5f5d4b4 py-flow-info: ignore certain json lines that match various criteria
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-12-03 01:23:26 +01:00
Toni Uhlig
60741d5649 Strace support for diff tests.
* tiny README update

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-12-03 01:21:20 +01:00
Toni Uhlig
8b81b170d3 Updated Github/Gitlab CI
* instrument Clang's thread sanitizer for tests

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-12-02 23:24:06 +01:00
Toni Uhlig
2c95b31210 nDPId-test: Reworked I/O handling to prevent some endless loop scenarios. Fixed a race condition in the memory wrapper as well.
* nDPId: Instead of sending too long JSON strings, log an error and some parts.

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-12-02 22:11:57 +01:00
Toni Uhlig
532961af33 Fixed MD format issues.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-11-21 11:34:10 +01:00
Toni Uhlig
64f6abfdbe Unified nDPId/nDPIsrvd command line argument storage.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-11-21 11:26:05 +01:00
Toni Uhlig
77ee336cc9 Added Network Buffer Size CI Check.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-11-20 22:42:06 +01:00
Toni Uhlig
9b78939096 Updated README's.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-11-20 22:25:18 +01:00
Toni Uhlig
57c5d8532b Test for diffs in the flow-analyse CSV generator daemon.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-11-16 23:06:37 +01:00
Toni Uhlig
869d4de271 Improved make daemon / daemon.sh to accept nDPId / nDPIsrvd arguments via env.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-11-15 07:05:29 +01:00
Toni Uhlig
ce567ae5b7 Improved the point in time at which the raw packet base64 data is appended to the serializer.
* nDPId-test: Increased the max-packets-per-flow-to-send from 3 to 5.
   This is quite useful for TCP as the first 3 packets are usually part of the three-way handshake.

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-11-15 06:25:16 +01:00
Toni Uhlig
36e428fc89 Sync unit tests.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-11-13 16:19:07 +01:00
Toni Uhlig
ea1698504c nDPIsrvd: Provide a workaround for changing user/group.
* nDPId/nDPIsrvd/c-examples: Parameter parsing needs to be improved
                              if `strdup()` in combination with static strings is used.
 * Other non-critical fixes.

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-11-13 11:21:55 +01:00
Toni Uhlig
bc346a28f4 nDPId: Fixed base64 encoding issue.
* The issue can result in an error message like:
   `Base64 encoding failed with: Buffer too small.`
   and also in too big JSON strings generated by nDPId
   which nDPIsrvd does not like as its length is
   greater than `NETWORK_BUFFER_MAX_SIZE`.
 * nDPId will now obey `NETWORK_BUFFER_MAX_SIZE` while
   trying to base64 encode raw packet data.

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-11-13 09:26:04 +01:00
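The constraint described in this commit boils down to simple base64 arithmetic: encoding n input bytes yields 4 * ceil(n / 3) output bytes, so the largest raw payload whose encoding still fits a budget of B bytes is (B // 4) * 3. A hedged sketch of that calculation (illustrative Python, not nDPId's actual C implementation; the truncation helper is hypothetical):

```python
import base64

def max_raw_len(budget: int) -> int:
    # base64 expands n input bytes to 4 * ceil(n / 3) output bytes,
    # so the largest n whose encoding still fits is (budget // 4) * 3.
    return (budget // 4) * 3

def encode_capped(data: bytes, budget: int) -> bytes:
    """Base64-encode raw packet data, truncating it first so the
    encoded output never exceeds the given byte budget."""
    return base64.b64encode(data[:max_raw_len(budget)])
```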
Toni Uhlig
e629dd59cd nDPIsrvd.h: Provide two additional convenient API functions.
* nDPIsrvd_json_buffer_string
 * nDPIsrvd_json_buffer_length

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-11-06 13:19:29 +01:00
Toni Uhlig
7515c8aeec Experimental systemd support.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-11-06 12:58:55 +01:00
Toni Uhlig
25f4ef74ac Improved examples.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-11-02 00:01:57 +01:00
Toni Uhlig
d55e397929 bump libnDPI to db9f6ec1b4018164e5bff05f115dc60711bb711b
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-10-31 13:53:02 +01:00
Toni Uhlig
d3f99f21e6 Create pidfile iff daemon mode enabled.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-10-31 12:45:49 +01:00
Toni Uhlig
c63cbec26d Improved nDPIsrvd-collectd statistics.
* Improved RRD-Graph generation script and static WWW html files.

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-10-31 12:45:15 +01:00
Toni Uhlig
805aef5de8 Increased network buffer size to 33792 bytes.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-10-30 22:13:07 +01:00
Toni Uhlig
2d14509f04 nDPId-test: add buffer test
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-10-30 20:12:17 +01:00
Toni Uhlig
916d2df6ea nDPId-test: Fixed thread sync/lock issue.
* rarely happens in CI

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-10-22 01:45:14 +02:00
Toni Uhlig
46c8fc5219 Merge branch 'main' of github.com:utoni/nDPId 2022-10-20 16:13:27 +02:00
Toni Uhlig
e5f4af4890 Special Thanks to Damiano Verzulli (@verzulli).
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-10-20 16:12:40 +02:00
lns
cd22d56056 Add ArchLinux PKGBUILD.
Signed-off-by: lns <matzeton@googlemail.com>
2022-10-19 18:40:52 +02:00
Toni Uhlig
49352698a0 nDPId: Added error event threshold to prevent event spamming which may be abused.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-10-17 06:36:30 +02:00
Toni Uhlig
6292102f93 py-machine-learning: load and save trained models
* added link to a pre-trained model

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-10-15 11:59:39 +02:00
Toni Uhlig
80f8448834 Removed discontinued examples from the ReadMe.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-10-13 16:47:03 +02:00
Toni Uhlig
9bf4f31418 Removed example py-ja3-checker.
* renamed sklearn-ml.py to sklearn-random-forest.py (there is more to come!)
 * force all protocol classes to lower case

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-10-13 00:12:22 +02:00
Toni Uhlig
4069816d69 Improved py-machine-learning example.
* colorize/prettify output
 * added sklearn controls/tuning options
 * disable IAT/Packet-Length features as default

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-10-11 20:20:01 +02:00
Toni Uhlig
bb633bde22 daemon.sh: fixed race condition
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-10-10 17:54:49 +02:00
Toni Uhlig
20fc74f527 Improved py-machine-learning example.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-10-10 16:44:12 +02:00
Toni Uhlig
2ede930eec daemon.sh: cat nDPId / nDPIsrvd log on failure
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-10-09 19:11:37 +02:00
Toni Uhlig
4654faf381 Improved py-machine-learning example.
* c-analysed: fixed quoting bug
 * nDPId: fixed invalid iat storing/serialisation
 * nDPId: free data analysis after event was sent

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
Signed-off-by: lns <matzeton@googlemail.com>
2022-10-09 18:31:45 +02:00
Toni Uhlig
b7a17d62c7 Improved OpenWrt UCI/Initscript
* c-analysed: chuser()/chgroup()

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-10-06 06:54:01 +02:00
Toni Uhlig
ac46f3841f Fixed heap overflow on shutdown caused by missing remotes size/used reset.
* introduced with 22a8d04c74

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-10-05 00:14:46 +02:00
Toni Uhlig
be3f466373 OpenWrt UCI/Initscript
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-10-03 15:57:16 +02:00
lns
b7d8564b65 Generate code coverage w/o external shell script, use CMake.
* upload codecov/dist artifacts

Signed-off-by: lns <matzeton@googlemail.com>
2022-10-03 15:45:17 +02:00
lns
49ea4f8474 Small fixes.
Signed-off-by: lns <matzeton@googlemail.com>
2022-10-01 22:37:25 +02:00
Toni Uhlig
b6060b897e c-analysed: improved feature extraction from "analyse" events
* c-captured: update detected risks on "detection-update" events
 * c-collectd: added missing flow breed
 * c-collectd: PUTVAL macros are more flexible now

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-10-01 18:01:56 +02:00
Toni Uhlig
14f6b87551 Added nDPIsrvd-analysed to generate CSV files from analyse events.
* nDPIsrvd.h: iterate over JSON arrays
 * nDPId: calculate l3 payload packet entropies for analysis

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-09-30 19:28:49 +02:00
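The per-packet payload entropy mentioned above is typically a Shannon entropy over byte values (0.0 for constant data, up to 8.0 bits per byte for uniformly random data). An illustrative sketch of that feature — a hypothetical helper, not nDPId's C code:

```python
import math
from collections import Counter

def payload_entropy(payload: bytes) -> float:
    """Shannon entropy of a payload in bits per byte (range 0.0 .. 8.0)."""
    if not payload:
        return 0.0
    total = len(payload)
    counts = Counter(payload)
    # H = -sum(p * log2(p)) over the observed byte-value distribution.
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```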
Toni Uhlig
74f71643da nDPId-test: Force collector blocking mode.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-09-25 16:24:05 +02:00
Toni Uhlig
2103ee0811 Refactored client distributor C API.
* Still not perfect, but the code before was not even able to deal with JSON arrays.
   Use common, descriptive function names for all functions in nDPIsrvd.h
 * Provide a more or less generic and easy extendable JSON walk function.
 * Modified C examples to align with the changed C API.
 * c-collectd: Reduced lots of code duplication by providing mapping tables.
 * nDPId: IAT array requires one slot less (first packet has always an IAT of 0).

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-09-25 00:54:39 +02:00
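The IAT remark above ("one slot less") follows directly from the definition: the first packet has no predecessor, so n packets produce only n - 1 inter-arrival times. A minimal sketch (hypothetical helper, for illustration only):

```python
def inter_arrival_times(timestamps_us):
    """Inter-arrival times (IATs) between consecutive packet timestamps.

    The first packet has no predecessor (its IAT is by definition 0 and
    need not be stored), hence the result has one element fewer than
    the number of packets."""
    return [b - a for a, b in zip(timestamps_us, timestamps_us[1:])]
```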
Toni Uhlig
36f1786bde nDPIsrvd.h: Fixed bug during token parsing/hashing. Do not hash array contents.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-09-23 00:13:19 +02:00
Toni Uhlig
9a28475bba Improved flow analyse event:
* store packet directions
 * merged direction based IATs
 * merged direction based PKTLENs

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-09-22 19:07:08 +02:00
Toni Uhlig
28971cd764 flow-info.py: Command line arguments --no-color, --no-statusbar (both useful for tests/CI) and --print-analyse-results.
* run_tests.sh: Use flow-info.py for additional DIFF tests.

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-09-22 08:00:21 +02:00
Toni Uhlig
3c7bd6a4ba Merge branch 'main' of github.com:utoni/nDPId 2022-09-19 19:39:54 +02:00
Toni Uhlig
08f263e409 nDPId: Reduced flow-updates for TCP flows to 1/4 of the timeout value.
* nDPId: Fixed broken validation tests.
 * nDPId: Removed TICK_RESOLUTION, not required anymore.
 * c-collectd: Improved total layer4 payload calculation/update handling.
 * c-collectd: Updated RRD Graph script according to total layer4 payload changes.
 * py-flow-info.py: Fixed several bugs and syntax errors.
 * Python scripts: Added dirname(argv[0]) as search path for nDPIsrvd.py.
 * nDPIsrvd&nDPId-test: Fixed missing EPOLLERR check.

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-09-19 19:39:49 +02:00
Damiano Verzulli
ab7f7d05f3 Improve README
- links to the already-existing JSON schemas have been added
- a graphical schema detailing the flow-events timeline has
  been added in both PNG and source Drawio formats.
  A link to the PNG has been included in the README
2022-09-19 17:23:11 +02:00
Toni Uhlig
015a739efd Added layer4 payload length bins.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-09-19 10:14:37 +02:00
Toni Uhlig
31715295d9 bump libnDPI to 174cd739dbb1358ab012c4779e42e0221bef835c
* ReadMe stuff
 * OpenWrt Makefile: set NEED_LINKING_AGAINST_LIBM=ON

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-09-18 13:34:43 +02:00
Toni Uhlig
06bce24c0e Add -Werror to OpenWrt package TARGET_CFLAGS.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-09-17 18:53:17 +02:00
Toni Uhlig
efaa76e978 Provide thread sync via locking on architectures that do not support Compare&Swap.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-09-17 18:27:17 +02:00
Toni Uhlig
b3e9af495c Add OpenWrt CI via Github Actions.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-09-17 10:31:26 +02:00
lns
b8cfe1d6d3 Fixed last pkt time.
Signed-off-by: lns <matzeton@googlemail.com>
2022-09-14 11:22:41 +02:00
Toni Uhlig
d4633c1192 New flow event: 'analysis'.
* The goal was to provide a separate event for extracted features that are not required
   and are only useful for a few (e.g. someone who wants to do ML).
 * Increased network buffer size to 32kB (8192 * 4).
 * Switched timestamp precision from ms to us for *ALL* timestamps.

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-09-13 22:05:08 +02:00
Toni Uhlig
aca1615dc1 OpenWrt packaging support.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-09-13 13:53:48 +02:00
Toni Uhlig
94aa02b298 nDPIsrvd-collectd: Stdout should be unbuffered.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-09-12 13:32:50 +02:00
Toni Uhlig
20ced3e636 nDPIsrvd-collectd: RRD Graph generation script and a basic static HTML5 website for viewing the generated image files.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-09-12 13:23:50 +02:00
Toni Uhlig
83409e5b79 Use CMake XCompile and collect host-triplet from ${CC}.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-09-11 16:50:56 +02:00
Toni Uhlig
3bc6627dcc nDPId: Removed thread_id nonsense as it does not provide any useful information and is not portable at all, not even on Linux systems ..
* nDPId: Removed blocking I/O warning, which caused log spam.

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-09-10 23:11:03 +02:00
Toni Uhlig
7594180301 include fix
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-09-10 22:55:06 +02:00
Toni Uhlig
a992c79ab6 Fixed compilation warnings on linux32 platforms.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-09-10 22:45:12 +02:00
Toni Uhlig
6fe5d1da69 Do not use pthread_t as numeric value. Some systems define pthread_t as struct *
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-09-10 22:34:00 +02:00
Toni Uhlig
38c71af2f4 nDPIsrvd: Fixed NUL pointer deref during logging attempt.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-09-10 16:46:25 +02:00
Toni Uhlig
ac2e5ed796 CI: fix minimum supported libnDPI version
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-09-06 16:56:35 +02:00
Toni Uhlig
f9bd7d29ce Bump libnDPI to 37f918322c0a489b5143a987c8f1a44a6f78a6f3 and updated flow json schema file.
* export env vars AR / CMAKE_C_COMPILER_AR and RANLIB / CMAKE_C_COMPILER_RANLIB while building libnDPI
 * nDPId checks the API version during startup (macro vs. function call) and prints a warning if they differ

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-09-06 14:50:46 +02:00
Toni Uhlig
c5c7d83c97 Added https://gitlab.com/verzulli/ndpid-rt-analyzer.git to examples.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-09-06 11:08:01 +02:00
Toni Uhlig
70f517b040 Merge branch 'main' of github.com:utoni/nDPId 2022-09-04 17:26:21 +02:00
Toni Uhlig
dcf78ad3ed Disable timestamp generation in nDPIsrvd-collectd as default.
* collectd's rrdtool write plugin silently fails with those timestamps (reason unknown)

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-09-04 17:24:10 +02:00
lns
d646ec5ab4 nDPId: Fixed fcntl() issue; invalid fcntl() set after a blocking-write.
* nDPId: improved collector socket error messages on connect/write/etc. failures
 * reverted `netcat` parts of the README

Signed-off-by: lns <matzeton@googlemail.com>
2022-08-29 15:29:07 +02:00
lns
dea30501a4 Add documentation about events and flow states.
Signed-off-by: lns <matzeton@googlemail.com>
2022-08-27 14:18:59 +02:00
lns
d9fadae718 nDPId: improved error messages if UNIX/UDP endpoint refuses connections/datagrams
Signed-off-by: lns <matzeton@googlemail.com>
2022-08-27 14:18:59 +02:00
Toni Uhlig
5e09a00062 nDPId: support for custom UDP endpoints
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-08-27 14:18:59 +02:00
lns
d0b0a50609 nDPId: improved error messages if UNIX/UDP endpoint refuses connections/datagrams
Signed-off-by: lns <matzeton@googlemail.com>
2022-08-27 13:04:17 +02:00
Toni Uhlig
e2e7c82d7f nDPId: support for custom UDP endpoints
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-08-27 13:04:17 +02:00
Toni Uhlig
0fd59f060e Split `*_l4_payload_len' into `*_src_l4_payload_len' and `*_dst_l4_payload_len'.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-08-15 22:55:19 +02:00
lns
905545487d Split `flow_packets_processed' into `flow_src_packets_processed' and `flow_dst_packets_processed'.
* no use for `flow_avg_l4_payload_len' -> removed
 * test/run_tests.sh does not fail if git-worktree's are used

Signed-off-by: lns <matzeton@googlemail.com>
2022-08-15 18:36:49 +02:00
Toni Uhlig
2cb2c86cb5 c-collectd: fixed incorrect PUTVAL
* get rid of types.db

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-08-15 16:42:59 +02:00
Toni
8c092dacfe Merge pull request #2 from verzulli/main
Improved README.
2022-08-12 18:10:34 +02:00
Damiano Verzulli
96b9129918 Improve README
- slightly improve the README to better showcase the streaming
  capability of `nDPId`, regardless of `nDPIsrvd`
- add a screencast showing the install steps and standalone
  `nDPId` usage (with ncat as unix-socket listener)
- add "build" to .gitignore
2022-08-12 11:10:45 +02:00
lns
ae37631e23 Do not SIGSEGV if a subopt has no value.
Signed-off-by: lns <matzeton@googlemail.com>
2022-08-08 09:33:26 +02:00
Toni Uhlig
ef94b83a62 Replaced outdated nDPI version info with the correct one.
* add CI job to verify the lowest known-to-work-libnDPI-version

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-07-14 04:09:15 +02:00
Toni Uhlig
fc442180da c-collectd: fixed possible undefined behavior
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-07-14 03:48:57 +02:00
Toni Uhlig
a606586a32 bump libnDPI to 7c19de49047a5731f3107ff17854e9afe839cc61
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-07-14 03:48:06 +02:00
Toni Uhlig
4a397ac646 Github Actions: Renamed branch 'master' to 'main'.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-07-12 16:06:05 +02:00
Toni Uhlig
28602ca095 README update
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-07-12 16:03:51 +02:00
Toni Uhlig
b5d4da8793 bump libnDPI to 8f6a006e36eef0ae386f7e663d3ebecfad6a2dc9
* try to use the same wording wherever possible, e.g.
   renamed workflow->total_l4_data_len to workflow->total_l4_payload_len

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-07-01 13:50:53 +02:00
Toni Uhlig
a80b6d7271 bump libnDPI to c287eb835b537ce64d9293a52ca13e670b6d3b0d
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-06-19 23:29:26 +02:00
lns
cdaeb1632e py-flow-dashboard: Improved graph axis scaling.
Signed-off-by: lns <matzeton@googlemail.com>
2022-06-16 11:37:33 +02:00
lns
2a8883a96e CMake: do not add /usr/include/ndpi to include dirs if BUILD_NDPI or STATIC_LIBNDPI_INSTALLDIR used.
* c-collectd: fixed memory leak on failure
 * py-flow-info.py: fancy spinners and stats counting improved

Signed-off-by: lns <matzeton@googlemail.com>
2022-06-10 14:34:30 +02:00
Toni Uhlig
664a8a077d Merge branch 'master' of github.com:lnslbrty/nDPId 2022-06-07 18:01:40 +02:00
Toni Uhlig
77a87254b6 nDPIsrvd.py: Throw SocketTimeout Exception to catch both timeout exceptions different Python versions can throw.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-06-07 17:59:47 +02:00
lns
3caf7727fd bump libnDPI to 0b3f8ed849cdf9971224c49a3958f0904a2bbbb5
* README/nDPId: fixed typo

Signed-off-by: lns <matzeton@googlemail.com>
2022-06-06 00:34:13 +02:00
lns
f5b0021413 README update
Signed-off-by: lns <matzeton@googlemail.com>
2022-05-31 23:57:22 +02:00
Toni
73ca7fff3c Updated CI badges. 2022-05-08 21:41:01 +02:00
lns
4fde63b5c2 Small fixes.
Signed-off-by: lns <matzeton@googlemail.com>
2022-04-26 11:44:31 +02:00
lns
0385653023 Github Actions: Build nDPId against lowest supported libnDPI release (4.2)
Signed-off-by: lns <matzeton@googlemail.com>
2022-04-25 00:45:54 +02:00
lns
a46fc4153d nDPId: Merged nDPId_flow_(info|finished) into nDPId_flow
* nDPIsrvd: Fixed buffer allocation error due to missing memset() on disconnect
 * nDPIsrvd: Removed unused struct members

Signed-off-by: lns <matzeton@googlemail.com>
2022-04-24 23:49:57 +02:00
lns
22a8d04c74 Added proper DLT_RAW dissection for IPv4 and IPv6.
* nDPId: Improved TCP timeout handling if FIN/RST is seen,
   which caused Midstream TCP flows when there shouldn't be any.
 * nDPIsrvd: Unified remote descriptor resource cleanup on disconnects/shutdown.
 * nDPIsrvd: Added additional error messages for remote descriptors.
 * py-flow-info: Better daemon status message printing.

Signed-off-by: lns <matzeton@googlemail.com>
2022-04-24 15:42:28 +02:00
lns
9aeff586bd bump libnDPI to 8b2c9860be8b0663bfe9fc3b6defc041bb90e5b2
Signed-off-by: lns <matzeton@googlemail.com>
2022-04-18 19:26:27 +02:00
lns
c7bf94e9f1 nDPIsrvd.(h|py): Added socket read/recv timeout.
* nDPIsrvd.h: support for O_NONBLOCK nDPIsrvd_socket

Signed-off-by: lns <matzeton@googlemail.com>
2022-04-17 18:56:30 +02:00
lns
a2547321bb Added more CCs to Github Actions workflow.
Signed-off-by: lns <matzeton@googlemail.com>
2022-04-17 11:28:59 +02:00
lns
c283b89afd Refactored buffer subsystem.
Signed-off-by: lns <matzeton@googlemail.com>
2022-04-16 23:21:24 +02:00
lns
db83f82d29 Fixed build if BUILD_NDPI=ON. May happen during XCompilation.
Signed-off-by: lns <matzeton@googlemail.com>
2022-04-16 22:18:19 +02:00
Toni Uhlig
645aeaf5b4 Avoid CMake searching for gcrypt as default.
* Not necessary anymore because libnDPI now has a builtin gcrypt-light

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-04-02 01:21:15 +02:00
Toni Uhlig
9f9e881b3f bump libnDPI to bb12837ca75efc2691ecb18fd5f56e2d097ef26b
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-03-24 02:16:33 +01:00
Toni Uhlig
65a9e5a18d Executing ./tests/run_tests.sh w/o zLib should not result in diffs anymore.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-03-24 01:04:49 +01:00
Toni Uhlig
c0b7bdacbc Reworked nDPIsrvd.h C-API.
* nDPIsrvd.h: Provide nDPId thread storage.
 * nDPIsrvd.py: Fixed instance cleanup bug.
 * nDPIsrvd.h: Support for instance/thread user data and cleanup callback.
 * nDPIsrvd.h: Most recent flow time stored in thread ht instead of instance ht.
 * nDPId: Moved flow logger out of the memory profiler into SIGUSR1 signal handling.
 * nDPId: Added signal fd to be usable within epoll's event handling (live-capture only!)
 * nDPId: Added information about ZLib compressions to daemon status/shutdown events.

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-03-21 15:56:01 +01:00
Toni Uhlig
daaaa61519 Renamed basic event to error event for the sake of the logic.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-03-13 03:08:44 +01:00
Toni Uhlig
ed1647b944 Disconnect nDPIsrvd clients immediately instead of waiting for a failed write().
* nDPIsrvd: Collector/Distributor logging improved
 * nDPIsrvd: Command line option for max remote descriptors
 * nDPId: Stop spamming nDPIsrvd Collector with the same events over and over again
 * nDPId: Refactored some variable names and events

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-03-13 02:28:10 +01:00
Toni Uhlig
dd35d9da3f CI: Fixed missing lcov prereq.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-03-12 11:17:03 +01:00
Toni Uhlig
f884a538ce Code coverage generation using LCOV.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-03-11 18:29:38 +01:00
Toni Uhlig
41757ecf1c Added nDPIsrvd TCP/IP support for distributors.
* nDPIsrvd: Improved distributor client disconnect detection
 * nDPIsrvd: Fixed invalid usage of epoll_add instead of epoll_mod
 * nDPIsrvd: Improved logging for distributor clients

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-03-10 14:26:07 +01:00
Toni Uhlig
6f1f9e65ea Fixed some Python issues with static class members.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-03-08 14:17:24 +01:00
Toni Uhlig
d0985a5732 Fixed build error regarding missing LINKTYPE_* define's.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-03-06 17:58:25 +01:00
Toni Uhlig
e09dd8509f Updated examples/README.md
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-03-06 17:41:38 +01:00
Toni Uhlig
29c72fb30b Removed go-dashboard example.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-03-06 17:40:35 +01:00
Toni Uhlig
46f68501d5 Added daemon event: DAEMON_EVENT_STATUS (periodically sends daemon statistics).
* Improved distributor timeout handling (per-thread).
 * flow-info.py / flow-dash.py: Distinguish between flow risk severities.
 * nDPId: Skip tag switch datalink packet dissection / processing.
 * nDPId: Fixed incorrect value for current active flows.
 * Improved JSON schemas.

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-03-06 17:38:05 +01:00
Toni Uhlig
9db048c9d9 Serialize flow risk score / confidence.
* bump libnDPI to 8b062295cc76a60e3905c054ce37bd17669464d1
 * removed ndpi_id_struct's

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-02-27 02:53:39 +01:00
Toni Uhlig
cb80c415d8 Improved py-flow-info to provide more optional information about received timestamps.
* py-flow-dashboard: Added color mapping for PieCharts/Graph that make more sense
 * nDPId: Renamed `flow_type' to the more precise `flow_state'
 * nDPId: Changed the default setting to process only as many packets as libnDPI does

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-02-05 15:27:13 +01:00
Toni Uhlig
6fd6dff14d Added additional (minimalistic) detection information to flow updates.
This will only affect flows with the state `FT_FINISHED' (detection done).

 * nDPIsrvd.py: force use of JSON schema Draft 7 validator
 * flow-dash.py: gather/use total processed layer4 payload size
 * flow-info.py: added additional event filter
 * flow-info.py: prettified flow events printing whose detection is in progress
 * py-semantic-validation.py: added validation checks for FT_FINISHED
 * updated flow event JSON schema

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-02-04 01:12:18 +01:00
Toni Uhlig
f9e4c58854 Added logging interface used by nDPId, nDPIsrvd and nDPId-test.
* fixed GitLab pipeline
 * nDPId: added static assert (just for a test)
 * nDPId: memory profiling for total bytes compressed
 * nDPId-test: enable zLib compression if configured with ENABLE_ZLIB

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-02-03 03:48:37 +01:00
Toni Uhlig
1a0d7ddbfa Process additional layer 3 protocols.
* bump libnDPI to c53c82d4823b5a8f856d1375155ac5112b68e8af
 * run_tests.sh: improved execution from non-git directories e.g. via `make dist`
 * updated JSON schema to be more restrictive
 * nDPId: split the generic get_ip_from_sockaddr into IPv4/IPv6 variants to prevent compiler warnings on some platforms

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-01-31 20:54:02 +01:00
Toni Uhlig
7022d0b1c5 nDPIsrvd: Fixed memory leak caused by not clearing the buffer cache after a client disconnected.
* README.md: Fixed a typo and added a meh image from examples/py-flow-dashboard/flow-dash.py

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-01-26 15:38:43 +01:00
Toni Uhlig
80e1eedbef nDPId: Added some error messages when workflow init fails.
* Fixed invalid array subscript typo (caused some trouble...)
 * bump libnDPI to 2cd0479204301c50c6149706fcd4df3058b2a8cc

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-01-26 15:12:28 +01:00
Toni Uhlig
4bae9d0344 py-flow-dashboard: added tab layout and event pie chart
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-01-26 02:34:10 +01:00
Toni Uhlig
29a1b13e7a Improved Plotly/Dash example. It is now somewhat informative.
* TCP timeout after FIN/RST: switched back to the value from a35fc1d5ea
 * py-flow-info: reset 'guessed' flag after detection/detection-update received

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-01-25 11:16:41 +01:00
Toni Uhlig
9e07a57566 Major nDPId extension. Sorry for the huge commit.
- nDPId: fixed invalid IP4/IP6 tuple compare
 - nDPIsrvd: fixed caching issue (finally)
 - added tiny c example (can be used to check flow manager sanity)
 - c-captured: use flow_last_seen timestamp from `struct nDPIsrvd_flow`
 - README.md update: added example JSON sequence
 - nDPId: added new flow event `update` necessary for correct
   timeout handling (and other future use-cases)
 - nDPIsrvd.h and nDPIsrvd.py: switched to an instance
   (consists of an alias/source tuple) based flow manager
 - every flow related event **must** now serialize `alias`, `source`,
   `flow_id`, `flow_last_seen` and `flow_idle_time` to make the timeout
   handling and verification process work correctly
 - nDPIsrvd.h: ability to profile any dynamic memory (de-)allocation
 - nDPIsrvd.py: removed PcapPacket class (unused)
 - py-flow-dashboard and py-flow-multiprocess: fixed race condition
 - py-flow-info: print statusbar with probably useful information
 - nDPId/nDPIsrvd.h: switched from packet-flow only timestamps (`pkt_*sec`)
   to a generic flow event timestamp `ts_msec`
 - nDPId-test: added additional checks
 - nDPId: increased ICMP flow timeout
 - nDPId: using event based i/o if capturing packets from a device
 - nDPIsrvd: fixed memory leak on shutdown if remote descriptors
   were still connected

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2022-01-20 00:50:38 +01:00
Toni Uhlig
a35fc1d5ea Removed py-flow-undetected-to-pcap and py-risky-flow-to-pcap. Done by c-captured anyway.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-11-21 12:01:45 +01:00
Toni Uhlig
cfecf3e110 go-dashboard renaming; ignore go-mod and its file structure
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-11-21 11:45:45 +01:00
Toni Uhlig
25b974af67 Use blocking I/O to prevent data loss if nDPIsrvd is too slow.
* Fixed MemoryProfiler stack overflow.

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-11-16 15:59:38 +01:00
Toni Uhlig
d389f04135 MemoryProfiling: Advanced flow usage logging.
* nDPId-test: disable #include <syslog.h> if NO_MAIN macro defined
 * nDPId-test: mock syslog flags and functions
 * gitlab-ci: force -Werror

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-11-13 17:30:21 +01:00
Toni Uhlig
9075706714 nDPId-test: Set max buffer size for remote descriptors, useful to test caching/buffering.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-11-04 14:08:25 +01:00
Toni Uhlig
1f6d1fbd67 Added timestamp validation test.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-11-02 12:15:41 +01:00
Toni Uhlig
d93c33aa74 Additional semantic validation tests.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-11-02 09:26:23 +01:00
Toni Uhlig
8ecd1b48ef c-captured: Improved format string in nDPIsrvd_write_flow_info_cb.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-10-08 19:01:39 +02:00
Toni Uhlig
3af8de5a58 Fixed compile error due to missing stdint.h include before ndpi_typedefs.h
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-10-08 11:42:27 +02:00
Toni Uhlig
315f90f982 Fixed invalid "flow_last_seen" timestamp for the first packet.
* After the first packet was processed, "flow_last_seen" was still 0.
   This behaviour is invalid as the first packet may contain l4 payload data e.g. for UDP
   and it also breaks nDPId json consistency "flow_first_seen" > 0, but "flow_last_seen" == 0.
 * JSON schema: set minimum timestamp value for Epoch timestamps to 24710 for flow_*_seen and
   1 for pcap packet ts. Those values are dependent on some manipulated pcaps in libnDPI/tests/pcap.
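
The invariant fixed here ("flow_first_seen" > 0 but "flow_last_seen" == 0 must never happen) can be checked with a small validator (a hypothetical helper, not part of the actual schema code; the minimum Epoch value mirrors the 24710 mentioned above):

```python
def validate_flow_timestamps(event, min_epoch_msec=24710):
    """Return True iff the flow_*_seen timestamps are consistent.

    min_epoch_msec mirrors the schema minimum described above, which
    depends on manipulated pcaps in libnDPI/tests/pcap.
    """
    first = event.get('flow_first_seen', 0)
    last = event.get('flow_last_seen', 0)
    if first < min_epoch_msec or last < min_epoch_msec:
        return False
    # the very first processed packet must already set flow_last_seen
    return last >= first
```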

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-10-08 11:31:58 +02:00
Toni Uhlig
fe77c44e3f Added support/debug function to write flow(-user) related info.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-10-08 10:56:23 +02:00
Toni Uhlig
3726311276 bump libnDPI to 181a03c5ad41bda533fbfa307627939c2ff30b75
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-10-05 23:39:11 +02:00
Toni Uhlig
a523c348f3 More CMake warnings/errors/fixes added.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-09-29 18:40:12 +02:00
Toni Uhlig
5a6b2aa261 CMake and CI extensions
* CPack support for debian packages
 * Use CPack version string for nDPId

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-09-29 15:46:47 +02:00
Toni Uhlig
992d3a207d dumb fuzzer: randpkt vs nDPId-test
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-09-20 00:28:44 +02:00
Toni Uhlig
7829bfe4e6 CI extended and fixups
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-09-19 11:30:55 +02:00
Toni Uhlig
4fa1694b05 Github Actions integration
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-09-17 18:59:49 +02:00
Toni Uhlig
c5be804725 Removed Travis-CI support as they do not support OpenSource anymore.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-09-16 16:48:55 +02:00
Toni Uhlig
655f38b68f Fixed some typos and reduced ICMP timeout to 10s.
* nDPId: Renamed some of the misleading terms, still TODO for nDPIsrvd
 * CMake improvements

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-09-16 16:43:30 +02:00
Toni Uhlig
4edf3bf7e6 Merge commit '1fa53c5bf8d0717f784c79abaa5111f88ab00221' 2021-09-15 17:04:21 +02:00
Toni Uhlig
1fa53c5bf8 Squashed 'dependencies/uthash/' changes from 8e67ced..bf15263
bf15263 Fix a "bug" in the example where option 3 interfered with option 1's counter.
b6e24ef Use `malloc(sizeof *s)` in example code.
a109c6b Stop using `gets` in example.c.
c85c9e1 fix: fix utstack example's compiling error
86e6776 Replace *.github.com urls with *.github.io (#227)
e493aa9 Bump version to 2.3.0.
ae2ac52 Fix README.md to display the *actual* TravisCI status.
134e241 Silence -Wswitch-default warnings, and add it to the TravisCI config.
62fefa6 Fix some typos in userguide.txt, and re-remove spaces in macro definitions.
37d2021 tests: add whitespaces to example code
524ca1a doc: add whitespaces to documentation
0f6c619 Fix a typo in the documentation for HASH_COUNT. NFC.
388134a Rename uthash_memcmp to HASH_KEYCMP, step 3.
053bed1 Eliminate HASH_FCN; change the handling of HASH_FUNCTION to match HASH_KEYCMP.
f0e1bd9 Refactor test93.c to avoid scan-build warnings.
45af88c Remove two dead writes in tests, to silence scan-build warnings.
66e2668 Bump version to 2.2.0.
973bd67 uthash.h: Swap multiplicands to put the widest ones first.
15ad042 Always include <stdint.h>, unless HASH_NO_STDINT is defined by the user.
6b4768b Rename uthash_memcmp to HASH_KEYCMP, step 2.
e64c7f0 Update tests/README to describe the most recently added tests. NFC.
c62796c HASH_CLEAR after some tests, to eliminate "memory leak" warnings.
7f0aadb Support spaces in $exe path
0831d9a uthash.h: fix compiler warning -Wcast-qual
ba2fbfd utarray.h: preserve constness in utarray_str_cpy

git-subtree-dir: dependencies/uthash
git-subtree-split: bf15263081be6229be31addd48566df93921cb46
2021-09-15 17:04:21 +02:00
Toni Uhlig
2a5e5a020b Merge commit '8e096b19c1e0b45ccd43cc89d9d80b59bd783529' 2021-09-15 17:03:59 +02:00
Toni Uhlig
8e096b19c1 Squashed 'dependencies/jsmn/' changes from 053d3cd..1aa2e8f
1aa2e8f Update README.md (#203)
b85f161 Update README.md (#213)
23f13d2 Merge pull request #108 from olmokramer/patch-1

git-subtree-dir: dependencies/jsmn
git-subtree-split: 1aa2e8f80849c983466b165d53542da9b1bd1b32
2021-09-15 17:03:59 +02:00
Toni Uhlig
e54c2df63b nDPIsrvd: Fixed another bug, introduced during refactoring -_-
nDPId-test: Collect information about JSON string lengths.

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-09-15 14:33:13 +02:00
Toni Uhlig
c152e41cfb README.md ascii update
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-09-14 18:54:33 +02:00
Toni Uhlig
aa89800ff9 Fixed warnings / build errors / cosmetics
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-09-14 18:38:37 +02:00
Toni Uhlig
ea0b04d648 bump libnDPI to 0eb7a0388c4549ebbf8cd7a10d398088005cc2de
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-09-14 18:19:47 +02:00
Toni Uhlig
6faded3cc7 Improved and fixed another buffering issue caused by removing an outgoing fd too early from the epoll queue (EPOLLOUT).
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-08-05 02:02:51 +02:00
Toni Uhlig
d48508b4af Improved nDPIsrvd buffer bloat handling using caching.
* still allow blocking mode (with send timeout)
 * improved daemon start/stop test script

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-08-04 17:19:15 +02:00
Toni Uhlig
f4c8d96dd9 Gitlab-CI
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-08-03 13:06:12 +02:00
Toni Uhlig
3a76035570 bump libnDPI to 6b7e5fa8d251f11c1bae16ea892a43a92b098480
* fixed linking issue by using CMake to check if explicit link against libm required
 * make nDPIsrvd collectd exit if the parent pid changed, meaning that collectd died somehow
 * nDPId-test restores SIGPIPE to the default handler (termination), so abnormal connection drops now have consequences

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-08-03 12:37:59 +02:00
Toni Uhlig
c32461b032 bump libnDPI to b95bd0358fd43d9fdfdc5266e3c8923b91e1d4db
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-07-14 11:55:17 +02:00
Toni Uhlig
6f04807236 Build JSMN with support for parent links.
* nDPIsrvd.h: iterate over subtokens
 * nDPIsrvd-captured: select/ unselect risky flows to capture

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-07-13 03:35:35 +02:00
Toni Uhlig
19e4038ce5 bump libnDPI to ced6fca184a4549333c2d582e53419f66cd99ec1
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-06-29 17:32:37 +02:00
Toni Uhlig
7d6366ebfc Updated CMake nDPId-test target;
* w/o zLib
 * gcrypt must be enabled explicitly

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-06-18 14:34:09 +02:00
Toni Uhlig
114365a480 Enable memory profiling for nDPId-test.
* print a summary

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-06-18 13:37:44 +02:00
Toni Uhlig
db87d45edb Added zLib compression parameters to control compression conditions.
* more structs are now "compressible"
 * fixed missing DAEMON_RECONNECT event
 * improved memory profiler

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-06-16 19:28:02 +02:00
Toni Uhlig
fac7648326 Support for zLib flow memory compression. Experimental.
Please use this feature only for testing purposes.
It will change or be removed in the future.

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-06-14 15:33:29 +02:00
Toni Uhlig
98b11f814f Removed setting CC, CFLAGS and LDFLAGS explicitly for libnDPI build (BUILD_NDPI=ON).
* for xcompile targets, e.g. for OpenWrt, these env vars are already set

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-06-09 14:14:25 +02:00
Toni Uhlig
e20280cb43 libndpi update
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-06-09 11:38:31 +02:00
Toni Uhlig
4d6ea33aa4 Trying to fix BUILD_NDPI for xcompilation.
* added a CMake warning as well

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-06-09 00:28:54 +02:00
Toni Uhlig
55ecf068b3 Generate a valid version tuple if build was triggered from an unpacked make dist archive.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-06-08 21:20:44 +02:00
Toni Uhlig
d3ebb84ce4 Fixed broken libnDPI build (BUILD_NDPI=ON) if Ninja used as Generator.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-06-08 21:00:18 +02:00
Toni Uhlig
7daeee141d make dist
* fixed run_tests.sh file check bug, CI compat
 * updated results due to libnDPI submodule update

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-06-08 16:18:54 +02:00
Toni Uhlig
a41ddafa88 Git tag/commit version printing for nDPId/nDPIsrvd. Reduces confusion.
* disabled subshell spawn for run_tests.sh, a common pitfall when using counters

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-06-08 15:23:33 +02:00
Toni Uhlig
30502ff0a0 Fixed make daemon target.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-06-07 19:35:45 +02:00
Toni Uhlig
5954e46340 Build system cleanup / cosmetics.
* libnDPI submodule update

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-06-07 16:22:49 +02:00
Toni Uhlig
54e0601fec Unified IO buffer mgmt.
* c-collectd gives the user control over collectd-exec instance name
 * added missing collectd type `flow_l4_icmp_count`

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-06-07 15:04:46 +02:00
Toni Uhlig
382706cd20 flow-dash: Simplified and extended bar graph.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-05-28 18:41:32 +02:00
Toni Uhlig
96dc563d91 flow-dash: Added live bars visualising midstream/risky flow count.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-05-28 02:14:23 +02:00
Toni Uhlig
12e0ae98b6 Added realtime web based graph example using Plotly/Dash.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-05-27 15:05:06 +02:00
Toni Uhlig
2a59c0513c libnDPI updated to c4084ca3c7b3657659aff624158a9c4f5710f57d
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-05-26 17:26:07 +02:00
Toni Uhlig
e3d1a8a772 Added simple Python Multiprocess example.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-05-26 17:18:20 +02:00
Toni Uhlig
4b6ead68a1 nDPIsrvd-captured: skip empty flows based on flow total payload length
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-05-20 15:40:36 +02:00
Toni Uhlig
9a1c2d0ea7 Reworked layer 4 flow length naming/calculation.
* nDPIsrvd services usually do not care about the layer 4 data length;
   payload length is more essential for further processing

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-05-20 14:55:05 +02:00
Toni Uhlig
db39772aa7 Fixed CMake global CFLAGS misuse which can cause xcompile errors.
nDPIsrvd-captured supports skipping flows w/o any layer 4 payload.

 * libndpi update
 * run_tests does not generate any *.out files for fuzz-*.pcap anymore and
   does not fail if nDPId-test exits with value 1 (most likely caused by a libpcap failure)

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-05-19 15:56:20 +02:00
Toni Uhlig
9ffaeef24d README.md update
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-05-14 00:16:47 +02:00
Toni Uhlig
3a0fbe7433 Cosmetic fixes.
* daemon.sh script to simplify daemon testing

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-05-13 20:08:27 +02:00
Toni Uhlig
da4942b41c Use layer4 specific flow timeouts.
* default values "stolen" from nf_conntrack
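
Layer-4 specific timeout selection can be sketched like this (values are illustrative assumptions, not the actual nDPId defaults):

```python
# Hypothetical per-protocol idle timeout table, in milliseconds.
# nf_conntrack inspired the idea of protocol-specific defaults.
L4_FLOW_TIMEOUTS_MSEC = {
    'tcp': 7440 * 1000,
    'udp': 180 * 1000,
    'icmp': 10 * 1000,
}

def flow_timeout_for(l4_proto, default_msec=180 * 1000):
    """Return the idle timeout for a layer-4 protocol name."""
    return L4_FLOW_TIMEOUTS_MSEC.get(l4_proto, default_msec)
```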

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-05-13 15:41:24 +02:00
Toni Uhlig
182867a071 Reduced superfluous Travis-CI yaml content.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-05-12 15:17:07 +02:00
Toni Uhlig
241a7fdc4f Added missing datalink types.
* basically C&P from nDPI reader_utils but with some more sanity checks

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-05-12 13:48:49 +02:00
Toni Uhlig
fa079d2346 Git submodule libnDPI update.
* enable ctest to run integration tests (**only** if BUILD_NDPI=ON)

Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-05-12 12:46:49 +02:00
Toni Uhlig
50f9c1bba1 OpenWrt compatible build system.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-05-11 17:51:57 +02:00
Toni Uhlig
98a6dc5d3b Added GPL-3 License.
Signed-off-by: Toni Uhlig <matzeton@googlemail.com>
2021-05-11 16:33:34 +02:00
1735 changed files with 179573 additions and 96272 deletions

.github/workflows/build-openwrt.yml vendored Normal file

@@ -0,0 +1,71 @@
name: OpenWrt Build
on:
push:
branches:
- main
- tmp
pull_request:
branches:
- master
types: [opened, synchronize, reopened]
release:
types: [created]
jobs:
build:
name: ${{ matrix.arch }} build
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
include:
- arch: arc_archs
target: archs38-generic
- arch: arm_cortex-a9_vfpv3-d16
target: mvebu-cortexa9
- arch: mips_24kc
target: ath79-generic
- arch: mipsel_24kc
target: mt7621
- arch: powerpc_464fp
target: apm821xx-nand
- arch: powerpc_8540
target: mpc85xx-p1010
- arch: aarch64_cortex-a53
target: mvebu-cortexa53
- arch: arm_cortex-a15_neon-vfpv4
target: armvirt-32
- arch: i386_pentium-mmx
target: x86-geode
- arch: x86_64
target: x86-64
steps:
- uses: actions/checkout@v3
with:
submodules: false
fetch-depth: 1
- name: Build
uses: openwrt/gh-action-sdk@master
env:
ARCH: ${{ matrix.arch }}
FEED_DIR: ${{ github.workspace }}/packages/openwrt
FEEDNAME: ndpid_openwrt_packages_ci
PACKAGES: nDPId-testing
- name: Store packages
uses: actions/upload-artifact@v2
with:
name: ${{ matrix.arch}}-packages
path: bin/packages/${{ matrix.arch }}/ndpid_openwrt_packages_ci/*.ipk

.github/workflows/build.yml vendored Normal file

@@ -0,0 +1,118 @@
name: Build
on:
push:
branches:
- main
- tmp
pull_request:
branches:
- main
types: [opened, synchronize, reopened]
release:
types: [created]
jobs:
test:
name: ${{ matrix.os }} ${{ matrix.ndpid_gcrypt }}
runs-on: ${{ matrix.os }}
env:
CMAKE_C_COMPILER: ${{ matrix.compiler }}
CMAKE_C_FLAGS: -Werror
strategy:
fail-fast: true
matrix:
os: ["ubuntu-latest", "ubuntu-18.04"]
ndpid_gcrypt: ["-DNDPI_WITH_GCRYPT=OFF", "-DNDPI_WITH_GCRYPT=ON"]
ndpid_zlib: ["-DENABLE_ZLIB=OFF", "-DENABLE_ZLIB=ON"]
ndpi_min_version: ["4.5"]
include:
- compiler: "default-cc"
os: "ubuntu-latest"
sanitizer: "-DENABLE_SANITIZER=ON"
- compiler: "clang-12"
os: "ubuntu-latest"
sanitizer: "-DENABLE_SANITIZER_THREAD=ON"
- compiler: "gcc-10"
os: "ubuntu-latest"
sanitizer: "-DENABLE_SANITIZER=ON"
- compiler: "gcc-7"
os: "ubuntu-18.04"
sanitizer: "-DENABLE_SANITIZER=ON"
steps:
- uses: actions/checkout@v3
with:
submodules: false
fetch-depth: 1
- name: Install Ubuntu Prerequisites
if: startsWith(matrix.os, 'ubuntu')
run: |
sudo apt-get update
sudo apt-get install autoconf automake cmake libtool pkg-config gettext libjson-c-dev flex bison libpcap-dev zlib1g-dev
sudo apt-get install ${{ matrix.compiler }} lcov iproute2
sudo apt-get install rpm alien
- name: Install Ubuntu Prerequisites (libgcrypt)
if: startsWith(matrix.os, 'ubuntu') && startsWith(matrix.ndpid_gcrypt, '-DNDPI_WITH_GCRYPT=ON')
run: |
sudo apt-get install libgcrypt20-dev
- name: Install Ubuntu Prerequisites (zlib)
if: startsWith(matrix.os, 'ubuntu') && startsWith(matrix.ndpid_zlib, '-DENABLE_ZLIB=ON')
run: |
sudo apt-get install zlib1g-dev
- name: Checking Network Buffer Size
run: |
C_VAL=$(cat config.h | sed -n 's/^#define\s\+NETWORK_BUFFER_MAX_SIZE\s\+\([0-9]\+\).*$/\1/gp')
PY_VAL=$(cat dependencies/nDPIsrvd.py | sed -n 's/^NETWORK_BUFFER_MAX_SIZE = \([0-9]\+\).*$/\1/gp')
test ${C_VAL} = ${PY_VAL}
- name: Configure nDPId
run: |
mkdir build && cd build
cmake .. -DENABLE_SYSTEMD=ON -DENABLE_COVERAGE=ON -DBUILD_EXAMPLES=ON -DBUILD_NDPI=ON ${{ matrix.sanitizer }} ${{ matrix.ndpid_zlib }} ${{ matrix.ndpid_gcrypt }}
- name: Build nDPId
run: |
make -C build all VERBOSE=1
- name: Test EXEC
run: |
./build/nDPId-test || test $? -eq 1
./build/nDPId -h || test $? -eq 1
./build/nDPIsrvd -h || test $? -eq 1
- name: Test DIFF
if: startsWith(matrix.os, 'ubuntu') && startsWith(matrix.ndpid_gcrypt, '-DNDPI_WITH_GCRYPT=OFF')
run: |
./test/run_tests.sh ./libnDPI ./build/nDPId-test
- name: Daemon
run: |
make -C ./build daemon VERBOSE=1
make -C ./build daemon VERBOSE=1
- name: Coverage
run: |
make -C ./build coverage
- name: Dist
run: |
make -C ./build dist
- name: CPack DEB
run: |
cd ./build && cpack -G DEB && sudo dpkg -i nDPId-*.deb && cd ..
- name: CPack RPM
run: |
cd ./build && cpack -G RPM
- name: systemd test
if: startsWith(matrix.os, 'ubuntu-latest') && startsWith(matrix.compiler, 'default-cc')
run: |
sudo systemctl daemon-reload
sudo systemctl enable ndpid@lo
sudo systemctl start ndpid@lo
sudo systemctl status ndpisrvd.service ndpid@lo.service
sudo systemctl show ndpisrvd.service ndpid@lo.service -p SubState,ActiveState
- name: Build against libnDPI-${{ matrix.ndpi_min_version }}
run: |
mkdir build-local-ndpi && cd build-local-ndpi
WGET_RET=0
wget 'https://github.com/ntop/nDPI/archive/refs/tags/${{ matrix.ndpi_min_version }}.tar.gz' || { WGET_RET=$?; true; }
echo "wget returned: ${WGET_RET}"
test $WGET_RET -ne 8 || echo "::warning file=nDPId.c::New libnDPI release required to build against release tarball."
test $WGET_RET -ne 0 || { tar -xzvf ${{ matrix.ndpi_min_version }}.tar.gz && cd nDPI-${{ matrix.ndpi_min_version }} && ./autogen.sh --prefix=/usr --with-only-libndpi CC=${{ matrix.compiler }} CXX=false CFLAGS='-Werror' && sudo make install && cd .. ; }
test $WGET_RET -ne 0 || { echo "running cmake .."; cmake .. -DENABLE_COVERAGE=ON -DBUILD_EXAMPLES=ON -DBUILD_NDPI=OFF -DENABLE_SANITIZER=ON ${{ matrix.ndpi_min_version }} ; }
test $WGET_RET -ne 0 || { echo "running make .."; make all VERBOSE=1 ; }
test $WGET_RET -eq 0 -o $WGET_RET -eq 8

.gitignore vendored

@@ -4,3 +4,9 @@ __pycache__
# go related
*.sum
# lockfiles generated by some shell scripts
*.lock
# building folder
build

.gitlab-ci.yml Normal file

@@ -0,0 +1,127 @@
image: debian:stable
stages:
- build_and_test
before_script:
- export DEBIAN_FRONTEND=noninteractive
- apt-get update -qq
- >
apt-get install -y -qq \
coreutils sudo \
build-essential make cmake binutils gcc clang autoconf automake \
libtool pkg-config git \
libpcap-dev libgpg-error-dev libjson-c-dev zlib1g-dev \
netcat-openbsd python3 python3-jsonschema tree lcov iproute2
after_script:
- test -r /tmp/nDPIsrvd.log && cat /tmp/nDPIsrvd.log
- test -r /tmp/nDPId.log && cat /tmp/nDPId.log
build_and_test_static_libndpi_tsan:
script:
# test for NETWORK_BUFFER_MAX_SIZE C and Python value equality
- C_VAL=$(cat config.h | sed -n 's/^#define\s\+NETWORK_BUFFER_MAX_SIZE\s\+\([0-9]\+\).*$/\1/gp')
- PY_VAL=$(cat dependencies/nDPIsrvd.py | sed -n 's/^NETWORK_BUFFER_MAX_SIZE = \([0-9]\+\).*$/\1/gp')
- test ${C_VAL} = ${PY_VAL}
# test for nDPId_PACKETS_PLEN_MAX C and Python value equality
- C_VAL=$(cat config.h | sed -n 's/^#define\s\+nDPId_PACKETS_PLEN_MAX\s\+\([0-9]\+\).*$/\1/gp')
- PY_VAL=$(cat dependencies/nDPIsrvd.py | sed -n 's/^nDPId_PACKETS_PLEN_MAX = \([0-9]\+\).*$/\1/gp')
- test ${C_VAL} = ${PY_VAL}
# static linked build
- mkdir build-clang-tsan
- cd build-clang-tsan
- env CMAKE_C_FLAGS='-Werror' CMAKE_C_COMPILER='clang' cmake .. -DBUILD_EXAMPLES=ON -DBUILD_NDPI=ON -DENABLE_SANITIZER_THREAD=ON -DENABLE_ZLIB=ON
- make distclean-libnDPI
- make libnDPI
- tree libnDPI
- make install VERBOSE=1 DESTDIR="$(realpath ../_install)"
- cd ..
- ./test/run_tests.sh ./libnDPI ./_install/usr/local/bin/nDPId-test
artifacts:
expire_in: 1 week
paths:
- _install/
stage: build_and_test
build_and_test_static_libndpi:
script:
- mkdir build-cmake-submodule
- cd build-cmake-submodule
- env CMAKE_C_FLAGS='-Werror' cmake .. -DENABLE_SYSTEMD=ON -DBUILD_EXAMPLES=ON -DBUILD_NDPI=ON -DENABLE_ZLIB=ON
- make distclean-libnDPI
- make libnDPI
- tree libnDPI
- make install VERBOSE=1 DESTDIR="$(realpath ../_install)"
- cpack -G DEB
- sudo dpkg -i nDPId-*.deb
- cd ..
- test -x /bin/systemctl && sudo systemctl daemon-reload
- test -x /bin/systemctl && sudo systemctl enable ndpid@lo
- test -x /bin/systemctl && sudo systemctl start ndpid@lo
- test -x /bin/systemctl && sudo systemctl status ndpisrvd.service ndpid@lo.service
- test -x /bin/systemctl && sudo systemctl stop ndpid@lo
- ./test/run_tests.sh ./libnDPI ./build-cmake-submodule/nDPId-test
- >
if ldd ./build-cmake-submodule/nDPId | grep -qoEi libndpi; then \
echo 'nDPId linked against a static libnDPI should not contain a shared linked libnDPI.' >&2; false; fi
artifacts:
expire_in: 1 week
paths:
- build-cmake-submodule/*.deb
- _install/
stage: build_and_test
build_and_test_static_libndpi_coverage:
script:
- mkdir build-cmake-submodule
- cd build-cmake-submodule
- env CMAKE_C_FLAGS='-Werror' cmake .. -DENABLE_SYSTEMD=ON -DENABLE_COVERAGE=ON -DBUILD_EXAMPLES=ON -DBUILD_NDPI=ON -DENABLE_SANITIZER=ON -DENABLE_ZLIB=ON
- make distclean-libnDPI
- make libnDPI
- tree libnDPI
- make install VERBOSE=1 DESTDIR="$(realpath ../_install)"
- cd ..
- ./test/run_tests.sh ./libnDPI ./build-cmake-submodule/nDPId-test
# generate coverage report
- make -C ./build-cmake-submodule coverage
- >
if ldd build/nDPId | grep -qoEi libndpi; then \
echo 'nDPId linked against a static libnDPI should not contain a shared linked libnDPI.' >&2; false; fi
artifacts:
expire_in: 1 week
paths:
- build-cmake-submodule/coverage_report
- _install/
stage: build_and_test
build_dynamic_libndpi:
script:
# pkg-config dynamic linked build
- git clone https://github.com/ntop/nDPI.git
- cd nDPI
- ./autogen.sh --prefix="$(realpath ../_install)" --enable-option-checking=fatal
- make install V=s
- cd ..
- tree ./_install
- mkdir build
- cd build
- export CMAKE_PREFIX_PATH="$(realpath ../_install)"
- env CMAKE_C_FLAGS='-Werror' cmake .. -DBUILD_EXAMPLES=ON -DENABLE_SANITIZER=ON -DENABLE_MEMORY_PROFILING=ON -DENABLE_ZLIB=ON
- make all VERBOSE=1
- make install VERBOSE=1 DESTDIR="$(realpath ../_install)"
- cd ..
- tree ./_install
- ./build/nDPId-test || test $? -eq 1
- ./build/nDPId -h || test $? -eq 1
- ./build/nDPIsrvd -h || test $? -eq 1
# daemon start/stop test
- NUSER=nobody make -C ./build daemon VERBOSE=1
- NUSER=nobody make -C ./build daemon VERBOSE=1
# make dist
- make -C ./build dist
artifacts:
expire_in: 1 week
paths:
- _install/
stage: build_and_test

.gitmodules vendored

@@ -1,3 +1,11 @@
[submodule "libnDPI"]
path = libnDPI
url = https://github.com/ntop/nDPI
branch = dev
update = rebase
[submodule "examples/js-rt-analyzer"]
path = examples/js-rt-analyzer
url = https://gitlab.com/verzulli/ndpid-rt-analyzer.git
[submodule "examples/js-rt-analyzer-frontend"]
path = examples/js-rt-analyzer-frontend
url = https://gitlab.com/verzulli/ndpid-rt-analyzer-frontend.git


@@ -1,13 +0,0 @@
language: c
before_install:
- sudo apt-get -qq update
- sudo apt-get install -y build-essential make binutils gcc autoconf automake libtool pkg-config git libpcap-dev libgcrypt-dev libgpg-error-dev libjson-c-dev netcat-openbsd python3 python3-jsonschema
script:
- git submodule update --init
# static linked build
- mkdir build-cmake-submodule && cd build-cmake-submodule && cmake .. -DBUILD_EXAMPLES=ON -DBUILD_NDPI=ON -DENABLE_SANITIZER=ON && make && cd ..
# pkg-config dynamic linked build
- PKG_CONFIG_PATH="$(realpath ./build-cmake-submodule/libnDPI/lib/pkgconfig)" cmake . -DBUILD_EXAMPLES=ON -DENABLE_SANITIZER=ON -DENABLE_MEMORY_PROFILING=ON && make
- ./nDPId-test || test $? -eq 1
- ./nDPId -h || test $? -eq 1
- ./test/run_tests.sh


@@ -1,46 +1,205 @@
cmake_minimum_required(VERSION 3.12.4)
project(nDPId C)
if("${PROJECT_SOURCE_DIR}" STREQUAL "${PROJECT_BINARY_DIR}")
message(FATAL_ERROR "In-source builds are not allowed.\n"
"Please remove ${PROJECT_SOURCE_DIR}/CMakeCache.txt\n"
"and\n"
"${PROJECT_SOURCE_DIR}/CMakeFiles\n"
"Create a build directory somewhere and run CMake again.")
endif()
set(CMAKE_MODULE_PATH ${CMAKE_SOURCE_DIR}/cmake)
find_package(PkgConfig REQUIRED)
set(CMAKE_PROJECT_HOMEPAGE_URL "https://github.com/utoni/nDPId")
set(CPACK_PACKAGE_NAME "nDPId")
set(CPACK_PACKAGE_CONTACT "toni@impl.cc")
set(CPACK_PACKAGE_DESCRIPTION_SUMMARY "Tiny nDPI based deep packet inspection daemons / toolkit.")
set(CPACK_PACKAGE_DESCRIPTION_FILE "${CMAKE_CURRENT_SOURCE_DIR}/README.md")
set(CPACK_RESOURCE_FILE_README "${CMAKE_CURRENT_SOURCE_DIR}/README.md")
set(CPACK_RESOURCE_FILE_LICENSE "${CMAKE_CURRENT_SOURCE_DIR}/COPYING")
set(CPACK_PACKAGE_VERSION_MAJOR 1)
set(CPACK_PACKAGE_VERSION_MINOR 5)
set(CPACK_PACKAGE_VERSION_PATCH 0)
set(CPACK_DEBIAN_PACKAGE_SHLIBDEPS ON)
set(CPACK_RPM_PACKAGE_LICENSE "GPL-3.0")
include(CPack)
include(CheckFunctionExists)
if(NOT MATH_FUNCTION_EXISTS AND NOT NEED_LINKING_AGAINST_LIBM)
CHECK_FUNCTION_EXISTS(log2f MATH_FUNCTION_EXISTS)
if(NOT MATH_FUNCTION_EXISTS)
unset(MATH_FUNCTION_EXISTS CACHE)
list(APPEND CMAKE_REQUIRED_LIBRARIES m)
CHECK_FUNCTION_EXISTS(log2f MATH_FUNCTION_EXISTS)
if(MATH_FUNCTION_EXISTS)
set(NEED_LINKING_AGAINST_LIBM TRUE CACHE BOOL "" FORCE)
else()
message(FATAL_ERROR "Failed making the log2f() function available")
endif()
endif()
endif()
if(NEED_LINKING_AGAINST_LIBM)
set(LIBM_LIB "-lm")
else()
set(LIBM_LIB "")
endif()
option(ENABLE_COVERAGE "Generate a code coverage report using lcov/genhtml." OFF)
option(ENABLE_SANITIZER "Enable ASAN/LSAN/UBSAN." OFF)
option(ENABLE_SANITIZER_THREAD "Enable TSAN (does not work together with ASAN)." OFF)
option(ENABLE_MEMORY_PROFILING "Enable dynamic memory tracking." OFF)
option(ENABLE_ZLIB "Enable zlib support for nDPId (experimental)." OFF)
option(ENABLE_SYSTEMD "Install systemd components." OFF)
option(ENABLE_GNUTLS "Enable GnuTLS support for nDPIsrvd TCP connections." ON)
option(BUILD_EXAMPLES "Build C examples." ON)
option(BUILD_NDPI "Clone and build nDPI from github." OFF)
if(BUILD_NDPI)
unset(NDPI_NO_PKGCONFIG CACHE)
unset(STATIC_LIBNDPI_INSTALLDIR CACHE)
else()
option(NDPI_NO_PKGCONFIG "Do not use pkgconfig to search for libnDPI." OFF)
if(NDPI_NO_PKGCONFIG)
set(STATIC_LIBNDPI_INSTALLDIR "/opt/libnDPI/usr" CACHE STRING "Path to an installation directory of libnDPI e.g. /opt/libnDPI/usr")
if(STATIC_LIBNDPI_INSTALLDIR STREQUAL "")
message(FATAL_ERROR "STATIC_LIBNDPI_INSTALLDIR can not be an empty string within your configuration!")
endif()
else()
unset(STATIC_LIBNDPI_INSTALLDIR CACHE)
endif()
endif()
set(STATIC_LIBNDPI_INSTALLDIR "" CACHE STRING "Path to an installation directory of libnDPI e.g. /opt/libnDPI/usr")
if(STATIC_LIBNDPI_INSTALLDIR OR BUILD_NDPI OR NDPI_NO_PKGCONFIG)
option(NDPI_WITH_GCRYPT "Link static libndpi library against libgcrypt." OFF)
option(NDPI_WITH_PCRE "Link static libndpi library against libpcre." OFF)
option(NDPI_WITH_MAXMINDDB "Link static libndpi library against libmaxminddb." OFF)
else()
unset(NDPI_WITH_GCRYPT CACHE)
unset(NDPI_WITH_PCRE CACHE)
unset(NDPI_WITH_MAXMINDDB CACHE)
endif()
add_executable(nDPId nDPId.c utils.c)
add_executable(nDPIsrvd nDPIsrvd.c utils.c)
add_executable(nDPId-test nDPId-test.c utils.c)
add_executable(nDPId-test nDPId-test.c)
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Wall -Wextra -DJSMN_STATIC=1 -DJSMN_STRICT=1")
set(BUILD_NDPI_CONFIGURE_OPTS "")
add_custom_target(dist)
add_custom_command(
TARGET dist
COMMAND "${CMAKE_SOURCE_DIR}/scripts/make-dist.sh"
)
if(ENABLE_MEMORY_PROFILING)
set(MEMORY_PROFILING_CFLAGS "-DENABLE_MEMORY_PROFILING=1"
"-Duthash_malloc=nDPIsrvd_uthash_malloc"
"-Duthash_free=nDPIsrvd_uthash_free")
else()
set(MEMORY_PROFILING_CFLAGS "")
add_custom_target(daemon)
add_custom_command(
TARGET daemon
COMMAND env nDPIsrvd_ARGS='-C 1024' "${CMAKE_SOURCE_DIR}/scripts/daemon.sh" "$<TARGET_FILE:nDPId>" "$<TARGET_FILE:nDPIsrvd>"
DEPENDS nDPId nDPIsrvd
)
if(CMAKE_CROSSCOMPILING)
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_PACKAGE ONLY)
endif()
if(CMAKE_BUILD_TYPE STREQUAL "Debug")
if(BUILD_NDPI)
enable_testing()
add_test(NAME run_tests
COMMAND "${CMAKE_SOURCE_DIR}/test/run_tests.sh"
"${CMAKE_SOURCE_DIR}/libnDPI"
"$<TARGET_FILE:nDPId-test>")
if(NDPI_WITH_PCRE OR NDPI_WITH_MAXMINDDB)
message(WARNING "NDPI_WITH_PCRE or NDPI_WITH_MAXMINDDB enabled.\n"
"${CMAKE_CURRENT_SOURCE_DIR}/test/run_tests.sh or ctest will fail!")
endif()
endif()
if(ENABLE_COVERAGE)
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -fprofile-arcs -ftest-coverage")
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} --coverage")
set(CMAKE_MODULE_LINKER_FLAGS "${CMAKE_MODULE_LINKER_FLAGS} --coverage")
add_custom_target(coverage DEPENDS "${CMAKE_BINARY_DIR}/coverage_report/nDPId/index.html")
add_custom_command(
OUTPUT "${CMAKE_BINARY_DIR}/coverage_report/nDPId/index.html"
COMMAND lcov --directory "${CMAKE_BINARY_DIR}" --directory "${CMAKE_SOURCE_DIR}/libnDPI" --capture --output-file "${CMAKE_BINARY_DIR}/lcov.info"
COMMAND genhtml -o "${CMAKE_BINARY_DIR}/coverage_report" "${CMAKE_BINARY_DIR}/lcov.info"
DEPENDS nDPId nDPId-test nDPIsrvd
)
add_custom_target(coverage-view)
add_custom_command(
TARGET coverage-view
COMMAND cd "${CMAKE_BINARY_DIR}/coverage_report" && python3 -m http.server
DEPENDS "${CMAKE_BINARY_DIR}/coverage_report/nDPId/index.html"
)
endif()
if(ENABLE_SANITIZER)
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -fsanitize=address -fsanitize=undefined -fno-sanitize=alignment -fsanitize=enum -fsanitize=leak")
endif()
if(ENABLE_SANITIZER_THREAD)
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -fsanitize=undefined -fno-sanitize=alignment -fsanitize=enum -fsanitize=thread")
endif()
if(ENABLE_ZLIB)
set(NDPID_DEFS ${NDPID_DEFS} -DENABLE_ZLIB=1)
pkg_check_modules(ZLIB REQUIRED zlib)
endif()
if(ENABLE_GNUTLS)
set(NDPID_DEFS ${NDPID_DEFS} -DENABLE_GNUTLS=1)
pkg_check_modules(GNUTLS REQUIRED gnutls)
endif()
if(NDPI_WITH_GCRYPT)
message(STATUS "nDPI: Enable GCRYPT")
set(NDPI_ADDITIONAL_ARGS "${NDPI_ADDITIONAL_ARGS} --with-local-libgcrypt")
endif()
if(NDPI_WITH_PCRE)
message(STATUS "nDPI: Enable PCRE")
set(NDPI_ADDITIONAL_ARGS "${NDPI_ADDITIONAL_ARGS} --with-pcre")
endif()
if(NDPI_WITH_MAXMINDDB)
message(STATUS "nDPI: Enable MAXMINDDB")
set(NDPI_ADDITIONAL_ARGS "${NDPI_ADDITIONAL_ARGS} --with-maxminddb")
endif()
if(ENABLE_COVERAGE)
message(STATUS "nDPI: Enable Coverage")
set(NDPI_ADDITIONAL_ARGS "${NDPI_ADDITIONAL_ARGS} --enable-code-coverage")
endif()
execute_process(
COMMAND git describe --tags
WORKING_DIRECTORY "${CMAKE_SOURCE_DIR}"
OUTPUT_VARIABLE GIT_VERSION ERROR_QUIET)
string(STRIP "${GIT_VERSION}" GIT_VERSION)
if(GIT_VERSION STREQUAL "" OR NOT IS_DIRECTORY "${CMAKE_SOURCE_DIR}/.git")
if(CMAKE_BUILD_TYPE STREQUAL "Debug" OR CMAKE_BUILD_TYPE STREQUAL "")
set(GIT_VERSION "${CPACK_PACKAGE_VERSION}-pre")
else()
set(GIT_VERSION "${CPACK_PACKAGE_VERSION}-release")
endif()
endif()
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Wall -Wextra")
set(NDPID_DEFS -DJSMN_STATIC=1 -DJSMN_STRICT=1 -DJSMN_PARENT_LINKS=1)
set(NDPID_DEPS_INC "${CMAKE_SOURCE_DIR}"
"${CMAKE_SOURCE_DIR}/dependencies"
"${CMAKE_SOURCE_DIR}/dependencies/jsmn"
"${CMAKE_SOURCE_DIR}/dependencies/uthash/src")
if(ENABLE_MEMORY_PROFILING)
message(WARNING "ENABLE_MEMORY_PROFILING should not be used in production environments.")
add_definitions("-DENABLE_MEMORY_PROFILING=1"
"-Duthash_malloc=nDPIsrvd_uthash_malloc"
"-Duthash_free=nDPIsrvd_uthash_free")
else()
set(NDPID_TEST_MPROF_DEFS "-DENABLE_MEMORY_PROFILING=1")
endif()
if(CMAKE_BUILD_TYPE STREQUAL "Debug" OR CMAKE_BUILD_TYPE STREQUAL "")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -O0 -g3 -fno-omit-frame-pointer -fno-inline")
endif()
if(ENABLE_SANITIZER AND ENABLE_SANITIZER_THREAD)
message(FATAL_ERROR "ENABLE_SANITIZER and ENABLE_SANITIZER_THREAD can not be used together!")
endif()
if(ENABLE_SANITIZER)
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -fsanitize=address -fsanitize=undefined -fsanitize=enum -fsanitize=leak")
set(BUILD_NDPI_CONFIGURE_OPTS "${BUILD_NDPI_CONFIGURE_OPTS} --with-sanitizer")
endif()
if(ENABLE_SANITIZER_THREAD)
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -fsanitize=undefined -fsanitize=enum -fsanitize=thread")
endif()
if(BUILD_NDPI)
ExternalProject_Add(
libnDPI
SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/libnDPI
DOWNLOAD_COMMAND ""
CONFIGURE_COMMAND env
CC=${CMAKE_C_COMPILER}
CXX=false
AR=${CMAKE_AR}
RANLIB=${CMAKE_RANLIB}
PKG_CONFIG=${PKG_CONFIG_EXECUTABLE}
CFLAGS=${CMAKE_C_FLAGS}
LDFLAGS=${CMAKE_MODULE_LINKER_FLAGS}
ADDITIONAL_ARGS=${NDPI_ADDITIONAL_ARGS}
MAKE_PROGRAM=${CMAKE_MAKE_PROGRAM}
DEST_INSTALL=${CMAKE_BINARY_DIR}/libnDPI
${CMAKE_CURRENT_SOURCE_DIR}/scripts/get-and-build-libndpi.sh
BUILD_BYPRODUCTS ${CMAKE_BINARY_DIR}/libnDPI/lib/libndpi.a
BUILD_COMMAND ""
INSTALL_COMMAND ""
BUILD_IN_SOURCE 1)
add_custom_target(clean-libnDPI
COMMAND rm -rf ${CMAKE_BINARY_DIR}/libnDPI ${CMAKE_BINARY_DIR}/libnDPI-prefix
)
add_custom_target(distclean-libnDPI
COMMAND cd ${CMAKE_SOURCE_DIR}/libnDPI && git clean -df . && git clean -dfX .
)
add_dependencies(distclean-libnDPI clean-libnDPI)
set(STATIC_LIBNDPI_INSTALLDIR "${CMAKE_BINARY_DIR}/libnDPI")
add_dependencies(nDPId libnDPI)
add_dependencies(nDPId-test libnDPI)
endif()
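The clean-libnDPI and distclean-libnDPI helper targets declared above can be driven like any other CMake target; a minimal sketch (target names as defined in this file, run from an existing build directory):

```shell
# Sketch: force a from-scratch rebuild of the vendored libnDPI.
# distclean-libnDPI depends on clean-libnDPI, so invoking it removes
# the staged install/prefix dirs and then git-cleans the submodule.
cmake --build . --target distclean-libnDPI
# The next regular build re-runs the libnDPI ExternalProject step.
cmake --build .
```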
option(NDPI_WITH_GCRYPT "Link static libndpi library against libgcrypt." ON)
option(NDPI_WITH_PCRE "Link static libndpi library against libpcre." OFF)
option(NDPI_WITH_MAXMINDDB "Link static libndpi library against libmaxminddb." OFF)
if(STATIC_LIBNDPI_INSTALLDIR OR BUILD_NDPI OR NDPI_NO_PKGCONFIG)
if(NDPI_WITH_GCRYPT)
find_package(GCRYPT "1.4.2" REQUIRED)
endif()
if(NDPI_WITH_PCRE)
pkg_check_modules(PCRE REQUIRED libpcre>=8.39)
endif()
if(NDPI_WITH_MAXMINDDB)
pkg_check_modules(MAXMINDDB REQUIRED libmaxminddb)
endif()
endif()
if(STATIC_LIBNDPI_INSTALLDIR OR BUILD_NDPI)
add_definitions("-DLIBNDPI_STATIC=1")
set(STATIC_LIBNDPI_INC "${STATIC_LIBNDPI_INSTALLDIR}/include/ndpi")
set(STATIC_LIBNDPI_LIB "${STATIC_LIBNDPI_INSTALLDIR}/lib/libndpi.a")
if(STATIC_LIBNDPI_INSTALLDIR AND NOT BUILD_NDPI)
if(NOT EXISTS "${STATIC_LIBNDPI_INC}" OR NOT EXISTS "${STATIC_LIBNDPI_LIB}")
message(FATAL_ERROR "Include directory \"${STATIC_LIBNDPI_INC}\" or\n"
"static library \"${STATIC_LIBNDPI_LIB}\" does not exist!")
endif()
endif()
unset(DEFAULT_NDPI_INCLUDE CACHE)
else()
if(NOT NDPI_NO_PKGCONFIG)
pkg_check_modules(NDPI REQUIRED libndpi>=4.5.0)
unset(STATIC_LIBNDPI_INC CACHE)
unset(STATIC_LIBNDPI_LIB CACHE)
endif()
set(DEFAULT_NDPI_INCLUDE ${NDPI_INCLUDE_DIRS})
endif()
find_package(PCAP "1.8.1" REQUIRED)
target_compile_options(nDPId PRIVATE ${MEMORY_PROFILING_CFLAGS} "-pthread")
target_compile_definitions(nDPId PRIVATE -D_GNU_SOURCE=1 -DGIT_VERSION=\"${GIT_VERSION}\" ${NDPID_DEFS} ${ZLIB_DEFS})
target_include_directories(nDPId PRIVATE "${STATIC_LIBNDPI_INC}" "${DEFAULT_NDPI_INCLUDE}" ${NDPID_DEPS_INC})
target_link_libraries(nDPId "${STATIC_LIBNDPI_LIB}" "${pkgcfg_lib_NDPI_ndpi}"
"${pkgcfg_lib_PCRE_pcre}" "${pkgcfg_lib_MAXMINDDB_maxminddb}" "${pkgcfg_lib_ZLIB_z}"
"${GCRYPT_LIBRARY}" "${GCRYPT_ERROR_LIBRARY}" "${PCAP_LIBRARY}" "${LIBM_LIB}"
"-pthread")
target_compile_definitions(nDPIsrvd PRIVATE -D_GNU_SOURCE=1 -DGIT_VERSION=\"${GIT_VERSION}\" ${NDPID_DEFS})
target_include_directories(nDPIsrvd PRIVATE ${NDPID_DEPS_INC})
target_link_libraries(nDPIsrvd "${pkgcfg_lib_GNUTLS_gnutls}")
target_compile_options(nDPId-test PRIVATE ${MEMORY_PROFILING_CFLAGS} "-Wno-unused-function" "-pthread")
target_compile_definitions(nDPId-test PRIVATE -D_GNU_SOURCE=1 -DNO_MAIN=1 -DGIT_VERSION=\"${GIT_VERSION}\"
${NDPID_DEFS} ${NDPID_TEST_MPROF_DEFS})
target_include_directories(nDPId-test PRIVATE
"${STATIC_LIBNDPI_INC}" "${DEFAULT_NDPI_INCLUDE}" ${NDPID_DEPS_INC})
target_link_libraries(nDPId-test "${STATIC_LIBNDPI_LIB}" "${pkgcfg_lib_NDPI_ndpi}"
"${pkgcfg_lib_PCRE_pcre}" "${pkgcfg_lib_MAXMINDDB_maxminddb}" "${pkgcfg_lib_ZLIB_z}"
"${pkgcfg_lib_GNUTLS_gnutls}"
"${GCRYPT_LIBRARY}" "${GCRYPT_ERROR_LIBRARY}" "${PCAP_LIBRARY}" "${LIBM_LIB}"
"-pthread")
if(BUILD_EXAMPLES)
add_executable(nDPIsrvd-collectd examples/c-collectd/c-collectd.c)
target_compile_options(nDPIsrvd-collectd PRIVATE ${MEMORY_PROFILING_CFLAGS})
if(BUILD_NDPI)
add_dependencies(nDPIsrvd-collectd libnDPI)
endif()
target_compile_definitions(nDPIsrvd-collectd PRIVATE ${NDPID_DEFS})
target_include_directories(nDPIsrvd-collectd PRIVATE
"${STATIC_LIBNDPI_INC}" "${DEFAULT_NDPI_INCLUDE}" "${CMAKE_SOURCE_DIR}" ${NDPID_DEPS_INC})
add_executable(nDPIsrvd-captured examples/c-captured/c-captured.c utils.c)
target_compile_options(nDPIsrvd-captured PRIVATE ${MEMORY_PROFILING_CFLAGS})
if(BUILD_NDPI)
add_dependencies(nDPIsrvd-captured libnDPI)
endif()
target_compile_definitions(nDPIsrvd-captured PRIVATE ${NDPID_DEFS})
target_include_directories(nDPIsrvd-captured PRIVATE
"${STATIC_LIBNDPI_INC}" "${DEFAULT_NDPI_INCLUDE}" "${CMAKE_SOURCE_DIR}" ${NDPID_DEPS_INC})
target_link_libraries(nDPIsrvd-captured "${pkgcfg_lib_NDPI_ndpi}"
"${pkgcfg_lib_PCRE_pcre}" "${pkgcfg_lib_MAXMINDDB_maxminddb}"
"${GCRYPT_LIBRARY}" "${GCRYPT_ERROR_LIBRARY}" "${PCAP_LIBRARY}")
add_executable(nDPIsrvd-json-dump examples/c-json-stdout/c-json-stdout.c)
target_compile_definitions(nDPIsrvd-json-dump PRIVATE ${NDPID_DEFS})
target_include_directories(nDPIsrvd-json-dump PRIVATE ${NDPID_DEPS_INC})
add_executable(nDPIsrvd-analysed examples/c-analysed/c-analysed.c utils.c)
target_compile_definitions(nDPIsrvd-analysed PRIVATE ${NDPID_DEFS})
target_include_directories(nDPIsrvd-analysed PRIVATE ${NDPID_DEPS_INC})
add_executable(nDPIsrvd-simple examples/c-simple/c-simple.c)
target_compile_definitions(nDPIsrvd-simple PRIVATE ${NDPID_DEFS})
target_include_directories(nDPIsrvd-simple PRIVATE ${NDPID_DEPS_INC})
if(ENABLE_COVERAGE)
add_dependencies(coverage nDPIsrvd-analysed nDPIsrvd-collectd nDPIsrvd-captured nDPIsrvd-json-dump nDPIsrvd-simple)
endif()
install(TARGETS nDPIsrvd-analysed nDPIsrvd-collectd nDPIsrvd-captured nDPIsrvd-json-dump nDPIsrvd-simple DESTINATION bin)
install(FILES examples/c-collectd/plugin_nDPIsrvd.conf examples/c-collectd/rrdgraph.sh DESTINATION share/nDPId/nDPIsrvd-collectd)
install(DIRECTORY examples/c-collectd/www DESTINATION share/nDPId/nDPIsrvd-collectd)
endif()
if(ENABLE_SYSTEMD)
install(FILES packages/systemd/ndpisrvd.service DESTINATION lib/systemd/system)
install(FILES packages/systemd/ndpid@.service DESTINATION lib/systemd/system)
endif()
install(TARGETS nDPId DESTINATION sbin)
install(TARGETS nDPIsrvd nDPId-test DESTINATION bin)
if(BUILD_EXAMPLES)
install(FILES dependencies/nDPIsrvd.py examples/py-flow-dashboard/plotly_dash.py
DESTINATION share/nDPId)
install(FILES examples/py-flow-info/flow-info.py
DESTINATION bin RENAME nDPIsrvd-flow-info.py
PERMISSIONS OWNER_READ OWNER_EXECUTE GROUP_READ GROUP_EXECUTE WORLD_READ WORLD_EXECUTE)
install(FILES examples/py-flow-dashboard/flow-dash.py
DESTINATION bin RENAME nDPIsrvd-flow-dash.py
PERMISSIONS OWNER_READ OWNER_EXECUTE GROUP_READ GROUP_EXECUTE WORLD_READ WORLD_EXECUTE)
install(FILES examples/py-json-stdout/json-stdout.py
DESTINATION bin RENAME nDPIsrvd-json-stdout.py
PERMISSIONS OWNER_READ OWNER_EXECUTE GROUP_READ GROUP_EXECUTE WORLD_READ WORLD_EXECUTE)
install(FILES examples/py-schema-validation/py-schema-validation.py
DESTINATION bin RENAME nDPIsrvd-schema-validation.py
PERMISSIONS OWNER_READ OWNER_EXECUTE GROUP_READ GROUP_EXECUTE WORLD_READ WORLD_EXECUTE)
install(FILES examples/py-semantic-validation/py-semantic-validation.py
DESTINATION bin RENAME nDPIsrvd-semantic-validation.py
PERMISSIONS OWNER_READ OWNER_EXECUTE GROUP_READ GROUP_EXECUTE WORLD_READ WORLD_EXECUTE)
install(FILES examples/py-machine-learning/sklearn-random-forest.py
DESTINATION bin RENAME nDPIsrvd-sklearn.py
PERMISSIONS OWNER_READ OWNER_EXECUTE GROUP_READ GROUP_EXECUTE WORLD_READ WORLD_EXECUTE)
endif()
install(FILES schema/error_event_schema.json schema/daemon_event_schema.json
schema/flow_event_schema.json schema/packet_event_schema.json DESTINATION share/nDPId/json-schema)
message(STATUS "--------------------------")
message(STATUS "nDPId GIT_VERSION........: ${GIT_VERSION}")
message(STATUS "Cross Compilation........: ${CMAKE_CROSSCOMPILING}")
message(STATUS "CMAKE_SYSTEM_NAME........: ${CMAKE_SYSTEM_NAME}")
message(STATUS "CMAKE_SYSTEM_PROCESSOR...: ${CMAKE_SYSTEM_PROCESSOR}")
message(STATUS "CMAKE_BUILD_TYPE.........: ${CMAKE_BUILD_TYPE}")
message(STATUS "CMAKE_C_FLAGS............: ${CMAKE_C_FLAGS}")
if(ENABLE_MEMORY_PROFILING)
message(STATUS "MEMORY_PROFILING_CFLAGS..: ${MEMORY_PROFILING_CFLAGS}")
endif()
string(REPLACE ";" " " PRETTY_NDPID_DEFS "${NDPID_DEFS}")
message(STATUS "NDPID_DEFS...............: ${PRETTY_NDPID_DEFS}")
message(STATUS "ENABLE_COVERAGE..........: ${ENABLE_COVERAGE}")
message(STATUS "ENABLE_SANITIZER.........: ${ENABLE_SANITIZER}")
message(STATUS "ENABLE_SANITIZER_THREAD..: ${ENABLE_SANITIZER_THREAD}")
message(STATUS "ENABLE_MEMORY_PROFILING..: ${ENABLE_MEMORY_PROFILING}")
message(STATUS "ENABLE_ZLIB..............: ${ENABLE_ZLIB}")
message(STATUS "ENABLE_SYSTEMD...........: ${ENABLE_SYSTEMD}")
message(STATUS "ENABLE_GNUTLS............: ${ENABLE_GNUTLS}")
if(STATIC_LIBNDPI_INSTALLDIR)
message(STATUS "STATIC_LIBNDPI_INSTALLDIR: ${STATIC_LIBNDPI_INSTALLDIR}")
endif()
message(STATUS "BUILD_NDPI...............: ${BUILD_NDPI}")
if(BUILD_NDPI)
message(STATUS "NDPI_ADDITIONAL_ARGS.....: ${NDPI_ADDITIONAL_ARGS}")
endif()
message(STATUS "NDPI_NO_PKGCONFIG........: ${NDPI_NO_PKGCONFIG}")
if(STATIC_LIBNDPI_INSTALLDIR OR BUILD_NDPI OR NDPI_NO_PKGCONFIG)
message(STATUS "- STATIC_LIBNDPI_INC....: ${STATIC_LIBNDPI_INC}")
message(STATUS "- STATIC_LIBNDPI_LIB....: ${STATIC_LIBNDPI_LIB}")
message(STATUS "- NDPI_WITH_GCRYPT......: ${NDPI_WITH_GCRYPT}")
message(STATUS "- NDPI_WITH_PCRE........: ${NDPI_WITH_PCRE}")
message(STATUS "- NDPI_WITH_MAXMINDDB...: ${NDPI_WITH_MAXMINDDB}")
endif()
if(NOT STATIC_LIBNDPI_INSTALLDIR AND NOT BUILD_NDPI)
message(STATUS "- DEFAULT_NDPI_INCLUDE..: ${DEFAULT_NDPI_INCLUDE}")
endif()
if(NOT NDPI_NO_PKGCONFIG)
message(STATUS "- pkgcfg_lib_NDPI_ndpi..: ${pkgcfg_lib_NDPI_ndpi}")
endif()
message(STATUS "--------------------------")
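The build options above can be exercised with a plain out-of-tree configure; a minimal sketch (option names come from this CMakeLists.txt, values and the `-j` parallelism are assumptions to adapt):

```shell
# Sketch: configure and build nDPId with the vendored libnDPI
# (BUILD_NDPI=ON drives the ExternalProject step above) and the
# C/Python example programs installed via BUILD_EXAMPLES=ON.
mkdir -p build && cd build
cmake .. \
    -DBUILD_NDPI=ON \
    -DBUILD_EXAMPLES=ON \
    -DENABLE_ZLIB=ON \
    -DNDPI_WITH_PCRE=OFF
cmake --build . -- -j"$(nproc)"
```

Note that ENABLE_SANITIZER and ENABLE_SANITIZER_THREAD are mutually exclusive; passing both makes the configure step fail with a FATAL_ERROR.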

COPYING (new file)
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<http://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<http://www.gnu.org/philosophy/why-not-lgpl.html>.

Dockerfile Normal file

@@ -0,0 +1,25 @@
FROM ubuntu:22.10 as builder
WORKDIR /root
RUN apt-get -y update && apt-get install -y git cmake pkg-config libpcap-dev autoconf libtool
RUN git clone https://github.com/utoni/nDPId.git
#for dev, uncomment below
#RUN mkdir /root/nDPId
#COPY . /root/nDPId/
RUN cd nDPId && mkdir build && cd build && cmake .. -DBUILD_NDPI=ON && make
FROM ubuntu:22.10
WORKDIR /root
RUN apt-get -y update && apt-get -y install libpcap-dev
COPY --from=builder /root/nDPId/libnDPI/ /root/
COPY --from=builder /root/nDPId/build/nDPIsrvd /root/nDPId/build/nDPId /root/
#RUN echo "#!/bin/bash\n" \
# "/root/nDPIsrvd -d\n"\
# "/root/nDPId \n" > run.sh && cat run.sh && chmod +x run.sh
#ENTRYPOINT ["/root/run.sh"]

README.md

@@ -1,67 +1,178 @@
# abstract
[![Build](https://github.com/utoni/nDPId/actions/workflows/build.yml/badge.svg)](https://github.com/utoni/nDPId/actions/workflows/build.yml)
[![Gitlab-CI](https://gitlab.com/utoni/nDPId/badges/main/pipeline.svg)](https://gitlab.com/utoni/nDPId/-/pipelines)
nDPId is a set of daemons and tools to capture, process and classify network flows.
Its only dependencies (besides a halfway modern C library and POSIX threads) are libnDPI (>= 3.6.0 or the current GitHub dev branch) and libpcap.
# References
The core daemon nDPId uses pthreads but does not use mutexes for performance reasons.
[ntop Webinar 2022](https://www.ntop.org/webinar/ntop-webinar-on-dec-14th-community-meeting-and-future-plans/)
# Disclaimer
Please respect & protect the privacy of others.
The purpose of this software is not to spy on others, but to detect network anomalies and malicious traffic.
# Abstract
nDPId is a set of daemons and tools to capture, process and classify network traffic.
Its minimal dependencies (besides a halfway modern C library and POSIX threads) are libnDPI (> 4.4.0 or the current GitHub dev branch) and libpcap.
The daemon `nDPId` is capable of multithreaded packet processing, but, for performance reasons, without mutexes.
Instead, synchronization is achieved by a packet distribution mechanism.
To balance all workload to all threads (more or less) equally a hash value is calculated using the 5-tuple.
This value serves as unique identifier for the processing thread. Multithreaded packet processing has to be flow-stable.
To balance the workload (more or less) equally across all threads, a unique identifier represented as a hash value is calculated over a 3-tuple consisting of the IPv4/IPv6 source/destination addresses, the layer 4 protocol from the IP header and (for TCP/UDP) the source/destination ports. Other protocols, e.g. ICMP/ICMPv6, lack relevance for DPI, so nDPId does not distinguish between different ICMP/ICMPv6 flows coming from the same host. This saves memory and CPU time, but might change in the future.
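The flow-stable distribution described above can be sketched roughly as follows (a minimal illustration with a hypothetical helper function, not nDPId's actual hash):

```python
# Minimal sketch of flow-stable thread distribution (hypothetical helper,
# not nDPId's actual hash function): both directions of a flow must always
# map to the same processing thread.

def thread_for_flow(src_ip: str, dst_ip: str, l4_proto: int,
                    src_port: int = 0, dst_port: int = 0,
                    n_threads: int = 4) -> int:
    # Order the endpoints so that A->B and B->A hash identically.
    a, b = (src_ip, src_port), (dst_ip, dst_port)
    lo, hi = (a, b) if a <= b else (b, a)
    return hash((lo, hi, l4_proto)) % n_threads

# Both directions of one TCP flow land on the same thread:
assert thread_for_flow("10.0.0.1", "10.0.0.2", 6, 12345, 443) == \
       thread_for_flow("10.0.0.2", "10.0.0.1", 6, 443, 12345)
```

Since the thread index is derived only from flow identity, no locking is needed when each thread owns its own flow table.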
nDPId uses libnDPI's JSON serialization to produce meaningful JSON output which it then sends to the nDPIsrvd for distribution.
High level applications can connect to nDPIsrvd to get the latest flow/packet events from nDPId.
`nDPId` uses libnDPI's JSON serialization interface to generate a JSON string for each event it receives from the library, which it then sends out to a UNIX socket (default: `/tmp/ndpid-collector.sock`). From such a socket, `nDPIsrvd` (or other custom applications) can retrieve the incoming JSON messages and distribute them to higher-level applications.
Unfortunately nDPIsrvd does currently not support any encryption/authentication for TCP connections.
Unfortunately, `nDPIsrvd` currently does not support any encryption/authentication for TCP connections (TODO!).
# architecture
# Architecture
This project uses some kind of microservice architecture.
```text
_______________________ __________________________
| "producer" | | "consumer" |
connect to UNIX socket [1] connect to UNIX/TCP socket [2]
_______________________ | | __________________________
| "producer" |___| |___| "consumer" |
|---------------------| _____________________________ |------------------------|
| | | nDPIsrvd | | |
| nDPId --- Thread 1 >| ---> |> | <| <--- |< example/c-json-stdout |
| (eth0) `- Thread 2 >| ---> |> collector | distributor <| <--- |________________________|
| `- Thread N >| ---> |> >>> forward >>> <| <--- | |
| nDPId --- Thread 1 >| ---> |> | <| ---> |< example/c-json-stdout |
| (eth0) `- Thread 2 >| ---> |> collector | distributor <| ---> |________________________|
| `- Thread N >| ---> |> >>> forward >>> <| ---> | |
|_____________________| ^ |____________|______________| ^ |< example/py-flow-info |
| | | | |________________________|
| nDPId --- Thread 1 >| `- connect to UNIX socket | | |
| (eth1) `- Thread 2 >| `- sends serialized data | |< example/... |
| `- Thread N >| | |________________________|
|_____________________| |
`- connect to UNIX/TCP socket
`- receives serialized data
| nDPId --- Thread 1 >| `- send serialized data [1] | | |
| (eth1) `- Thread 2 >| | |< example/... |
| `- Thread N >| receive serialized data [2] -' |________________________|
|_____________________|
```
where:
The project doesn't strictly follow a producer/consumer design pattern, so the wording above is not precise.
* `nDPId` captures traffic, extracts traffic data (with libnDPI) and sends a JSON-serialized output stream to an already existing UNIX socket;
* `nDPIsrvd`:
# JSON TCP protocol
* create and manage an "incoming" UNIX socket (ref [1] above) to fetch data from a local `nDPId`;
* apply buffering logic to the received data;
* create and manage an "outgoing" UNIX or TCP socket (ref [2] above) to relay matched events to connected clients;
* `consumers` are common/custom applications able to receive selected flows/events via either a UNIX or a TCP socket.
# JSON stream format
JSON messages streamed by both `nDPId` and `nDPIsrvd` are framed with:
* a 5-digit decimal number giving the length of the **entire** JSON string, including the newline `\n` at the end;
* the JSON message itself
All JSON strings sent need to be in the following format:
```text
[5-digit-number][JSON string]
```
## Example:
as with the following example:
```text
00015{"key":"value"}
01223{"flow_event_id":7,"flow_event_name":"detection-update","thread_id":12,"packet_id":307,"source":"wlan0",[...]}
00458{"packet_event_id":2,"packet_event_name":"packet-flow","thread_id":11,"packet_id":324,"source":"wlan0",[...]]}
00572{"flow_event_id":1,"flow_event_name":"new","thread_id":11,"packet_id":324,"source":"wlan0",[...]}
```
where `00015` describes the length of a **complete** JSON string.
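A minimal reader for this framing might look like the following sketch (it assumes a `bytes` buffer already read from the collector/distributor socket; this is an illustration, not the `nDPIsrvd.py` implementation):

```python
import json

def parse_stream(buf: bytes):
    """Split a nDPId/nDPIsrvd byte stream into JSON messages.

    Each message is framed as a 5-digit decimal length prefix followed by
    the JSON string itself (the length covers the trailing newline, too).
    Returns the parsed messages plus any unconsumed remainder.
    """
    msgs = []
    while len(buf) >= 5:
        length = int(buf[:5].decode("ascii"))  # e.g. b"00016" -> 16
        if len(buf) < 5 + length:
            break  # incomplete message: wait for more data from the socket
        msgs.append(json.loads(buf[5:5 + length]))
        buf = buf[5 + length:]
    return msgs, buf

msgs, rest = parse_stream(b'00016{"key":"value"}\n')
assert msgs == [{"key": "value"}] and rest == b""
```

Keeping the unconsumed remainder is important, since TCP/UNIX stream reads may end in the middle of a message.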
TODO: Describe data format via JSON schema.
The full stream of `nDPId`-generated JSON events can be retrieved directly from `nDPId`, without relying on `nDPIsrvd`, by providing a properly managed UNIX socket.
# build (CMake)
Technical details about the JSON message format can be obtained from the related `.schema` files included in the `schema` directory.
# Events
`nDPId` generates JSON strings, where each string is assigned to a certain event.
Those events specify the contents (key-value pairs) of the JSON string.
They are divided into four categories, each with a number of subevents.
## Error Events
There are 17 distinct events, indicating that layer 2 or layer 3 packet processing failed or that not enough flow memory was available:
1. Unknown datalink layer packet
2. Unknown L3 protocol
3. Unsupported datalink layer
4. Packet too short
5. Unknown packet type
6. Packet header invalid
7. IP4 packet too short
8. Packet smaller than IP4 header
9. nDPI IPv4/L4 payload detection failed
10. IP6 packet too short
11. Packet smaller than IP6 header
12. nDPI IPv6/L4 payload detection failed
13. TCP packet smaller than expected
14. UDP packet smaller than expected
15. Captured packet size is smaller than expected packet size
16. Max flows to track reached
17. Flow memory allocation failed
Detailed JSON-schema is available [here](schema/error_event_schema.json)
## Daemon Events
There are 4 distinct events indicating startup/shutdown or status, as well as a reconnect event after a previous (collector) connection failure:
1. init: `nDPId` startup
2. reconnect: a previously lost (UNIX) socket connection was established again
3. shutdown: `nDPId` terminates gracefully
4. status: statistics about the daemon itself, e.g. memory consumption, zLib compression (if enabled)
Detailed JSON-schema is available [here](schema/daemon_event_schema.json)
## Packet Events
There are 2 events containing base64-encoded packet payload, depending on whether the packet belongs to a flow:
1. packet: does not belong to any flow
2. packet-flow: belongs to a flow, e.g. TCP/UDP or ICMP
Detailed JSON-schema is available [here](schema/packet_event_schema.json)
## Flow Events
There are 9 distinct events related to a flow:
1. new: a new TCP/UDP/ICMP flow was seen and will be tracked
2. end: a TCP connection terminated
3. idle: a flow timed out, because there was no packet on the wire for a certain amount of time
4. update: inform nDPIsrvd or other apps about a long-lasting flow, whose detection finished a long time ago but which is still active
5. analyse: provide some information about extracted features of a flow (experimental; disabled per default, enable with `-A`)
6. guessed: `libnDPI` was not able to reliably detect a layer 7 protocol and falls back to IP/port based detection
7. detected: `libnDPI` successfully detected a layer 7 protocol
8. detection-update: `libnDPI` dissected more layer 7 protocol data (after detection was already done)
9. not-detected: neither detected nor guessed
Detailed JSON-schema is available [here](schema/flow_event_schema.json). Also, a graphical representation of *Flow Events* timeline is available [here](schema/flow_events_diagram.png).
# Flow States
A flow can have three different states while it is being tracked by `nDPId`.
1. skipped: the flow will be tracked, but no detection will happen, to save memory; see command line arguments `-I` and `-E`
2. finished: detection finished and the memory used for the detection is freed
3. info: detection is in progress and all flow memory required for `libnDPI` is allocated (this state consumes the most memory)
# Build (CMake)
`nDPId` build system is based on [CMake](https://cmake.org/)
```shell
git clone https://github.com/utoni/nDPId.git
[...]
cd nDPId
mkdir build
cd build
cmake ..
[...]
make
```
or
see below for a full/test live-session
![](examples/ndpid_install_and_run.gif)
Depending on your build environment and/or needs, you may require:
```shell
mkdir build
@@ -88,7 +199,7 @@ cd build
cmake .. -DSTATIC_LIBNDPI_INSTALLDIR=[path/to/your/libnDPI/installdir] -DNDPI_WITH_GCRYPT=ON -DNDPI_WITH_PCRE=OFF -DNDPI_WITH_MAXMINDDB=OFF
```
Or if this is all too much for you, let CMake do it for you:
Or let a shell script do the work for you:
```shell
mkdir build
@@ -96,14 +207,48 @@ cd build
cmake .. -DBUILD_NDPI=ON
```
The CMake cache variable `-DBUILD_NDPI=ON` builds a version of `libnDPI` residing as a git submodule in this repository.
# run
Generate a nDPId compatible JSON dump:
As mentioned above, in order to run `nDPId`, a UNIX socket needs to be provided to which it can stream its JSON data.
Such a UNIX socket can be provided either by the included `nDPIsrvd` daemon or, if you simply need a quick check, by the [ncat](https://nmap.org/book/ncat-man.html) utility, with a simple `ncat -U /tmp/listen.sock -l -k`. Remember that OpenBSD `netcat` is not able to handle multiple connections reliably.
Once the socket is ready, you can run `nDPId` capturing and analyzing your own traffic, with something similar to:
Of course, both `ncat` and `nDPId` need to point to the same UNIX socket (`nDPId` provides the `-c` option exactly for this; by default, `nDPId` refers to `/tmp/ndpid-collector.sock`, and the same default path is also used by `nDPIsrvd` for the incoming socket).
You also need to feed `nDPId` some real traffic. You can capture your own traffic with something similar to:
```shell
socat -u UNIX-Listen:/tmp/listen.sock,fork - # does the same as `ncat`
sudo chown nobody:nobody /tmp/listen.sock # default `nDPId` user/group, see `-u` and `-g`
sudo ./nDPId -c /tmp/listen.sock -l
```
`nDPId` also supports UDP collector endpoints:
```shell
nc -d -u 127.0.0.1 7000 -l -k
sudo ./nDPId -c 127.0.0.1:7000 -l
```
or you can generate a nDPId-compatible JSON dump with:
```shell
./nDPId-test [path-to-a-PCAP-file]
```
You can also fire up both `nDPId` and `nDPIsrvd` automatically with:
Daemons:
```shell
make -C [path-to-a-build-dir] daemon
```
Or you can proceed with a manual approach with:
```shell
./nDPIsrvd -d
sudo ./nDPId -d
@@ -127,21 +272,56 @@ or
or anything below `./examples`.
# nDPId tuning
It is possible to change `nDPId` internals w/o recompiling by using `-o subopt=value`.
But be careful: changing the default values may render `nDPId` useless and is not well tested.
Suboptions for `-o`:
Format: `subopt` (unit, comment): description
* `max-flows-per-thread` (N, caution advised): affects max. memory usage
* `max-idle-flows-per-thread` (N, safe): max. allowed idle flows whose memory gets freed after `flow-scan-interval`
* `max-reader-threads` (N, safe): number of packet processing threads; every thread can have a max. of `max-flows-per-thread` flows
* `daemon-status-interval` (ms, safe): specifies how often daemon event `status` will be generated
* `compression-scan-interval` (ms, untested): specifies how often `nDPId` should scan for inactive flows ready for compression
* `compression-flow-inactivity` (ms, untested): the earliest period of time that must elapse before `nDPId` may consider compressing a flow that did neither send nor receive any data
* `flow-scan-interval` (ms, safe): min. amount of time after which `nDPId` will scan for idle or long-lasting flows
* `generic-max-idle-time` (ms, untested): time after which a non TCP/UDP/ICMP flow will time out
* `icmp-max-idle-time` (ms, untested): time after which an ICMP flow will time out
* `udp-max-idle-time` (ms, caution advised): time after which a UDP flow will time out
* `tcp-max-idle-time` (ms, caution advised): time after which a TCP flow will time out
* `tcp-max-post-end-flow-time` (ms, caution advised): a TCP flow that received a FIN or RST will wait that amount of time before flow tracking will be stopped and the flow memory free'd
* `max-packets-per-flow-to-send` (N, safe): max. `packet-flow` events that will be generated for the first N packets of each flow
* `max-packets-per-flow-to-process` (N, caution advised): max. packets that will be processed by `libnDPI`
* `max-packets-per-flow-to-analyze` (N, safe): max. packets to analyze before sending an `analyse` event, requires `-A`
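For example, a tuned invocation could look like the following sketch (the values are purely illustrative, not recommendations; note that `max-packets-per-flow-to-analyze` only takes effect together with `-A`):

```shell
# Illustrative only: the defaults are generally sane, see warnings above.
sudo ./nDPId -c /tmp/listen.sock -l -A \
     -o max-reader-threads=4 \
     -o flow-scan-interval=10000 \
     -o max-packets-per-flow-to-analyze=32
```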
# test
You may want to run some integration tests using pcap files from nDPI:
The recommended way to run integration / diff tests:
`./test/run_tests.sh /path/to/libnDPI/root/directory`
```shell
mkdir build
cd build
cmake .. -DBUILD_NDPI=ON
make nDPId-test test
```
Alternatively you can run some integration tests manually:
`./test/run_tests.sh [/path/to/libnDPI/root/directory] [/path/to/nDPId-test]`
e.g.:
`./test/run_tests.sh ${HOME}/git/nDPI`
`./test/run_tests.sh [${HOME}/git/nDPI] [${HOME}/git/nDPId/build/nDPId-test]`
Remember that all test results are tied to a specific libnDPI commit hash
as part of the `git submodule`. Using `test/run_tests.sh` for other commit hashes
will most likely result in PCAP diffs.
For out-of-source builds, you'll need to specify a path to nDPId-test as well with:
Why not use `examples/py-flow-dashboard/flow-dash.py` to visualize nDPId's output?
`./test/run_tests.sh /path/to/libnDPI/root/directory /path/to/nDPId-test-executable`
# Contributors
For in-source builds and if CMake was configured with BUILD_NDPI=ON you can just type:
`./test/run_tests.sh`
Special thanks to Damiano Verzulli ([@verzulli](https://github.com/verzulli)) from [GARRLab](https://www.garrlab.it) for providing server and test infrastructure.


@@ -1,7 +1,5 @@
 # TODOs
-1. unify `struct io_buffer` from nDPIsrvd.c and `struct nDPIsrvd_buffer` from nDPIsrvd.h
-2. improve nDPIsrvd buffer bloat handling (Do not fall back to blocking mode!)
-3. improve UDP/TCP timeout handling by reading netfilter conntrack timeouts from /proc
-4. detect interface / timeout changes and apply them to nDPId
-5. implement AEAD crypto via libsodium (at least for TCP communication)
+1. improve UDP/TCP timeout handling by reading netfilter conntrack timeouts from /proc (or just read conntrack table entries)
+2. detect interface / timeout changes and apply them to nDPId
+3. implement AEAD crypto via libsodium (at least for TCP communication)


@@ -2,6 +2,7 @@
#define CONFIG_H 1
/* macros shared across multiple executables */
#define DEFAULT_CHUSER "nobody"
#define COLLECTOR_UNIX_SOCKET "/tmp/ndpid-collector.sock"
#define DISTRIBUTOR_UNIX_SOCKET "/tmp/ndpid-distributor.sock"
#define DISTRIBUTOR_HOST "127.0.0.1"
@@ -11,25 +12,42 @@
* NOTE: Buffer size needs to keep in sync with other implementations
* e.g. dependencies/nDPIsrvd.py
*/
#define NETWORK_BUFFER_MAX_SIZE 12288u /* 8192 + 4096 */
#define NETWORK_BUFFER_MAX_SIZE 33792u /* 8192 + 8192 + 8192 + 8192 + 1024 */
#define NETWORK_BUFFER_LENGTH_DIGITS 5u
#define NETWORK_BUFFER_LENGTH_DIGITS_STR "5"
#define TIME_S_TO_US(s) (s * 1000u * 1000u)
/* nDPId default config options */
#define nDPId_PIDFILE "/tmp/ndpid.pid"
#define nDPId_MAX_FLOWS_PER_THREAD 4096u
#define nDPId_MAX_IDLE_FLOWS_PER_THREAD 512u
#define nDPId_TICK_RESOLUTION 1000u
#define nDPId_MAX_IDLE_FLOWS_PER_THREAD (nDPId_MAX_FLOWS_PER_THREAD / 32u)
#define nDPId_MAX_READER_THREADS 32u
#define nDPId_IDLE_SCAN_PERIOD 10000u /* 10 sec */
#define nDPId_IDLE_TIME 600000u /* 600 sec */
#define nDPId_TCP_POST_END_FLOW_TIME 60000u /* 60 sec */
#define nDPId_ERROR_EVENT_THRESHOLD_N 16u
#define nDPId_ERROR_EVENT_THRESHOLD_TIME TIME_S_TO_US(10u) /* 10 sec */
#define nDPId_DAEMON_STATUS_INTERVAL TIME_S_TO_US(600u) /* 600 sec */
#define nDPId_MEMORY_PROFILING_LOG_INTERVAL TIME_S_TO_US(5u) /* 5 sec */
#define nDPId_COMPRESSION_SCAN_INTERVAL TIME_S_TO_US(20u) /* 20 sec */
#define nDPId_COMPRESSION_FLOW_INACTIVITY TIME_S_TO_US(30u) /* 30 sec */
#define nDPId_FLOW_SCAN_INTERVAL TIME_S_TO_US(10u) /* 10 sec */
#define nDPId_GENERIC_IDLE_TIME TIME_S_TO_US(600u) /* 600 sec */
#define nDPId_ICMP_IDLE_TIME TIME_S_TO_US(120u) /* 120 sec */
#define nDPId_TCP_IDLE_TIME TIME_S_TO_US(7440u) /* 7440 sec */
#define nDPId_UDP_IDLE_TIME TIME_S_TO_US(180u) /* 180 sec */
#define nDPId_TCP_POST_END_FLOW_TIME TIME_S_TO_US(120u) /* 120 sec */
#define nDPId_THREAD_DISTRIBUTION_SEED 0x03dd018b
#define nDPId_PACKETS_PLEN_MAX 8192u /* 8kB */
#define nDPId_PACKETS_PER_FLOW_TO_SEND 15u
#define nDPId_PACKETS_PER_FLOW_TO_PROCESS 255u
#define nDPId_PACKETS_PER_FLOW_TO_PROCESS NDPI_DEFAULT_MAX_NUM_PKTS_PER_FLOW_TO_DISSECT
#define nDPId_PACKETS_PER_FLOW_TO_ANALYZE 32u
#define nDPId_ANALYZE_PLEN_MAX 1504u
#define nDPId_ANALYZE_PLEN_BIN_LEN 32u
#define nDPId_ANALYZE_PLEN_NUM_BINS 48u
#define nDPId_FLOW_STRUCT_SEED 0x5defc104
/* nDPIsrvd default config options */
#define nDPIsrvd_PIDFILE "/tmp/ndpisrvd.pid"
#define nDPIsrvd_MAX_REMOTE_DESCRIPTORS 128
#define nDPIsrvd_MAX_WRITE_BUFFERS 1024
#endif
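The `NETWORK_BUFFER_*` constants above imply nDPIsrvd's wire format: every JSON message is prefixed with its byte length, encoded in at most 5 (`NETWORK_BUFFER_LENGTH_DIGITS`) decimal digits. A minimal Python sketch of a compatible framer/de-framer (illustrative only; the real implementation lives in `dependencies/nDPIsrvd.py`):

```python
import json

NETWORK_BUFFER_LENGTH_DIGITS = 5  # keep in sync with config.h

def frame(payload: bytes) -> bytes:
    """Prefix a JSON payload with its decimal length, as nDPIsrvd does."""
    assert len(payload) < 10 ** NETWORK_BUFFER_LENGTH_DIGITS
    return str(len(payload)).encode('ascii') + payload

def deframe(buf: bytes) -> list:
    """Split a buffer of length-prefixed JSON messages into dicts.
    The length digits end where the JSON object ('{') begins."""
    msgs = []
    while buf:
        digits = b''
        while buf[:1].isdigit() and len(digits) < NETWORK_BUFFER_LENGTH_DIGITS:
            digits += buf[:1]
            buf = buf[1:]
        msglen = int(digits)
        msgs.append(json.loads(buf[:msglen].decode('ascii')))
        buf = buf[msglen:]
    return msgs
```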


@@ -89,7 +89,7 @@ jsmn_parser p;
jsmntok_t t[128]; /* We expect no more than 128 JSON tokens */
jsmn_init(&p);
r = jsmn_parse(&p, s, strlen(s), t, 128);
r = jsmn_parse(&p, s, strlen(s), t, 128); // "s" is the char array holding the json content
```
Since jsmn is a single-header, header-only library, for more complex use cases
@@ -113,10 +113,10 @@ Token types are described by `jsmntype_t`:
typedef enum {
JSMN_UNDEFINED = 0,
JSMN_OBJECT = 1,
JSMN_ARRAY = 2,
JSMN_STRING = 3,
JSMN_PRIMITIVE = 4
JSMN_OBJECT = 1 << 0,
JSMN_ARRAY = 1 << 1,
JSMN_STRING = 1 << 2,
JSMN_PRIMITIVE = 1 << 3
} jsmntype_t;
**Note:** Unlike JSON data types, primitive tokens are not divided into numbers, booleans and null; the concrete type can be told from the first character of the token's value.
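The diff above changes the token types from sequential values to powers of two; a plausible motivation is that bit flags let callers test membership in several types with a single mask. A small illustrative model (in Python, not jsmn itself; `is_scalar` is a name of my choosing):

```python
# The token types from the diff above, as bit flags rather than
# sequential values.
JSMN_UNDEFINED = 0
JSMN_OBJECT    = 1 << 0
JSMN_ARRAY     = 1 << 1
JSMN_STRING    = 1 << 2
JSMN_PRIMITIVE = 1 << 3

def is_scalar(token_type: int) -> bool:
    """One mask test covers several token types at once, which the old
    sequential values (1, 2, 3, 4) could not express."""
    return (token_type & (JSMN_STRING | JSMN_PRIMITIVE)) != 0
```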


@@ -45,10 +45,10 @@ extern "C" {
*/
typedef enum {
JSMN_UNDEFINED = 0,
JSMN_OBJECT = 1,
JSMN_ARRAY = 2,
JSMN_STRING = 3,
JSMN_PRIMITIVE = 4
JSMN_OBJECT = 1 << 0,
JSMN_ARRAY = 1 << 1,
JSMN_STRING = 1 << 2,
JSMN_PRIMITIVE = 1 << 3
} jsmntype_t;
enum jsmnerr {

dependencies/nDPIsrvd.h (vendored): file diff suppressed because it is too large (1351 changes).


@@ -2,7 +2,6 @@
import argparse
import array
import base64
import json
import re
import os
@@ -17,29 +16,25 @@ except ImportError:
sys.stderr.write('Python module colorama not found, using fallback.\n')
USE_COLORAMA=False
try:
import scapy.all
except ImportError:
sys.stderr.write('Python module scapy not found, PCAP generation will fail!\n')
DEFAULT_HOST = '127.0.0.1'
DEFAULT_PORT = 7000
DEFAULT_UNIX = '/tmp/ndpid-distributor.sock'
NETWORK_BUFFER_MIN_SIZE = 6 # NETWORK_BUFFER_LENGTH_DIGITS + 1
NETWORK_BUFFER_MAX_SIZE = 12288 # Please keep this value in sync with the one in config.h
NETWORK_BUFFER_MAX_SIZE = 33792 # Please keep this value in sync with the one in config.h
nDPId_PACKETS_PLEN_MAX = 8192 # Please keep this value in sync with the one in config.h
PKT_TYPE_ETH_IP4 = 0x0800
PKT_TYPE_ETH_IP6 = 0x86DD
class TermColor:
HINT = '\033[33m'
HINT = '\033[33m'
WARNING = '\033[93m'
FAIL = '\033[91m'
BOLD = '\033[1m'
END = '\033[0m'
BLINK = '\x1b[5m'
FAIL = '\033[91m'
BOLD = '\033[1m'
END = '\033[0m'
BLINK = '\x1b[5m'
if USE_COLORAMA is True:
COLOR_TUPLES = [ (Fore.BLUE, [Back.RED, Back.MAGENTA, Back.WHITE]),
@@ -57,6 +52,17 @@ class TermColor:
(Fore.LIGHTWHITE_EX, [Back.LIGHTBLACK_EX, Back.BLACK]),
(Fore.LIGHTYELLOW_EX, [Back.LIGHTRED_EX, Back.RED]) ]
@staticmethod
def disableColor():
TermColor.HINT = ''
TermColor.WARNING = ''
TermColor.FAIL = ''
TermColor.BOLD = ''
TermColor.END = ''
TermColor.BLINK = ''
global USE_COLORAMA
USE_COLORAMA = False
@staticmethod
def calcColorHash(string):
h = 0
@@ -74,6 +80,7 @@ class TermColor:
@staticmethod
def setColorByString(string):
global USE_COLORAMA
if USE_COLORAMA is True:
fg_color, bg_color = TermColor.getColorsByHash(string)
color_hash = TermColor.calcColorHash(string)
@@ -81,37 +88,206 @@ class TermColor:
else:
return '{}{}{}'.format(TermColor.BOLD, string, TermColor.END)
class ThreadData:
pass
class Instance:
def __init__(self, alias, source):
self.alias = str(alias)
self.source = str(source)
self.flows = dict()
self.thread_data = dict()
def __str__(self):
return '<%s.%s object at %s with alias %s, source %s>' % (
self.__class__.__module__,
self.__class__.__name__,
hex(id(self)),
self.alias,
self.source
)
def getThreadData(self, thread_id):
if thread_id not in self.thread_data:
return None
return self.thread_data[thread_id]
def getThreadDataFromJSON(self, json_dict):
if 'thread_id' not in json_dict:
return None
return self.getThreadData(json_dict['thread_id'])
def getMostRecentFlowTime(self, thread_id):
return self.thread_data[thread_id].most_recent_flow_time
def setMostRecentFlowTime(self, thread_id, most_recent_flow_time):
if thread_id in self.thread_data:
return self.thread_data[thread_id]
self.thread_data[thread_id] = ThreadData()
self.thread_data[thread_id].most_recent_flow_time = most_recent_flow_time
return self.thread_data[thread_id]
def getMostRecentFlowTimeFromJSON(self, json_dict):
if 'thread_id' not in json_dict:
return 0
return self.getThreadData(json_dict['thread_id']).most_recent_flow_time
def setMostRecentFlowTimeFromJSON(self, json_dict):
if 'thread_id' not in json_dict:
return
thread_id = json_dict['thread_id']
if 'thread_ts_usec' in json_dict:
mrtf = self.getMostRecentFlowTime(thread_id) if thread_id in self.thread_data else 0
self.setMostRecentFlowTime(thread_id, max(json_dict['thread_ts_usec'], mrtf))
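The `Instance` bookkeeping above keeps, per capture thread, the highest `thread_ts_usec` seen so far (a monotonic high-water mark). A condensed sketch of that update rule (function and variable names are mine):

```python
def update_most_recent(thread_times: dict, thread_id: int, ts_usec: int) -> int:
    """Track the per-thread high-water timestamp, as Instance does:
    take the max of the stored value and the incoming one."""
    thread_times[thread_id] = max(ts_usec, thread_times.get(thread_id, 0))
    return thread_times[thread_id]
```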
class Flow:
flow_id = -1
def __init__(self, flow_id, thread_id):
self.flow_id = flow_id
self.thread_id = thread_id
self.flow_last_seen = -1
self.flow_idle_time = -1
self.cleanup_reason = -1
def __str__(self):
return '<%s.%s object at %s with flow id %d>' % (
self.__class__.__module__,
self.__class__.__name__,
hex(id(self)),
self.flow_id
)
class FlowManager:
def __init__(self):
self.__flows = dict()
CLEANUP_REASON_INVALID = 0
CLEANUP_REASON_DAEMON_INIT = 1 # can happen if kill -SIGKILL $(pidof nDPId) or restart after SIGSEGV
CLEANUP_REASON_DAEMON_SHUTDOWN = 2 # graceful shutdown e.g. kill -SIGTERM $(pidof nDPId)
CLEANUP_REASON_FLOW_END = 3
CLEANUP_REASON_FLOW_IDLE = 4
CLEANUP_REASON_FLOW_TIMEOUT = 5 # nDPId died a long time ago w/o restart?
CLEANUP_REASON_APP_SHUTDOWN = 6 # your python app called FlowManager.doShutdown()
def __buildFlowKey(self, json_dict):
if 'flow_id' not in json_dict or \
'alias' not in json_dict or \
def __init__(self):
self.instances = dict()
def getInstance(self, json_dict):
if 'alias' not in json_dict or \
'source' not in json_dict:
return None
return str(json_dict['alias']) + str(json_dict['source']) + str(json_dict['flow_id'])
alias = json_dict['alias']
source = json_dict['source']
def getFlow(self, json_dict):
event = json_dict['flow_event_name'].lower() if 'flow_event_name' in json_dict else ''
flow_key = self.__buildFlowKey(json_dict)
flow = None
if alias not in self.instances:
self.instances[alias] = dict()
if source not in self.instances[alias]:
self.instances[alias][source] = dict()
self.instances[alias][source] = Instance(alias, source)
if flow_key is None:
self.instances[alias][source].setMostRecentFlowTimeFromJSON(json_dict)
return self.instances[alias][source]
@staticmethod
def getLastPacketTime(instance, flow_id, json_dict):
return max(int(json_dict['flow_src_last_pkt_time']), int(json_dict['flow_dst_last_pkt_time']), instance.flows[flow_id].flow_last_seen)
def getFlow(self, instance, json_dict):
if 'flow_id' not in json_dict:
return None
if flow_key not in self.__flows:
self.__flows[flow_key] = Flow()
self.__flows[flow_key].flow_id = int(json_dict['flow_id'])
flow = self.__flows[flow_key]
if event == 'end' or event == 'idle':
flow = self.__flows[flow_key]
del self.__flows[flow_key]
return flow
flow_id = int(json_dict['flow_id'])
if flow_id in instance.flows:
instance.flows[flow_id].flow_last_seen = FlowManager.getLastPacketTime(instance, flow_id, json_dict)
instance.flows[flow_id].flow_idle_time = int(json_dict['flow_idle_time'])
return instance.flows[flow_id]
thread_id = int(json_dict['thread_id'])
instance.flows[flow_id] = Flow(flow_id, thread_id)
instance.flows[flow_id].flow_last_seen = FlowManager.getLastPacketTime(instance, flow_id, json_dict)
instance.flows[flow_id].flow_idle_time = int(json_dict['flow_idle_time'])
instance.flows[flow_id].cleanup_reason = FlowManager.CLEANUP_REASON_INVALID
return instance.flows[flow_id]
def getFlowsToCleanup(self, instance, json_dict):
flows = dict()
if 'daemon_event_name' in json_dict:
if json_dict['daemon_event_name'].lower() == 'init' or \
json_dict['daemon_event_name'].lower() == 'shutdown':
# invalidate all existing flows with that alias/source/thread_id
for flow_id in instance.flows:
flow = instance.flows[flow_id]
if flow.thread_id != int(json_dict['thread_id']):
continue
if json_dict['daemon_event_name'].lower() == 'init':
flow.cleanup_reason = FlowManager.CLEANUP_REASON_DAEMON_INIT
else:
flow.cleanup_reason = FlowManager.CLEANUP_REASON_DAEMON_SHUTDOWN
flows[flow_id] = flow
for flow_id in flows:
del instance.flows[flow_id]
if len(instance.flows) == 0:
del self.instances[instance.alias][instance.source]
elif 'flow_event_name' in json_dict and \
(json_dict['flow_event_name'].lower() == 'end' or \
json_dict['flow_event_name'].lower() == 'idle' or \
json_dict['flow_event_name'].lower() == 'guessed' or \
json_dict['flow_event_name'].lower() == 'not-detected' or \
json_dict['flow_event_name'].lower() == 'detected'):
flow_id = json_dict['flow_id']
if json_dict['flow_event_name'].lower() == 'end':
instance.flows[flow_id].cleanup_reason = FlowManager.CLEANUP_REASON_FLOW_END
elif json_dict['flow_event_name'].lower() == 'idle':
instance.flows[flow_id].cleanup_reason = FlowManager.CLEANUP_REASON_FLOW_IDLE
# TODO: Flow Guessing/Detection can happen right before an idle event.
# We need to prevent that it results in a CLEANUP_REASON_FLOW_TIMEOUT.
# This may cause inconsistency and needs to be handled in another way.
if json_dict['flow_event_name'].lower() != 'guessed' and \
json_dict['flow_event_name'].lower() != 'not-detected' and \
json_dict['flow_event_name'].lower() != 'detected':
flows[flow_id] = instance.flows.pop(flow_id)
elif 'flow_last_seen' in json_dict:
if int(json_dict['flow_last_seen']) + int(json_dict['flow_idle_time']) < \
instance.getMostRecentFlowTimeFromJSON(json_dict):
flow_id = json_dict['flow_id']
instance.flows[flow_id].cleanup_reason = FlowManager.CLEANUP_REASON_FLOW_TIMEOUT
flows[flow_id] = instance.flows.pop(flow_id)
return flows
def doShutdown(self):
flows = dict()
for alias in self.instances:
for source in self.instances[alias]:
for flow_id in self.instances[alias][source].flows:
flow = self.instances[alias][source].flows[flow_id]
flow.cleanup_reason = FlowManager.CLEANUP_REASON_APP_SHUTDOWN
flows[flow_id] = flow
del self.instances
return flows
def verifyFlows(self):
invalid_flows = list()
for alias in self.instances:
for source in self.instances[alias]:
for flow_id in self.instances[alias][source].flows:
thread_id = self.instances[alias][source].flows[flow_id].thread_id
if self.instances[alias][source].flows[flow_id].flow_last_seen + \
self.instances[alias][source].flows[flow_id].flow_idle_time < \
self.instances[alias][source].getMostRecentFlowTime(thread_id):
invalid_flows += [flow_id]
return invalid_flows
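The timeout handling in `getFlowsToCleanup()` and `verifyFlows()` reduces to a single predicate over three timestamps; a standalone sketch (the function name is mine, values are illustrative):

```python
def flow_timed_out(flow_last_seen: int, flow_idle_time: int,
                   most_recent_flow_time: int) -> bool:
    """True once a flow's last activity plus its idle allowance lies
    strictly before the thread's most recent timestamp."""
    return flow_last_seen + flow_idle_time < most_recent_flow_time
```

A flow last seen at t=100 with an idle allowance of 50 survives until the thread clock passes t=150.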
class nDPIsrvdException(Exception):
UNSUPPORTED_ADDRESS_TYPE = 1
@@ -119,6 +295,7 @@ class nDPIsrvdException(Exception):
SOCKET_CONNECTION_BROKEN = 3
INVALID_LINE_RECEIVED = 4
CALLBACK_RETURNED_FALSE = 5
SOCKET_TIMEOUT = 6
def __init__(self, etype):
self.etype = etype
@@ -159,10 +336,17 @@ class CallbackReturnedFalse(nDPIsrvdException):
def __str__(self):
return 'Callback returned False, abort.'
class SocketTimeout(nDPIsrvdException):
def __init__(self):
super().__init__(nDPIsrvdException.SOCKET_TIMEOUT)
def __str__(self):
return 'Socket timeout.'
class nDPIsrvdSocket:
def __init__(self):
self.sock_family = None
self.flow_mgr = FlowManager()
self.received_bytes = 0
def connect(self, addr):
if type(addr) is tuple:
@@ -178,6 +362,10 @@ class nDPIsrvdSocket:
self.msglen = 0
self.digitlen = 0
self.lines = []
self.failed_lines = []
def timeout(self, timeout):
self.sock.settimeout(timeout)
def receive(self):
if len(self.buffer) == NETWORK_BUFFER_MAX_SIZE:
@@ -189,6 +377,11 @@ class nDPIsrvdSocket:
except ConnectionResetError:
connection_finished = True
recvd = bytes()
except TimeoutError:
raise SocketTimeout()
except socket.timeout:
raise SocketTimeout()
if len(recvd) == 0:
connection_finished = True
@@ -212,6 +405,7 @@ class nDPIsrvdSocket:
self.lines += [(recvd,self.msglen,self.digitlen)]
new_data_avail = True
self.received_bytes += self.msglen + self.digitlen
self.msglen = 0
self.digitlen = 0
@@ -220,21 +414,46 @@ class nDPIsrvdSocket:
return new_data_avail
def parse(self, callback, global_user_data):
def parse(self, callback_json, callback_flow_cleanup, global_user_data):
retval = True
index = 0
for received_json_line in self.lines:
json_dict = json.loads(received_json_line[0].decode('ascii', errors='replace'), strict=True)
if callback(json_dict, self.flow_mgr.getFlow(json_dict), global_user_data) is not True:
retval = False
break
index += 1
self.lines = self.lines[index:]
for received_line in self.lines:
try:
json_dict = json.loads(received_line[0].decode('ascii', errors='replace'), strict=True)
except json.decoder.JSONDecodeError as e:
json_dict = dict()
self.failed_lines += [received_line]
self.lines = self.lines[1:]
raise(e)
instance = self.flow_mgr.getInstance(json_dict)
if instance is None:
self.failed_lines += [received_line]
retval = False
continue
try:
if callback_json(json_dict, instance, self.flow_mgr.getFlow(instance, json_dict), global_user_data) is not True:
self.failed_lines += [received_line]
retval = False
except Exception as e:
self.failed_lines += [received_line]
self.lines = self.lines[1:]
raise(e)
for _, flow in self.flow_mgr.getFlowsToCleanup(instance, json_dict).items():
if callback_flow_cleanup is None:
pass
elif callback_flow_cleanup(instance, flow, global_user_data) is not True:
self.failed_lines += [received_line]
self.lines = self.lines[1:]
retval = False
self.lines = self.lines[1:]
return retval
def loop(self, callback, global_user_data):
def loop(self, callback_json, callback_flow_cleanup, global_user_data):
throw_ex = None
while True:
@@ -244,130 +463,37 @@ class nDPIsrvdSocket:
except Exception as err:
throw_ex = err
if self.parse(callback, global_user_data) is False:
if self.parse(callback_json, callback_flow_cleanup, global_user_data) is False:
raise CallbackReturnedFalse()
if throw_ex is not None:
raise throw_ex
class PcapPacket:
def __init__(self):
self.pktdump = None
self.flow_id = 0
self.packets = []
self.__suffix = ''
self.__dump = False
self.__dumped = False
def shutdown(self):
return self.flow_mgr.doShutdown().items()
@staticmethod
def isInitialized(current_flow):
return current_flow is not None and hasattr(current_flow, 'pcap_packet')
def verify(self):
if len(self.failed_lines) > 0:
raise nDPIsrvdException('Failed lines > 0: {}'.format(len(self.failed_lines)))
return self.flow_mgr.verifyFlows()
@staticmethod
def handleJSON(json_dict, current_flow):
if 'flow_event_name' in json_dict:
if json_dict['flow_event_name'] == 'new':
current_flow.pcap_packet = PcapPacket()
current_flow.pcap_packet.current_packet = 0
current_flow.pcap_packet.max_packets = json_dict['flow_max_packets']
current_flow.pcap_packet.flow_id = json_dict['flow_id']
elif PcapPacket.isInitialized(current_flow) is not True:
pass
elif json_dict['flow_event_name'] == 'end' or json_dict['flow_event_name'] == 'idle':
try:
current_flow.pcap_packet.fin()
except RuntimeError:
pass
elif PcapPacket.isInitialized(current_flow) is True and \
('packet_event_name' in json_dict and json_dict['packet_event_name'] == 'packet-flow' and current_flow.pcap_packet.flow_id > 0) or \
('packet_event_name' in json_dict and json_dict['packet_event_name'] == 'packet' and 'pkt' in json_dict):
buffer_decoded = base64.b64decode(json_dict['pkt'], validate=True)
current_flow.pcap_packet.packets += [ ( buffer_decoded, json_dict['pkt_type'], json_dict['pkt_l3_offset'] ) ]
current_flow.pcap_packet.current_packet += 1
if current_flow.pcap_packet.current_packet != int(json_dict['flow_packet_id']):
raise RuntimeError('Packet IDs not in sync (local: {}, remote: {}).'.format(current_flow.pcap_packet.current_packet, int(json_dict['flow_packet_id'])))
@staticmethod
def getIp(packet):
if packet[1] == PKT_TYPE_ETH_IP4:
return scapy.all.IP(packet[0][packet[2]:])
elif packet[1] == PKT_TYPE_ETH_IP6:
return scapy.all.IPv6(packet[0][packet[2]:])
else:
raise RuntimeError('packet type unknown: {}'.format(packet[1]))
@staticmethod
def getTCPorUDP(packet):
p = PcapPacket.getIp(packet)
if p.haslayer(scapy.all.TCP):
return p.getlayer(scapy.all.TCP)
elif p.haslayer(scapy.all.UDP):
return p.getlayer(scapy.all.UDP)
else:
return None
def setSuffix(self, filename_suffix):
self.__suffix = filename_suffix
def doDump(self):
self.__dump = True
def fin(self):
if self.__dumped is True:
raise RuntimeError('Flow {} already dumped.'.format(self.flow_id))
if self.__dump is False:
raise RuntimeError('Flow {} should not be dumped.'.format(self.flow_id))
emptyTCPorUDPcount = 0;
for packet in self.packets:
p = PcapPacket.getTCPorUDP(packet)
if p is not None:
if p.haslayer(scapy.all.Padding) and len(p.payload) - len(p[scapy.all.Padding]) == 0:
emptyTCPorUDPcount += 1
elif len(p.payload) == 0:
emptyTCPorUDPcount += 1
if emptyTCPorUDPcount == len(self.packets):
raise RuntimeError('Flow {} does not contain any packets({}) with non-empty layer4 payload.'.format(self.flow_id, len(self.packets)))
if self.pktdump is None:
if self.flow_id == 0:
self.pktdump = scapy.all.PcapWriter('packet-{}.pcap'.format(self.__suffix),
append=True, sync=True)
else:
self.pktdump = scapy.all.PcapWriter('flow-{}-{}.pcap'.format(self.__suffix, self.flow_id),
append=False, sync=True)
for packet in self.packets:
self.pktdump.write(PcapPacket.getIp(packet))
self.pktdump.close()
self.__dumped = True
return True
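`PcapPacket.handleJSON()` above requires `pkt` to be strictly valid base64 (`validate=True` rejects malformed input instead of silently skipping bad characters). A tiny self-contained sketch of that decode step, with illustrative data:

```python
import base64

# Pretend 14-byte Ethernet header followed by a 2-byte payload.
raw = bytes(range(16))
json_dict = {
    'pkt': base64.b64encode(raw).decode('ascii'),
    'pkt_type': 0x0800,   # PKT_TYPE_ETH_IP4
    'pkt_l3_offset': 14,  # where the IP layer starts
}

# validate=True raises binascii.Error on malformed base64.
decoded = base64.b64decode(json_dict['pkt'], validate=True)
l3_payload = decoded[json_dict['pkt_l3_offset']:]
```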
def defaultArgumentParser():
parser = argparse.ArgumentParser(description='nDPIsrvd options', formatter_class=argparse.ArgumentDefaultsHelpFormatter)
def defaultArgumentParser(desc='nDPIsrvd Python Interface',
help_formatter=argparse.ArgumentDefaultsHelpFormatter):
parser = argparse.ArgumentParser(description=desc, formatter_class=help_formatter)
parser.add_argument('--host', type=str, help='nDPIsrvd host IP')
parser.add_argument('--port', type=int, default=DEFAULT_PORT, help='nDPIsrvd TCP port')
parser.add_argument('--unix', type=str, help='nDPIsrvd unix socket path')
return parser
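`defaultArgumentParser()` is plain `argparse`; a minimal re-creation showing how its defaults resolve (a sketch, using the defaults listed above):

```python
import argparse

parser = argparse.ArgumentParser(description='nDPIsrvd Python Interface',
                                 formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument('--host', type=str, help='nDPIsrvd host IP')
parser.add_argument('--port', type=int, default=7000, help='nDPIsrvd TCP port')
parser.add_argument('--unix', type=str, help='nDPIsrvd unix socket path')

# Unset options stay None; --port falls back to DEFAULT_PORT (7000).
args = parser.parse_args(['--host', '10.0.0.1'])
```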
def toSeconds(usec):
return usec / (1000 * 1000)
def validateAddress(args):
tcp_addr_set = False
address = None
if args.host is None:
address_tcpip = (DEFAULT_HOST, DEFAULT_PORT)
address_tcpip = (DEFAULT_HOST, args.port)
else:
address_tcpip = (args.host, args.port)
tcp_addr_set = True
@@ -390,27 +516,50 @@ def validateAddress(args):
return address
global schema
schema = {'packet_event_schema' : None, 'basic_event_schema' : None, 'daemon_event_schema' : None, 'flow_event_schema' : None}
schema = {'packet_event_schema' : None, 'error_event_schema' : None, 'daemon_event_schema' : None, 'flow_event_schema' : None}
def initSchemaValidator(schema_dirs=[]):
if len(schema_dirs) == 0:
schema_dirs += [os.path.dirname(sys.argv[0]) + '/../../schema']
schema_dirs += [os.path.dirname(sys.argv[0]) + '/../share/nDPId']
schema_dirs += [sys.base_prefix + '/share/nDPId']
def initSchemaValidator(schema_dir='./schema'):
for key in schema:
with open(schema_dir + '/' + str(key) + '.json', 'r') as schema_file:
schema[key] = json.load(schema_file)
for schema_dir in schema_dirs:
try:
with open(schema_dir + '/' + str(key) + '.json', 'r') as schema_file:
schema[key] = json.load(schema_file)
except FileNotFoundError:
continue
else:
break
def validateAgainstSchema(json_dict):
import jsonschema
if 'packet_event_id' in json_dict:
jsonschema.validate(instance=json_dict, schema=schema['packet_event_schema'])
try:
jsonschema.Draft7Validator(schema=schema['packet_event_schema']).validate(instance=json_dict)
except AttributeError:
jsonschema.validate(instance=json_dict, schema=schema['packet_event_schema'])
return True
if 'basic_event_id' in json_dict:
jsonschema.validate(instance=json_dict, schema=schema['basic_event_schema'])
if 'error_event_id' in json_dict:
try:
jsonschema.Draft7Validator(schema=schema['error_event_schema']).validate(instance=json_dict)
except AttributeError:
jsonschema.validate(instance=json_dict, schema=schema['error_event_schema'])
return True
if 'daemon_event_id' in json_dict:
jsonschema.validate(instance=json_dict, schema=schema['daemon_event_schema'])
try:
jsonschema.Draft7Validator(schema=schema['daemon_event_schema']).validate(instance=json_dict)
except AttributeError:
jsonschema.validate(instance=json_dict, schema=schema['daemon_event_schema'])
return True
if 'flow_event_id' in json_dict:
jsonschema.validate(instance=json_dict, schema=schema['flow_event_schema'])
try:
jsonschema.Draft7Validator(schema=schema['flow_event_schema']).validate(instance=json_dict)
except AttributeError:
jsonschema.validate(instance=json_dict, schema=schema['flow_event_schema'])
return True
return False


@@ -7,7 +7,7 @@ matrix:
compiler: clang
- os: osx
script:
- make -C tests EXTRA_CFLAGS="-W -Wall -Wextra"
- make -C tests EXTRA_CFLAGS="-W -Wall -Wextra -Wswitch-default"
- make -C tests clean ; make -C tests pedantic
- make -C tests clean ; make -C tests pedantic EXTRA_CFLAGS=-DNO_DECLTYPE
- make -C tests clean ; make -C tests cplusplus


@@ -1,4 +1,4 @@
Copyright (c) 2005-2018, Troy D. Hanson http://troydhanson.github.com/uthash/
Copyright (c) 2005-2021, Troy D. Hanson http://troydhanson.github.io/uthash/
All rights reserved.
Redistribution and use in source and binary forms, with or without


@@ -1,8 +1,8 @@
[![Build status](https://travis-ci.org/Quuxplusone/uthash.svg?branch=travis-ci)](https://travis-ci.org/troydhanson/uthash)
[![Build status](https://api.travis-ci.org/troydhanson/uthash.svg?branch=master)](https://travis-ci.org/troydhanson/uthash)
Documentation for uthash is available at:
http://troydhanson.github.com/uthash/
https://troydhanson.github.io/uthash/


@@ -5,6 +5,21 @@ Click to return to the link:index.html[uthash home page].
NOTE: This ChangeLog may be incomplete and/or incorrect. See the git commit log.
Version 2.3.0 (2021-02-25)
--------------------------
* remove HASH_FCN; the HASH_FUNCTION and HASH_KEYCMP macros now behave similarly
* remove uthash_memcmp (deprecated in v2.1.0) in favor of HASH_KEYCMP
* silence -Wswitch-default warnings (thanks, Olaf Bergmann!)
Version 2.2.0 (2020-12-17)
--------------------------
* add HASH_NO_STDINT for platforms without C99 <stdint.h>
* silence many -Wcast-qual warnings (thanks, Olaf Bergmann!)
* skip hash computation when finding in an empty hash (thanks, Huansong Fu!)
* rename oom to utarray_oom, in utarray.h (thanks, Hong Xu!)
* rename oom to utstring_oom, in utstring.h (thanks, Hong Xu!)
* remove MurmurHash/HASH_MUR
Version 2.1.0 (2018-12-20)
--------------------------
* silence some Clang static analysis warnings
@@ -56,7 +71,7 @@ Version 1.9.8 (2013-03-10)
* `HASH_REPLACE` now in uthash (thanks, Nick Vatamaniuc!)
* fixed clang warnings (thanks wynnw!)
* fixed `utarray_insert` when inserting past array end (thanks Rob Willett!)
* you can now find http://troydhanson.github.com/uthash/[uthash on GitHub]
* you can now find http://troydhanson.github.io/uthash/[uthash on GitHub]
* there's a https://groups.google.com/d/forum/uthash[uthash Google Group]
* uthash has been downloaded 29,000+ times since 2006 on SourceForge


@@ -14,7 +14,7 @@
<div id="topnav">
<a href="http://github.com/troydhanson/uthash">GitHub page</a> &gt;
uthash home <!-- http://troydhanson.github.com/uthash/ -->
uthash home <!-- http://troydhanson.github.io/uthash/ -->
<a href="https://twitter.com/share" class="twitter-share-button" data-via="troydhanson">Tweet</a>
<script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0];if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src="//platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs");</script>
@@ -72,7 +72,7 @@ struct my_struct {
struct my_struct *users = NULL;
void add_user(struct my_struct *s) {
HASH_ADD_INT( users, id, s );
HASH_ADD_INT(users, id, s);
}
</pre>
@@ -86,7 +86,7 @@ Example 2. Looking up an item in a hash.
struct my_struct *find_user(int user_id) {
struct my_struct *s;
HASH_FIND_INT( users, &amp;user_id, s );
HASH_FIND_INT(users, &amp;user_id, s);
return s;
}
@@ -100,7 +100,7 @@ Example 3. Deleting an item from a hash.
<pre>
void delete_user(struct my_struct *user) {
HASH_DEL( users, user);
HASH_DEL(users, user);
}
</pre>


@@ -13,7 +13,7 @@
</div> <!-- banner -->
<div id="topnav">
<a href="http://troydhanson.github.com/uthash/">uthash home</a> &gt;
<a href="http://troydhanson.github.io/uthash/">uthash home</a> &gt;
BSD license
</div>
@@ -21,7 +21,7 @@
<div id="mid">
<div id="main">
<pre>
Copyright (c) 2005-2018, Troy D. Hanson http://troydhanson.github.com/uthash/
Copyright (c) 2005-2021, Troy D. Hanson http://troydhanson.github.io/uthash/
All rights reserved.
Redistribution and use in source and binary forms, with or without


@@ -1,7 +1,7 @@
uthash User Guide
=================
Troy D. Hanson, Arthur O'Dwyer
v2.1.0, December 2018
v2.3.0, February 2021
To download uthash, follow this link back to the
https://github.com/troydhanson/uthash[GitHub project page].
@@ -215,10 +215,10 @@ a unique value. Then call `HASH_ADD`. (Here we use the convenience macro
void add_user(int user_id, char *name) {
struct my_struct *s;
s = malloc(sizeof(struct my_struct));
s = malloc(sizeof *s);
s->id = user_id;
strcpy(s->name, name);
HASH_ADD_INT( users, id, s ); /* id: name of key field */
HASH_ADD_INT(users, id, s); /* id: name of key field */
}
----------------------------------------------------------------------
@@ -227,7 +227,7 @@ second parameter is the 'name' of the key field. Here, this is `id`. The
last parameter is a pointer to the structure being added.
[[validc]]
.Wait.. the field name is a parameter?
.Wait.. the parameter is a field name?
*******************************************************************************
If you find it strange that `id`, which is the 'name of a field' in the
structure, can be passed as a parameter... welcome to the world of macros. Don't
@@ -256,10 +256,10 @@ Otherwise we just modify the structure that already exists.
struct my_struct *s;
HASH_FIND_INT(users, &user_id, s); /* id already in the hash? */
if (s==NULL) {
if (s == NULL) {
s = (struct my_struct *)malloc(sizeof *s);
s->id = user_id;
HASH_ADD_INT( users, id, s ); /* id: name of key field */
HASH_ADD_INT(users, id, s); /* id: name of key field */
}
strcpy(s->name, name);
}
@@ -284,7 +284,7 @@ right.
/* bad */
void add_user(struct my_struct *users, int user_id, char *name) {
...
HASH_ADD_INT( users, id, s );
HASH_ADD_INT(users, id, s);
}
You really need to pass 'a pointer' to the hash pointer:
@@ -292,7 +292,7 @@ You really need to pass 'a pointer' to the hash pointer:
/* good */
void add_user(struct my_struct **users, int user_id, char *name) { ...
...
HASH_ADD_INT( *users, id, s );
HASH_ADD_INT(*users, id, s);
}
Note that we dereferenced the pointer in the `HASH_ADD` also.
@@ -319,7 +319,7 @@ To look up a structure in a hash, you need its key. Then call `HASH_FIND`.
struct my_struct *find_user(int user_id) {
struct my_struct *s;
HASH_FIND_INT( users, &user_id, s ); /* s: output pointer */
HASH_FIND_INT(users, &user_id, s); /* s: output pointer */
return s;
}
----------------------------------------------------------------------
@@ -376,8 +376,8 @@ void delete_all() {
struct my_struct *current_user, *tmp;
HASH_ITER(hh, users, current_user, tmp) {
HASH_DEL(users,current_user); /* delete; users advances to next */
free(current_user); /* optional- if you want to free */
HASH_DEL(users, current_user); /* delete; users advances to next */
free(current_user); /* optional- if you want to free */
}
}
----------------------------------------------------------------------
@@ -387,7 +387,7 @@ All-at-once deletion
If you only want to delete all the items, but not free them or do any
per-element clean up, you can do this more efficiently in a single operation:
HASH_CLEAR(hh,users);
HASH_CLEAR(hh, users);
Afterward, the list head (here, `users`) will be set to `NULL`.
@@ -403,7 +403,7 @@ num_users = HASH_COUNT(users);
printf("there are %u users\n", num_users);
----------------------------------------------------------------------
Incidentally, this works even the list (`users`, here) is `NULL`, in
Incidentally, this works even if the list head (here, `users`) is `NULL`, in
which case the count is 0.
Iterating and sorting
@@ -417,7 +417,7 @@ following the `hh.next` pointer.
void print_users() {
struct my_struct *s;
for(s=users; s != NULL; s=s->hh.next) {
for (s = users; s != NULL; s = s->hh.next) {
printf("user id %d: name %s\n", s->id, s->name);
}
}
@@ -430,7 +430,7 @@ the hash, starting from any known item.
Deletion-safe iteration
^^^^^^^^^^^^^^^^^^^^^^^
In the example above, it would not be safe to delete and free `s` in the body
of the 'for' loop, (because `s` is derefenced each time the loop iterates).
of the 'for' loop, (because `s` is dereferenced each time the loop iterates).
This is easy to rewrite correctly (by copying the `s->hh.next` pointer to a
temporary variable 'before' freeing `s`), but it comes up often enough that a
deletion-safe iteration macro, `HASH_ITER`, is included. It expands to a
@@ -452,14 +452,14 @@ doubly-linked list.
*******************************************************************************
If you're using uthash in a C++ program, you need an extra cast on the `for`
iterator, e.g., `s=(struct my_struct*)s->hh.next`.
iterator, e.g., `s = static_cast<my_struct*>(s->hh.next)`.
Sorting
^^^^^^^
The items in the hash are visited in "insertion order" when you follow the
`hh.next` pointer. You can sort the items into a new order using `HASH_SORT`.
HASH_SORT( users, name_sort );
HASH_SORT(users, name_sort);
The second argument is a pointer to a comparison function. It must accept two
pointer arguments (the items to compare), and must return an `int` which is
@@ -479,20 +479,20 @@ Below, `name_sort` and `id_sort` are two examples of sort functions.
.Sorting the items in the hash
----------------------------------------------------------------------
int name_sort(struct my_struct *a, struct my_struct *b) {
return strcmp(a->name,b->name);
int by_name(const struct my_struct *a, const struct my_struct *b) {
return strcmp(a->name, b->name);
}
int id_sort(struct my_struct *a, struct my_struct *b) {
int by_id(const struct my_struct *a, const struct my_struct *b) {
return (a->id - b->id);
}
void sort_by_name() {
HASH_SORT(users, name_sort);
HASH_SORT(users, by_name);
}
void sort_by_id() {
HASH_SORT(users, id_sort);
HASH_SORT(users, by_id);
}
----------------------------------------------------------------------
@@ -516,85 +516,100 @@ Follow the prompts to try the program.
.A complete program
----------------------------------------------------------------------
#include <stdio.h> /* gets */
#include <stdio.h> /* printf */
#include <stdlib.h> /* atoi, malloc */
#include <string.h> /* strcpy */
#include "uthash.h"
struct my_struct {
int id; /* key */
char name[10];
char name[21];
UT_hash_handle hh; /* makes this structure hashable */
};
struct my_struct *users = NULL;
void add_user(int user_id, char *name) {
void add_user(int user_id, const char *name)
{
struct my_struct *s;
HASH_FIND_INT(users, &user_id, s); /* id already in the hash? */
if (s==NULL) {
s = (struct my_struct *)malloc(sizeof *s);
s->id = user_id;
HASH_ADD_INT( users, id, s ); /* id: name of key field */
if (s == NULL) {
s = (struct my_struct*)malloc(sizeof *s);
s->id = user_id;
HASH_ADD_INT(users, id, s); /* id is the key field */
}
strcpy(s->name, name);
}
struct my_struct *find_user(int user_id) {
struct my_struct *find_user(int user_id)
{
struct my_struct *s;
HASH_FIND_INT( users, &user_id, s ); /* s: output pointer */
HASH_FIND_INT(users, &user_id, s); /* s: output pointer */
return s;
}
void delete_user(struct my_struct *user) {
void delete_user(struct my_struct *user)
{
HASH_DEL(users, user); /* user: pointer to deletee */
free(user);
}
void delete_all() {
struct my_struct *current_user, *tmp;
void delete_all()
{
struct my_struct *current_user;
struct my_struct *tmp;
HASH_ITER(hh, users, current_user, tmp) {
HASH_DEL(users, current_user); /* delete it (users advances to next) */
free(current_user); /* free it */
}
HASH_ITER(hh, users, current_user, tmp) {
HASH_DEL(users, current_user); /* delete it (users advances to next) */
free(current_user); /* free it */
}
}
void print_users() {
void print_users()
{
struct my_struct *s;
for(s=users; s != NULL; s=(struct my_struct*)(s->hh.next)) {
for (s = users; s != NULL; s = (struct my_struct*)(s->hh.next)) {
printf("user id %d: name %s\n", s->id, s->name);
}
}
int name_sort(struct my_struct *a, struct my_struct *b) {
return strcmp(a->name,b->name);
int by_name(const struct my_struct *a, const struct my_struct *b)
{
return strcmp(a->name, b->name);
}
int id_sort(struct my_struct *a, struct my_struct *b) {
int by_id(const struct my_struct *a, const struct my_struct *b)
{
return (a->id - b->id);
}
void sort_by_name() {
HASH_SORT(users, name_sort);
const char *getl(const char *prompt)
{
static char buf[21];
char *p;
printf("%s? ", prompt); fflush(stdout);
p = fgets(buf, sizeof(buf), stdin);
if (p == NULL || (p = strchr(buf, '\n')) == NULL) {
puts("Invalid input!");
exit(EXIT_FAILURE);
}
*p = '\0';
return buf;
}
void sort_by_id() {
HASH_SORT(users, id_sort);
}
int main(int argc, char *argv[]) {
char in[10];
int id=1, running=1;
int main()
{
int id = 1;
int running = 1;
struct my_struct *s;
unsigned num_users;
int temp;
while (running) {
printf(" 1. add user\n");
printf(" 2. add/rename user by id\n");
printf(" 2. add or rename user by id\n");
printf(" 3. find user\n");
printf(" 4. delete user\n");
printf(" 5. delete all users\n");
@@ -603,47 +618,44 @@ int main(int argc, char *argv[]) {
printf(" 8. print users\n");
printf(" 9. count users\n");
printf("10. quit\n");
gets(in);
switch(atoi(in)) {
switch (atoi(getl("Command"))) {
case 1:
printf("name?\n");
add_user(id++, gets(in));
add_user(id++, getl("Name (20 char max)"));
break;
case 2:
printf("id?\n");
gets(in); id = atoi(in);
printf("name?\n");
add_user(id, gets(in));
temp = atoi(getl("ID"));
add_user(temp, getl("Name (20 char max)"));
break;
case 3:
printf("id?\n");
s = find_user(atoi(gets(in)));
s = find_user(atoi(getl("ID to find")));
printf("user: %s\n", s ? s->name : "unknown");
break;
case 4:
printf("id?\n");
s = find_user(atoi(gets(in)));
if (s) delete_user(s);
else printf("id unknown\n");
s = find_user(atoi(getl("ID to delete")));
if (s) {
delete_user(s);
} else {
printf("id unknown\n");
}
break;
case 5:
delete_all();
break;
case 6:
sort_by_name();
HASH_SORT(users, by_name);
break;
case 7:
sort_by_id();
HASH_SORT(users, by_id);
break;
case 8:
print_users();
break;
case 9:
num_users=HASH_COUNT(users);
printf("there are %u users\n", num_users);
temp = HASH_COUNT(users);
printf("there are %d users\n", temp);
break;
case 10:
running=0;
running = 0;
break;
}
}
@@ -720,10 +732,10 @@ int main(int argc, char *argv[]) {
s = (struct my_struct *)malloc(sizeof *s);
strcpy(s->name, names[i]);
s->id = i;
HASH_ADD_STR( users, name, s );
HASH_ADD_STR(users, name, s);
}
HASH_FIND_STR( users, "betty", s);
HASH_FIND_STR(users, "betty", s);
if (s) printf("betty's id is %d\n", s->id);
/* free the hash table contents */
@@ -766,10 +778,10 @@ int main(int argc, char *argv[]) {
s = (struct my_struct *)malloc(sizeof *s);
s->name = names[i];
s->id = i;
HASH_ADD_KEYPTR( hh, users, s->name, strlen(s->name), s );
HASH_ADD_KEYPTR(hh, users, s->name, strlen(s->name), s);
}
HASH_FIND_STR( users, "betty", s);
HASH_FIND_STR(users, "betty", s);
if (s) printf("betty's id is %d\n", s->id);
/* free the hash table contents */
@@ -812,12 +824,12 @@ int main() {
if (!e) return -1;
e->key = (void*)someaddr;
e->i = 1;
HASH_ADD_PTR(hash,key,e);
HASH_ADD_PTR(hash, key, e);
HASH_FIND_PTR(hash, &someaddr, d);
if (d) printf("found\n");
/* release memory */
HASH_DEL(hash,e);
HASH_DEL(hash, e);
free(e);
return 0;
}
@@ -924,7 +936,7 @@ int main(int argc, char *argv[]) {
int beijing[] = {0x5317, 0x4eac}; /* UTF-32LE for 北京 */
/* allocate and initialize our structure */
msg = (msg_t *)malloc( sizeof(msg_t) + sizeof(beijing) );
msg = (msg_t *)malloc(sizeof(msg_t) + sizeof(beijing));
memset(msg, 0, sizeof(msg_t)+sizeof(beijing)); /* zero fill */
msg->len = sizeof(beijing);
msg->encoding = UTF32;
@@ -936,16 +948,16 @@ int main(int argc, char *argv[]) {
- offsetof(msg_t, encoding); /* offset of first key field */
/* add our structure to the hash table */
HASH_ADD( hh, msgs, encoding, keylen, msg);
HASH_ADD(hh, msgs, encoding, keylen, msg);
/* look it up to prove that it worked :-) */
msg=NULL;
msg = NULL;
lookup_key = (lookup_key_t *)malloc(sizeof(*lookup_key) + sizeof(beijing));
memset(lookup_key, 0, sizeof(*lookup_key) + sizeof(beijing));
lookup_key->encoding = UTF32;
memcpy(lookup_key->text, beijing, sizeof(beijing));
HASH_FIND( hh, msgs, &lookup_key->encoding, keylen, msg );
HASH_FIND(hh, msgs, &lookup_key->encoding, keylen, msg);
if (msg) printf("found \n");
free(lookup_key);
@@ -1028,7 +1040,7 @@ typedef struct item {
UT_hash_handle hh;
} item_t;
item_t *items=NULL;
item_t *items = NULL;
int main(int argc, char *argvp[]) {
item_t *item1, *item2, *tmp1, *tmp2;
@@ -1119,7 +1131,7 @@ always used with the `users_by_name` hash table).
int i;
char *name;
s = malloc(sizeof(struct my_struct));
s = malloc(sizeof *s);
s->id = 1;
strcpy(s->username, "thanson");
@@ -1128,7 +1140,7 @@ always used with the `users_by_name` hash table).
HASH_ADD(hh2, users_by_name, username, strlen(s->username), s);
/* find user by ID in the "users_by_id" hash table */
i=1;
i = 1;
HASH_FIND(hh1, users_by_id, &i, sizeof(int), s);
if (s) printf("found id %d: %s\n", i, s->username);
@@ -1155,7 +1167,7 @@ The `HASH_ADD_INORDER*` macros work just like their `HASH_ADD*` counterparts, bu
with an additional comparison-function argument:
int name_sort(struct my_struct *a, struct my_struct *b) {
return strcmp(a->name,b->name);
return strcmp(a->name, b->name);
}
HASH_ADD_KEYPTR_INORDER(hh, items, &item->name, strlen(item->name), item, name_sort);
@@ -1183,7 +1195,7 @@ Now we can define two sort functions, then use `HASH_SRT`.
}
int sort_by_name(struct my_struct *a, struct my_struct *b) {
return strcmp(a->username,b->username);
return strcmp(a->username, b->username);
}
HASH_SRT(hh1, users_by_id, sort_by_id);
@@ -1240,7 +1252,8 @@ for a structure to be usable with `HASH_SELECT`, it must have two or more hash
handles. (As described <<multihash,here>>, a structure can exist in many
hash tables at the same time; it must have a separate hash handle for each one).
user_t *users=NULL, *admins=NULL; /* two hash tables */
user_t *users = NULL; /* hash table of users */
user_t *admins = NULL; /* hash table of admins */
typedef struct {
int id;
@@ -1252,25 +1265,26 @@ Now suppose we have added some users, and want to select just the administrator
users who have id's less than 1024.
#define is_admin(x) (((user_t*)x)->id < 1024)
HASH_SELECT(ah,admins,hh,users,is_admin);
HASH_SELECT(ah, admins, hh, users, is_admin);
The first two parameters are the 'destination' hash handle and hash table, the
second two parameters are the 'source' hash handle and hash table, and the last
parameter is the 'select condition'. Here we used a macro `is_admin()` but we
parameter is the 'select condition'. Here we used a macro `is_admin(x)` but we
could just as well have used a function.
int is_admin(void *userv) {
user_t *user = (user_t*)userv;
int is_admin(const void *userv) {
user_t *user = (const user_t*)userv;
return (user->id < 1024) ? 1 : 0;
}
If the select condition always evaluates to true, this operation is
essentially a 'merge' of the source hash into the destination hash. Of course,
the source hash remains unchanged under any use of `HASH_SELECT`. It only adds
items to the destination hash selectively.
essentially a 'merge' of the source hash into the destination hash.
The two hash handles must differ. An example of using `HASH_SELECT` is included
in `tests/test36.c`.
`HASH_SELECT` adds items to the destination without removing them from
the source; the source hash table remains unchanged. The destination hash table
must not be the same as the source hash table.
An example of using `HASH_SELECT` is included in `tests/test36.c`.
[[hash_keycompare]]
Specifying an alternate key comparison function
@@ -1290,7 +1304,7 @@ that do not provide `memcmp`, you can substitute your own implementation.
----------------------------------------------------------------------------
#undef HASH_KEYCMP
#define HASH_KEYCMP(a,b,len) bcmp(a,b,len)
#define HASH_KEYCMP(a,b,len) bcmp(a, b, len)
----------------------------------------------------------------------------
Another reason to substitute your own key comparison function is if your "key" is not
@@ -1631,7 +1645,7 @@ If your application uses its own custom allocator, uthash can use them too.
/* re-define, specifying alternate functions */
#define uthash_malloc(sz) my_malloc(sz)
#define uthash_free(ptr,sz) my_free(ptr)
#define uthash_free(ptr, sz) my_free(ptr)
...
----------------------------------------------------------------------------
@@ -1647,7 +1661,7 @@ provide these functions, you can substitute your own implementations.
----------------------------------------------------------------------------
#undef uthash_bzero
#define uthash_bzero(a,len) my_bzero(a,len)
#define uthash_bzero(a, len) my_bzero(a, len)
#undef uthash_strlen
#define uthash_strlen(s) my_strlen(s)
@@ -1754,7 +1768,7 @@ concurrent readers (since uthash 1.5).
For example using pthreads you can create an rwlock like this:
pthread_rwlock_t lock;
if (pthread_rwlock_init(&lock,NULL) != 0) fatal("can't create rwlock");
if (pthread_rwlock_init(&lock, NULL) != 0) fatal("can't create rwlock");
Then, readers must acquire the read lock before doing any `HASH_FIND` calls or
before iterating over the hash elements:
@@ -1795,10 +1809,10 @@ In order to use the convenience macros,
|===============================================================================
|macro | arguments
|HASH_ADD_INT | (head, keyfield_name, item_ptr)
|HASH_REPLACE_INT | (head, keyfiled_name, item_ptr,replaced_item_ptr)
|HASH_REPLACE_INT | (head, keyfield_name, item_ptr, replaced_item_ptr)
|HASH_FIND_INT | (head, key_ptr, item_ptr)
|HASH_ADD_STR | (head, keyfield_name, item_ptr)
|HASH_REPLACE_STR | (head,keyfield_name, item_ptr, replaced_item_ptr)
|HASH_REPLACE_STR | (head, keyfield_name, item_ptr, replaced_item_ptr)
|HASH_FIND_STR | (head, key_ptr, item_ptr)
|HASH_ADD_PTR | (head, keyfield_name, item_ptr)
|HASH_REPLACE_PTR | (head, keyfield_name, item_ptr, replaced_item_ptr)


@@ -1,7 +1,7 @@
utarray: dynamic array macros for C
===================================
Troy D. Hanson <tdh@tkhanson.net>
v2.1.0, December 2018
v2.3.0, February 2021
Here's a link back to the https://github.com/troydhanson/uthash[GitHub project page].


@@ -1,7 +1,7 @@
utlist: linked list macros for C structures
===========================================
Troy D. Hanson <tdh@tkhanson.net>
v2.1.0, December 2018
v2.3.0, February 2021
Here's a link back to the https://github.com/troydhanson/uthash[GitHub project page].


@@ -1,7 +1,7 @@
utringbuffer: dynamic ring-buffer macros for C
==============================================
Arthur O'Dwyer <arthur.j.odwyer@gmail.com>
v2.1.0, December 2018
v2.3.0, February 2021
Here's a link back to the https://github.com/troydhanson/uthash[GitHub project page].


@@ -1,7 +1,7 @@
utstack: intrusive stack macros for C
=====================================
Arthur O'Dwyer <arthur.j.odwyer@gmail.com>
v2.1.0, December 2018
v2.3.0, February 2021
Here's a link back to the https://github.com/troydhanson/uthash[GitHub project page].


@@ -1,7 +1,7 @@
utstring: dynamic string macros for C
=====================================
Troy D. Hanson <tdh@tkhanson.net>
v2.1.0, December 2018
v2.3.0, February 2021
Here's a link back to the https://github.com/troydhanson/uthash[GitHub project page].


@@ -24,5 +24,5 @@
"src/utstring.h"
],
"version": "2.1.0"
"version": "2.3.0"
}


@@ -1,5 +1,5 @@
/*
Copyright (c) 2008-2018, Troy D. Hanson http://troydhanson.github.com/uthash/
Copyright (c) 2008-2021, Troy D. Hanson http://troydhanson.github.io/uthash/
All rights reserved.
Redistribution and use in source and binary forms, with or without
@@ -26,7 +26,7 @@ SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#ifndef UTARRAY_H
#define UTARRAY_H
#define UTARRAY_VERSION 2.1.0
#define UTARRAY_VERSION 2.3.0
#include <stddef.h> /* size_t */
#include <string.h> /* memset, etc */
@@ -232,8 +232,9 @@ typedef struct {
/* last we pre-define a few icd for common utarrays of ints and strings */
static void utarray_str_cpy(void *dst, const void *src) {
char **_src = (char**)src, **_dst = (char**)dst;
*_dst = (*_src == NULL) ? NULL : strdup(*_src);
char *const *srcc = (char *const *)src;
char **dstc = (char**)dst;
*dstc = (*srcc == NULL) ? NULL : strdup(*srcc);
}
static void utarray_str_dtor(void *elt) {
char **eltc = (char**)elt;


@@ -1,5 +1,5 @@
/*
Copyright (c) 2003-2018, Troy D. Hanson http://troydhanson.github.com/uthash/
Copyright (c) 2003-2021, Troy D. Hanson http://troydhanson.github.io/uthash/
All rights reserved.
Redistribution and use in source and binary forms, with or without
@@ -24,12 +24,22 @@ SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#ifndef UTHASH_H
#define UTHASH_H
#define UTHASH_VERSION 2.1.0
#define UTHASH_VERSION 2.3.0
#include <string.h> /* memcmp, memset, strlen */
#include <stddef.h> /* ptrdiff_t */
#include <stdlib.h> /* exit */
#if defined(HASH_DEFINE_OWN_STDINT) && HASH_DEFINE_OWN_STDINT
/* This codepath is provided for backward compatibility, but I plan to remove it. */
#warning "HASH_DEFINE_OWN_STDINT is deprecated; please use HASH_NO_STDINT instead"
typedef unsigned int uint32_t;
typedef unsigned char uint8_t;
#elif defined(HASH_NO_STDINT) && HASH_NO_STDINT
#else
#include <stdint.h> /* uint8_t, uint32_t */
#endif
/* These macros use decltype or the earlier __typeof GNU extension.
As decltype is only available in newer compilers (VS2010 or gcc 4.3+
when compiling c++ source) this code uses whatever method is needed
@@ -62,23 +72,6 @@ do {
} while (0)
#endif
/* a number of the hash function use uint32_t which isn't defined on Pre VS2010 */
#if defined(_WIN32)
#if defined(_MSC_VER) && _MSC_VER >= 1600
#include <stdint.h>
#elif defined(__WATCOMC__) || defined(__MINGW32__) || defined(__CYGWIN__)
#include <stdint.h>
#else
typedef unsigned int uint32_t;
typedef unsigned char uint8_t;
#endif
#elif defined(__GNUC__) && !defined(__VXWORKS__)
#include <stdint.h>
#else
typedef unsigned int uint32_t;
typedef unsigned char uint8_t;
#endif
#ifndef uthash_malloc
#define uthash_malloc(sz) malloc(sz) /* malloc fcn */
#endif
@@ -92,15 +85,12 @@ typedef unsigned char uint8_t;
#define uthash_strlen(s) strlen(s)
#endif
#ifdef uthash_memcmp
/* This warning will not catch programs that define uthash_memcmp AFTER including uthash.h. */
#warning "uthash_memcmp is deprecated; please use HASH_KEYCMP instead"
#else
#define uthash_memcmp(a,b,n) memcmp(a,b,n)
#ifndef HASH_FUNCTION
#define HASH_FUNCTION(keyptr,keylen,hashv) HASH_JEN(keyptr, keylen, hashv)
#endif
#ifndef HASH_KEYCMP
#define HASH_KEYCMP(a,b,n) uthash_memcmp(a,b,n)
#define HASH_KEYCMP(a,b,n) memcmp(a,b,n)
#endif
#ifndef uthash_noexpand_fyi
@@ -158,7 +148,7 @@ do {
#define HASH_VALUE(keyptr,keylen,hashv) \
do { \
HASH_FCN(keyptr, keylen, hashv); \
HASH_FUNCTION(keyptr, keylen, hashv); \
} while (0)
#define HASH_FIND_BYHASHVALUE(hh,head,keyptr,keylen,hashval,out) \
@@ -408,7 +398,7 @@ do {
do { \
IF_HASH_NONFATAL_OOM( int _ha_oomed = 0; ) \
(add)->hh.hashv = (hashval); \
(add)->hh.key = (char*) (keyptr); \
(add)->hh.key = (const void*) (keyptr); \
(add)->hh.keylen = (unsigned) (keylen_in); \
if (!(head)) { \
(add)->hh.next = NULL; \
@@ -590,13 +580,6 @@ do {
#define HASH_EMIT_KEY(hh,head,keyptr,fieldlen)
#endif
/* default to Jenkin's hash unless overridden e.g. DHASH_FUNCTION=HASH_SAX */
#ifdef HASH_FUNCTION
#define HASH_FCN HASH_FUNCTION
#else
#define HASH_FCN HASH_JEN
#endif
/* The Bernstein hash function, used in Perl prior to v5.6. Note (x<<5+x)=x*33. */
#define HASH_BER(key,keylen,hashv) \
do { \
@@ -695,7 +678,8 @@ do {
case 4: _hj_i += ( (unsigned)_hj_key[3] << 24 ); /* FALLTHROUGH */ \
case 3: _hj_i += ( (unsigned)_hj_key[2] << 16 ); /* FALLTHROUGH */ \
case 2: _hj_i += ( (unsigned)_hj_key[1] << 8 ); /* FALLTHROUGH */ \
case 1: _hj_i += _hj_key[0]; \
case 1: _hj_i += _hj_key[0]; /* FALLTHROUGH */ \
default: ; \
} \
HASH_JEN_MIX(_hj_i, _hj_j, hashv); \
} while (0)
@@ -743,6 +727,8 @@ do {
case 1: hashv += *_sfh_key; \
hashv ^= hashv << 10; \
hashv += hashv >> 1; \
break; \
default: ; \
} \
\
/* Force "avalanching" of final 127 bits */ \
@@ -764,7 +750,7 @@ do {
} \
while ((out) != NULL) { \
if ((out)->hh.hashv == (hashval) && (out)->hh.keylen == (keylen_in)) { \
if (HASH_KEYCMP((out)->hh.key, keyptr, keylen_in) == 0) { \
if (HASH_KEYCMP((out)->hh.key, keyptr, keylen_in) == 0) { \
break; \
} \
} \
@@ -850,12 +836,12 @@ do {
struct UT_hash_handle *_he_thh, *_he_hh_nxt; \
UT_hash_bucket *_he_new_buckets, *_he_newbkt; \
_he_new_buckets = (UT_hash_bucket*)uthash_malloc( \
2UL * (tbl)->num_buckets * sizeof(struct UT_hash_bucket)); \
sizeof(struct UT_hash_bucket) * (tbl)->num_buckets * 2U); \
if (!_he_new_buckets) { \
HASH_RECORD_OOM(oomed); \
} else { \
uthash_bzero(_he_new_buckets, \
2UL * (tbl)->num_buckets * sizeof(struct UT_hash_bucket)); \
sizeof(struct UT_hash_bucket) * (tbl)->num_buckets * 2U); \
(tbl)->ideal_chain_maxlen = \
((tbl)->num_items >> ((tbl)->log2_num_buckets+1U)) + \
((((tbl)->num_items & (((tbl)->num_buckets*2U)-1U)) != 0U) ? 1U : 0U); \
@@ -1142,7 +1128,7 @@ typedef struct UT_hash_handle {
void *next; /* next element in app order */
struct UT_hash_handle *hh_prev; /* previous hh in bucket order */
struct UT_hash_handle *hh_next; /* next hh in bucket order */
void *key; /* ptr to enclosing struct's key */
const void *key; /* ptr to enclosing struct's key */
unsigned keylen; /* enclosing struct's key len */
unsigned hashv; /* result of hash-fcn(key) */
} UT_hash_handle;


@@ -1,5 +1,5 @@
/*
Copyright (c) 2007-2018, Troy D. Hanson http://troydhanson.github.com/uthash/
Copyright (c) 2007-2021, Troy D. Hanson http://troydhanson.github.io/uthash/
All rights reserved.
Redistribution and use in source and binary forms, with or without
@@ -24,7 +24,7 @@ SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#ifndef UTLIST_H
#define UTLIST_H
#define UTLIST_VERSION 2.1.0
#define UTLIST_VERSION 2.3.0
#include <assert.h>


@@ -1,5 +1,5 @@
/*
Copyright (c) 2015-2018, Troy D. Hanson http://troydhanson.github.com/uthash/
Copyright (c) 2015-2021, Troy D. Hanson http://troydhanson.github.io/uthash/
All rights reserved.
Redistribution and use in source and binary forms, with or without
@@ -26,7 +26,7 @@ SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#ifndef UTRINGBUFFER_H
#define UTRINGBUFFER_H
#define UTRINGBUFFER_VERSION 2.1.0
#define UTRINGBUFFER_VERSION 2.3.0
#include <stdlib.h>
#include <string.h>


@@ -1,5 +1,5 @@
/*
Copyright (c) 2018-2018, Troy D. Hanson http://troydhanson.github.com/uthash/
Copyright (c) 2018-2021, Troy D. Hanson http://troydhanson.github.io/uthash/
All rights reserved.
Redistribution and use in source and binary forms, with or without
@@ -24,7 +24,7 @@ SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#ifndef UTSTACK_H
#define UTSTACK_H
#define UTSTACK_VERSION 2.1.0
#define UTSTACK_VERSION 2.3.0
/*
* This file contains macros to manipulate a singly-linked list as a stack.
@@ -35,9 +35,9 @@ SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
* struct item {
* int id;
* struct item *next;
* }
* };
*
* struct item *stack = NULL:
* struct item *stack = NULL;
*
* int main() {
* int count;


@@ -1,5 +1,5 @@
/*
Copyright (c) 2008-2018, Troy D. Hanson http://troydhanson.github.com/uthash/
Copyright (c) 2008-2021, Troy D. Hanson http://troydhanson.github.io/uthash/
All rights reserved.
Redistribution and use in source and binary forms, with or without
@@ -26,7 +26,7 @@ SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#ifndef UTSTRING_H
#define UTSTRING_H
#define UTSTRING_VERSION 2.1.0
#define UTSTRING_VERSION 2.3.0
#include <stdlib.h>
#include <string.h>


@@ -12,7 +12,7 @@ PROGS = test1 test2 test3 test4 test5 test6 test7 test8 test9 \
test66 test67 test68 test69 test70 test71 test72 test73 \
test74 test75 test76 test77 test78 test79 test80 test81 \
test82 test83 test84 test85 test86 test87 test88 test89 \
test90 test91 test92 test93 test94 test95
test90 test91 test92 test93 test94 test95 test96
CFLAGS += -I$(HASHDIR)
#CFLAGS += -DHASH_BLOOM=16
#CFLAGS += -O2


@@ -7,7 +7,7 @@ test2: make 10-item hash, lookup items with even keys, print
test3: make 10-item hash, delete items with even keys, print others
test4: 10 structs have dual hash handles, separate keys
test5: 10 structs have dual hash handles, lookup evens by alt key
test6: test alt malloc macros (and alt memcmp macro)
test6: test alt malloc macros (and alt key-comparison macro)
test7: test alt malloc macros with 1000 structs so bucket expansion occurs
test8: test num_items counter in UT_hash_handle
test9: test "find" after bucket expansion
@@ -89,10 +89,15 @@ test84: test HASH_REPLACE_STR with char* key
test85: test HASH_OVERHEAD on null and non null hash
test86: test *_APPEND_ELEM / *_PREPEND_ELEM (Thilo Schulz)
test87: test HASH_ADD_INORDER() macro (Thilo Schulz)
test88: test alt memcmp and strlen macros
test88: test alt key-comparison and strlen macros
test89: test code from the tinydtls project
test90: regression-test HASH_ADD_KEYPTR_INORDER (IronBug)
test91: test LL_INSERT_INORDER etc.
test92: HASH_NONFATAL_OOM
test93: alt_fatal
test94: utlist with fields named other than 'next' and 'prev'
test95: utstack
test96: HASH_FUNCTION + HASH_KEYCMP
Other Make targets
================================================================================


@@ -1,25 +1,25 @@
#include <stdio.h> /* gets */
#include <stdio.h> /* printf */
#include <stdlib.h> /* atoi, malloc */
#include <string.h> /* strcpy */
#include "uthash.h"
struct my_struct {
int id; /* key */
char name[10];
char name[21];
UT_hash_handle hh; /* makes this structure hashable */
};
struct my_struct *users = NULL;
void add_user(int user_id, char *name)
void add_user(int user_id, const char *name)
{
struct my_struct *s;
HASH_FIND_INT(users, &user_id, s); /* id already in the hash? */
if (s==NULL) {
s = (struct my_struct*)malloc(sizeof(struct my_struct));
if (s == NULL) {
s = (struct my_struct*)malloc(sizeof *s);
s->id = user_id;
HASH_ADD_INT( users, id, s ); /* id: name of key field */
HASH_ADD_INT(users, id, s); /* id is the key field */
}
strcpy(s->name, name);
}
@@ -28,23 +28,24 @@ struct my_struct *find_user(int user_id)
{
struct my_struct *s;
HASH_FIND_INT( users, &user_id, s ); /* s: output pointer */
HASH_FIND_INT(users, &user_id, s); /* s: output pointer */
return s;
}
void delete_user(struct my_struct *user)
{
HASH_DEL( users, user); /* user: pointer to deletee */
HASH_DEL(users, user); /* user: pointer to deletee */
free(user);
}
void delete_all()
{
struct my_struct *current_user, *tmp;
struct my_struct *current_user;
struct my_struct *tmp;
HASH_ITER(hh, users, current_user, tmp) {
HASH_DEL(users,current_user); /* delete it (users advances to next) */
free(current_user); /* free it */
HASH_DEL(users, current_user); /* delete it (users advances to next) */
free(current_user); /* free it */
}
}
@@ -52,41 +53,45 @@ void print_users()
{
struct my_struct *s;
for(s=users; s != NULL; s=(struct my_struct*)(s->hh.next)) {
for (s = users; s != NULL; s = (struct my_struct*)(s->hh.next)) {
printf("user id %d: name %s\n", s->id, s->name);
}
}
int name_sort(struct my_struct *a, struct my_struct *b)
int by_name(const struct my_struct *a, const struct my_struct *b)
{
return strcmp(a->name,b->name);
return strcmp(a->name, b->name);
}
int id_sort(struct my_struct *a, struct my_struct *b)
int by_id(const struct my_struct *a, const struct my_struct *b)
{
return (a->id - b->id);
}
void sort_by_name()
const char *getl(const char *prompt)
{
HASH_SORT(users, name_sort);
}
void sort_by_id()
{
HASH_SORT(users, id_sort);
static char buf[21];
char *p;
printf("%s? ", prompt); fflush(stdout);
p = fgets(buf, sizeof(buf), stdin);
if (p == NULL || (p = strchr(buf, '\n')) == NULL) {
puts("Invalid input!");
exit(EXIT_FAILURE);
}
*p = '\0';
return buf;
}
int main()
{
char in[10];
int id=1, running=1;
int id = 1;
int running = 1;
struct my_struct *s;
unsigned num_users;
int temp;
while (running) {
printf(" 1. add user\n");
printf(" 2. add/rename user by id\n");
printf(" 2. add or rename user by id\n");
printf(" 3. find user\n");
printf(" 4. delete user\n");
printf(" 5. delete all users\n");
@@ -95,27 +100,20 @@ int main()
printf(" 8. print users\n");
printf(" 9. count users\n");
printf("10. quit\n");
gets(in);
switch(atoi(in)) {
switch (atoi(getl("Command"))) {
case 1:
printf("name?\n");
add_user(id++, gets(in));
add_user(id++, getl("Name (20 char max)"));
break;
case 2:
printf("id?\n");
gets(in);
id = atoi(in);
printf("name?\n");
add_user(id, gets(in));
temp = atoi(getl("ID"));
add_user(temp, getl("Name (20 char max)"));
break;
case 3:
printf("id?\n");
s = find_user(atoi(gets(in)));
s = find_user(atoi(getl("ID to find")));
printf("user: %s\n", s ? s->name : "unknown");
break;
case 4:
printf("id?\n");
s = find_user(atoi(gets(in)));
s = find_user(atoi(getl("ID to delete")));
if (s) {
delete_user(s);
} else {
@@ -126,20 +124,20 @@ int main()
delete_all();
break;
case 6:
sort_by_name();
HASH_SORT(users, by_name);
break;
case 7:
sort_by_id();
HASH_SORT(users, by_id);
break;
case 8:
print_users();
break;
case 9:
num_users=HASH_COUNT(users);
printf("there are %u users\n", num_users);
temp = HASH_COUNT(users);
printf("there are %d users\n", temp);
break;
case 10:
running=0;
running = 0;
break;
}
}


@@ -1,5 +1,5 @@
/*
Copyright (c) 2005-2018, Troy D. Hanson http://troydhanson.github.com/uthash/
Copyright (c) 2005-2021, Troy D. Hanson http://troydhanson.github.io/uthash/
All rights reserved.
Redistribution and use in source and binary forms, with or without


@@ -12,9 +12,11 @@ sub usage {
usage if ((@ARGV == 0) or ($ARGV[0] eq '-h'));
my @exes = glob "$FindBin::Bin/keystat.???";
my @exes = glob "'$FindBin::Bin/keystat.???'";
my %stats;
for my $exe (@exes) {
$exe =~ s/\ /\\ /g;
$stats{$exe} = `$exe @ARGV`;
delete $stats{$exe} if ($? != 0); # omit hash functions that fail to produce stats (nx)
}


@@ -47,5 +47,8 @@ int main()
HASH_FIND(alth,altusers,&i,sizeof(int),tmp);
printf("%d %s in alth\n", i, (tmp != NULL) ? "found" : "not found");
HASH_CLEAR(hh, users);
HASH_CLEAR(alth, altusers);
return 0;
}


@@ -7,15 +7,16 @@
/* Set up macros for alternative malloc/free functions */
#undef uthash_malloc
#undef uthash_free
#undef uthash_memcmp
#undef uthash_strlen
#undef uthash_bzero
#define uthash_malloc(sz) alt_malloc(sz)
#define uthash_free(ptr,sz) alt_free(ptr,sz)
#define uthash_memcmp(a,b,n) alt_memcmp(a,b,n)
#define uthash_strlen(s) ..fail_to_compile..
#define uthash_bzero(a,n) alt_bzero(a,n)
#undef HASH_KEYCMP
#define HASH_KEYCMP(a,b,n) alt_keycmp(a,b,n)
typedef struct example_user_t {
int id;
int cookie;
@@ -41,10 +42,10 @@ static void alt_free(void *ptr, size_t sz)
free(ptr);
}
static int alt_memcmp_count = 0;
static int alt_memcmp(const void *a, const void *b, size_t n)
static int alt_keycmp_count = 0;
static int alt_keycmp(const void *a, const void *b, size_t n)
{
++alt_memcmp_count;
++alt_keycmp_count;
return memcmp(a,b,n);
}
@@ -115,7 +116,7 @@ int main()
#else
assert(alt_bzero_count == 2);
#endif
assert(alt_memcmp_count == 10);
assert(alt_keycmp_count == 10);
assert(alt_malloc_balance == 0);
return 0;
}


@@ -3,7 +3,7 @@
#include "uthash.h"
// this is an example of how to do a LRU cache in C using uthash
// http://troydhanson.github.com/uthash/
// http://troydhanson.github.io/uthash/
// by Jehiah Czebotar 2011 - jehiah@gmail.com
// this code is in the public domain http://unlicense.org/


@@ -8,8 +8,8 @@ int main()
char V_NeedleStr[] = "needle\0s";
long *V_KMP_Table;
long V_FindPos;
size_t V_StartPos;
size_t V_FindCnt;
size_t V_StartPos = 0;
size_t V_FindCnt = 0;
utstring_new(s);
@@ -24,9 +24,6 @@ int main()
if (V_KMP_Table != NULL) {
_utstring_BuildTable(utstring_body(t), utstring_len(t), V_KMP_Table);
V_FindCnt = 0;
V_FindPos = 0;
V_StartPos = 0;
do {
V_FindPos = _utstring_find(utstring_body(s) + V_StartPos,
utstring_len(s) - V_StartPos,


@@ -9,7 +9,7 @@ int main()
long *V_KMP_Table;
long V_FindPos;
size_t V_StartPos;
size_t V_FindCnt;
size_t V_FindCnt = 0;
utstring_new(s);
@@ -24,8 +24,6 @@ int main()
if (V_KMP_Table != NULL) {
_utstring_BuildTableR(utstring_body(t), utstring_len(t), V_KMP_Table);
V_FindCnt = 0;
V_FindPos = 0;
V_StartPos = utstring_len(s) - 1;
do {
V_FindPos = _utstring_findR(utstring_body(s),


@@ -9,22 +9,22 @@ alt_strlen
alt_strlen
alt_strlen
alt_strlen
alt_memcmp
alt_keycmp
alt_strlen
alt_memcmp
alt_keycmp
alt_strlen
alt_memcmp
alt_keycmp
alt_strlen
alt_memcmp
alt_keycmp
alt_strlen
alt_memcmp
alt_keycmp
alt_strlen
alt_memcmp
alt_keycmp
alt_strlen
alt_memcmp
alt_keycmp
alt_strlen
alt_memcmp
alt_keycmp
alt_strlen
alt_memcmp
alt_keycmp
alt_strlen
alt_memcmp
alt_keycmp


@@ -8,9 +8,9 @@
/* This is mostly a copy of test6.c. */
#undef uthash_memcmp
#undef HASH_KEYCMP
#undef uthash_strlen
#define uthash_memcmp(a,b,n) alt_memcmp(a,b,n)
#define HASH_KEYCMP(a,b,n) alt_keycmp(a,b,n)
#define uthash_strlen(s) alt_strlen(s)
typedef struct example_user_t {
@@ -19,9 +19,9 @@ typedef struct example_user_t {
UT_hash_handle hh;
} example_user_t;
static int alt_memcmp(const void *a, const void *b, size_t n)
static int alt_keycmp(const void *a, const void *b, size_t n)
{
puts("alt_memcmp");
puts("alt_keycmp");
return memcmp(a,b,n);
}

View File

@@ -39,43 +39,39 @@ static void alt_fatal(char const * s) {
longjmp(j_buf, 1);
}
static example_user_t * init_user(int need_malloc_cnt) {
users = 0;
static void init_users(int need_malloc_cnt) {
users = NULL;
example_user_t * user = (example_user_t*)malloc(sizeof(example_user_t));
user->id = user_id;
is_fatal = 0;
malloc_cnt = need_malloc_cnt;
/* printf("adding to hash...\n"); */
if (!setjmp(j_buf)) {
HASH_ADD_INT(users, id, user);
} else {
free(user);
}
return user;
}
int main()
{
example_user_t *user;
#define init(a) do { \
} while(0)
example_user_t * user;
user = init_user(3); /* bloom filter must fail */
init_users(3); /* bloom filter must fail */
if (!is_fatal) {
printf("fatal not called after bloom failure\n");
}
user = init_user(2); /* bucket creation must fail */
init_users(2); /* bucket creation must fail */
if (!is_fatal) {
printf("fatal not called after bucket creation failure\n");
}
user = init_user(1); /* table creation must fail */
init_users(1); /* table creation must fail */
if (!is_fatal) {
printf("fatal not called after table creation failure\n");
}
user = init_user(4); /* hash must create OK */
init_users(4); /* hash must create OK */
if (is_fatal) {
printf("fatal error when creating hash normally\n");
/* bad idea to continue running */
@@ -83,19 +79,20 @@ int main()
}
/* let's add users until expansion fails */
users = 0;
users = NULL;
malloc_cnt = 4;
while (1) {
user = (example_user_t*)malloc(sizeof(example_user_t));
user->id = user_id;
if (user_id++ == 1000) {
printf("there is no way 1000 iterations didn't require realloc\n");
break;
}
user = (example_user_t*)malloc(sizeof(example_user_t));
user->id = user_id;
if (!setjmp(j_buf)) {
HASH_ADD_INT(users, id, user);
} else {
free(user);
}
malloc_cnt = 0;
if (malloc_failed) {
if (!is_fatal) {
@@ -108,12 +105,12 @@ int main()
/* we can't really do anything, the hash is not in consistent
* state, so assume this is a success. */
break;
}
malloc_cnt = 0;
}
HASH_CLEAR(hh, users);
printf("End\n");
return 0;
}

40
dependencies/uthash/tests/test96.ans vendored Normal file
View File

@@ -0,0 +1,40 @@
time 56 not found, inserting it
time 7 not found, inserting it
time 10 not found, inserting it
time 39 not found, inserting it
time 82 found with value 10
time 15 found with value 39
time 31 found with value 7
time 26 not found, inserting it
time 51 found with value 39
time 83 not found, inserting it
time 46 found with value 10
time 92 found with value 56
time 49 not found, inserting it
time 25 found with value 49
time 80 found with value 56
time 54 not found, inserting it
time 97 found with value 49
time 9 not found, inserting it
time 34 found with value 10
time 86 found with value 26
time 87 found with value 39
time 28 not found, inserting it
time 13 found with value 49
time 91 found with value 7
time 95 found with value 83
time 63 found with value 39
time 71 found with value 83
time 100 found with value 28
time 44 found with value 56
time 42 found with value 54
time 16 found with value 28
time 32 found with value 56
time 6 found with value 54
time 85 found with value 49
time 40 found with value 28
time 20 found with value 56
time 18 found with value 54
time 99 found with value 39
time 22 found with value 10
time 1 found with value 49

48
dependencies/uthash/tests/test96.c vendored Normal file
View File

@@ -0,0 +1,48 @@
#include <stdio.h>
#include <stdlib.h>
#define HASH_FUNCTION(a,n,hv) (hv = clockface_hash(*(const int*)(a)))
#define HASH_KEYCMP(a,b,n) clockface_neq(*(const int*)(a), *(const int*)(b))
#include "uthash.h"
struct clockface {
int time;
UT_hash_handle hh;
};
int clockface_hash(int time)
{
return (time % 4);
}
int clockface_neq(int t1, int t2)
{
return ((t1 % 12) != (t2 % 12));
}
int main()
{
int random_data[] = {
56, 7, 10, 39, 82, 15, 31, 26, 51, 83,
46, 92, 49, 25, 80, 54, 97, 9, 34, 86,
87, 28, 13, 91, 95, 63, 71, 100, 44, 42,
16, 32, 6, 85, 40, 20, 18, 99, 22, 1
};
struct clockface *times = NULL;
for (int i=0; i < 40; ++i) {
struct clockface *elt = (struct clockface *)malloc(sizeof(*elt));
struct clockface *found = NULL;
elt->time = random_data[i];
HASH_FIND_INT(times, &elt->time, found);
if (found) {
printf("time %d found with value %d\n", elt->time, found->time);
} else {
printf("time %d not found, inserting it\n", elt->time);
HASH_ADD_INT(times, time, elt);
}
}
return 0;
}

View File

@@ -3,11 +3,16 @@
Some ready-2-use/ready-2-extend examples/utils.
All examples are prefixed with their used LANG.
## c-analysed
A feature extractor useful for ML/DL use cases.
It generates CSV files from flow "analyse" events.
Used also by `tests/run_tests.sh` if available.
## c-captured
A capture daemon suitable for low-resource devices.
It saves flows that were guessed/undetected/risky/midstream to a PCAP file for manual analysis.
Basically a combination of `py-flow-undetected-to-pcap` and `py-risky-flow-to-pcap`.
## c-collectd
@@ -17,25 +22,47 @@ A collectd-exec compatible middleware that gathers statistic values from nDPId.
Tiny nDPId JSON dumper. Does not provide any useful functionality besides dumping parsed JSON objects.
## go-dashboard
## c-simple
A discontinued tty UI nDPId dashboard. I've figured out that Go + UI is a bad idea, in particular if performance is a concern.
Integration example that verifies flow timeouts on SIGUSR1.
## js-rt-analyzer
[nDPId-rt-analyzer](https://gitlab.com/verzulli/ndpid-rt-analyzer.git)
## js-rt-analyzer-frontend
[nDPId-rt-analyzer-frontend](https://gitlab.com/verzulli/ndpid-rt-analyzer-frontend.git)
## py-flow-info
Prints prettified information about flow events.
Console friendly, colorful, prettified event printer.
Required by `tests/run_tests.sh`
## py-flow-undetected-to-pcap
## py-machine-learning
Captures and saves undetected flows to a PCAP file.
Use sklearn together with CSVs created with **c-analysed** to train and predict DPI detections.
Try it with: `./examples/py-machine-learning/sklearn_random_forest.py --csv ./ndpi-analysed.csv --proto-class tls.youtube --proto-class tls.github --proto-class tls.spotify --proto-class tls.facebook --proto-class tls.instagram --proto-class tls.doh_dot --proto-class quic --proto-class icmp`
This way you should get 9 different classification classes.
You may notice that some classes e.g. TLS protocol classifications may have a higher false-negative rate.
Unfortunately, I cannot provide any datasets due to privacy concerns.
But you can use a [pre-trained model](https://drive.google.com/file/d/1KEwbP-Gx7KJr54wNoa63I56VI4USCAPL/view?usp=sharing) with `--load-model`.
## py-flow-dashboard
A realtime web based graph using Plotly/Dash.
Probably the most informative example.
## py-flow-multiprocess
Simple Python multiprocess example spawning two worker processes, one connecting to nDPIsrvd and one printing flow IDs to STDOUT.
## py-json-stdout
Dump received and parsed JSON strings.
## py-risky-flow-to-pcap
Captures and saves risky flows to a PCAP file.
Dump received and parsed JSON objects.
## py-schema-validation
@@ -47,7 +74,3 @@ Required by `tests/run_tests.sh`
Validate nDPId JSON strings against internal event semantics.
Required by `tests/run_tests.sh`
## py-ja3-checker
Captures JA3 hashes from nDPIsrvd and checks them against known hashes from [ja3er.com](https://ja3er.com).

View File

@@ -0,0 +1,628 @@
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <syslog.h>
#include <unistd.h>
#include "nDPIsrvd.h"
#include "utils.h"
#define MIN(a, b) ((a) > (b) ? (b) : (a))
#define BUFFER_REMAINING(siz) (NETWORK_BUFFER_MAX_SIZE / 3 - siz)
typedef char csv_buf_t[(NETWORK_BUFFER_MAX_SIZE / 3) + 1];
static int main_thread_shutdown = 0;
static struct nDPIsrvd_socket * sock = NULL;
static char * pidfile = NULL;
static char * serv_optarg = NULL;
static char * user = NULL;
static char * group = NULL;
static char * csv_outfile = NULL;
static FILE * csv_fp = NULL;
#ifdef ENABLE_MEMORY_PROFILING
void nDPIsrvd_memprof_log_alloc(size_t alloc_size)
{
(void)alloc_size;
}
void nDPIsrvd_memprof_log_free(size_t free_size)
{
(void)free_size;
}
void nDPIsrvd_memprof_log(char const * const format, ...)
{
va_list ap;
va_start(ap, format);
fprintf(stderr, "%s", "nDPIsrvd MemoryProfiler: ");
vfprintf(stderr, format, ap);
fprintf(stderr, "%s\n", "");
va_end(ap);
}
#endif
static void nDPIsrvd_write_flow_info_cb(struct nDPIsrvd_socket const * sock,
struct nDPIsrvd_instance const * instance,
struct nDPIsrvd_thread_data const * thread_data,
struct nDPIsrvd_flow const * flow,
void * user_data)
{
(void)sock;
(void)instance;
(void)user_data;
fprintf(stderr,
"[Thread %2d][Flow %5llu][ptr: "
#ifdef __LP64__
"0x%016llx"
#else
"0x%08lx"
#endif
"][last-seen: %13llu][idle-time: %7llu][time-until-timeout: %7llu]\n",
flow->thread_id,
flow->id_as_ull,
#ifdef __LP64__
(unsigned long long int)flow,
#else
(unsigned long int)flow,
#endif
flow->last_seen,
flow->idle_time,
(flow->last_seen + flow->idle_time >= thread_data->most_recent_flow_time
? flow->last_seen + flow->idle_time - thread_data->most_recent_flow_time
: 0));
}
static void nDPIsrvd_verify_flows_cb(struct nDPIsrvd_thread_data const * const thread_data,
struct nDPIsrvd_flow const * const flow,
void * user_data)
{
(void)user_data;
if (thread_data != NULL)
{
if (flow->last_seen + flow->idle_time >= thread_data->most_recent_flow_time)
{
fprintf(stderr,
"Thread %d / %d, Flow %llu verification failed\n",
thread_data->thread_key,
flow->thread_id,
flow->id_as_ull);
}
else
{
fprintf(stderr,
"Thread %d / %d, Flow %llu verification failed, diff: %llu\n",
thread_data->thread_key,
flow->thread_id,
flow->id_as_ull,
thread_data->most_recent_flow_time - (flow->last_seen + flow->idle_time));
}
}
else
{
fprintf(stderr, "Thread [UNKNOWN], Flow %llu verification failed\n", flow->id_as_ull);
}
}
static void sighandler(int signum)
{
struct nDPIsrvd_instance * current_instance;
struct nDPIsrvd_instance * itmp;
int verification_failed = 0;
fflush(csv_fp);
if (signum == SIGUSR1)
{
nDPIsrvd_flow_info(sock, nDPIsrvd_write_flow_info_cb, NULL);
HASH_ITER(hh, sock->instance_table, current_instance, itmp)
{
if (nDPIsrvd_verify_flows(current_instance, nDPIsrvd_verify_flows_cb, NULL) != 0)
{
fprintf(stderr, "Flow verification failed for instance %d\n", current_instance->alias_source_key);
verification_failed = 1;
}
}
if (verification_failed == 0)
{
fprintf(stderr, "%s\n", "Flow verification succeeded.");
}
else
{
/* FATAL! */
exit(EXIT_FAILURE);
}
}
else if (main_thread_shutdown == 0)
{
main_thread_shutdown = 1;
}
}
static void csv_buf_add(csv_buf_t buf, size_t * const csv_buf_used, char const * const str, size_t siz_len)
{
size_t len;
if (siz_len > 0 && str != NULL)
{
len = MIN(BUFFER_REMAINING(*csv_buf_used), siz_len);
if (len == 0)
{
return;
}
strncat(buf, str, len);
}
else
{
len = 0;
}
*csv_buf_used += len;
if (BUFFER_REMAINING(*csv_buf_used) > 0)
{
buf[*csv_buf_used] = ',';
(*csv_buf_used)++;
}
buf[*csv_buf_used] = '\0';
}
static int json_value_to_csv(
struct nDPIsrvd_socket * const sock, csv_buf_t buf, size_t * const csv_buf_used, char const * const json_key, ...)
{
va_list ap;
nDPIsrvd_hashkey key;
struct nDPIsrvd_json_token const * token;
size_t val_length = 0;
char const * val;
int ret = 0;
va_start(ap, json_key);
key = nDPIsrvd_vbuild_jsmn_key(json_key, ap);
va_end(ap);
token = nDPIsrvd_find_token(sock, key);
if (token == NULL)
{
ret++;
}
val = TOKEN_GET_VALUE(sock, token, &val_length);
if (val == NULL)
{
ret++;
}
csv_buf_add(buf, csv_buf_used, val, val_length);
return ret;
}
static int json_array_to_csv(
struct nDPIsrvd_socket * const sock, csv_buf_t buf, size_t * const csv_buf_used, char const * const json_key, ...)
{
va_list ap;
nDPIsrvd_hashkey key;
struct nDPIsrvd_json_token const * token;
int ret = 0;
va_start(ap, json_key);
key = nDPIsrvd_vbuild_jsmn_key(json_key, ap);
va_end(ap);
token = nDPIsrvd_find_token(sock, key);
if (token == NULL)
{
ret++;
csv_buf_add(buf, csv_buf_used, NULL, 0);
}
{
size_t token_count = 0;
struct nDPIsrvd_json_token next = {};
csv_buf_add(buf, csv_buf_used, "\"", 1);
buf[--(*csv_buf_used)] = '\0';
while (nDPIsrvd_token_iterate(sock, token, &next) == 0)
{
size_t val_length = 0;
char const * const val = TOKEN_GET_VALUE(sock, &next, &val_length);
csv_buf_add(buf, csv_buf_used, val, val_length);
token_count++;
}
if (token_count > 0)
{
buf[--(*csv_buf_used)] = '\0';
}
csv_buf_add(buf, csv_buf_used, "\"", 1);
}
return ret;
}
static enum nDPIsrvd_callback_return simple_json_callback(struct nDPIsrvd_socket * const sock,
struct nDPIsrvd_instance * const instance,
struct nDPIsrvd_thread_data * const thread_data,
struct nDPIsrvd_flow * const flow)
{
csv_buf_t buf;
size_t csv_buf_used = 0;
(void)instance;
(void)thread_data;
if (flow == NULL)
{
return CALLBACK_OK;
}
struct nDPIsrvd_json_token const * const flow_event_name = TOKEN_GET_SZ(sock, "flow_event_name");
if (TOKEN_VALUE_EQUALS_SZ(sock, flow_event_name, "analyse") == 0)
{
return CALLBACK_OK;
}
if (TOKEN_GET_SZ(sock, "data_analysis") == NULL)
{
return CALLBACK_ERROR;
}
buf[0] = '\0';
json_value_to_csv(sock, buf, &csv_buf_used, "flow_datalink", NULL);
json_value_to_csv(sock, buf, &csv_buf_used, "l3_proto", NULL);
json_value_to_csv(sock, buf, &csv_buf_used, "src_ip", NULL);
json_value_to_csv(sock, buf, &csv_buf_used, "dst_ip", NULL);
json_value_to_csv(sock, buf, &csv_buf_used, "l4_proto", NULL);
json_value_to_csv(sock, buf, &csv_buf_used, "src_port", NULL);
json_value_to_csv(sock, buf, &csv_buf_used, "dst_port", NULL);
if (json_value_to_csv(sock, buf, &csv_buf_used, "flow_state", NULL) != 0 ||
json_value_to_csv(sock, buf, &csv_buf_used, "flow_src_packets_processed", NULL) != 0 ||
json_value_to_csv(sock, buf, &csv_buf_used, "flow_dst_packets_processed", NULL) != 0 ||
json_value_to_csv(sock, buf, &csv_buf_used, "flow_first_seen", NULL) != 0 ||
json_value_to_csv(sock, buf, &csv_buf_used, "flow_src_last_pkt_time", NULL) != 0 ||
json_value_to_csv(sock, buf, &csv_buf_used, "flow_dst_last_pkt_time", NULL) != 0 ||
json_value_to_csv(sock, buf, &csv_buf_used, "flow_src_min_l4_payload_len", NULL) != 0 ||
json_value_to_csv(sock, buf, &csv_buf_used, "flow_dst_min_l4_payload_len", NULL) != 0 ||
json_value_to_csv(sock, buf, &csv_buf_used, "flow_src_max_l4_payload_len", NULL) != 0 ||
json_value_to_csv(sock, buf, &csv_buf_used, "flow_dst_max_l4_payload_len", NULL) != 0 ||
json_value_to_csv(sock, buf, &csv_buf_used, "flow_src_tot_l4_payload_len", NULL) != 0 ||
json_value_to_csv(sock, buf, &csv_buf_used, "flow_dst_tot_l4_payload_len", NULL) != 0 ||
json_value_to_csv(sock, buf, &csv_buf_used, "midstream", NULL) != 0)
{
return CALLBACK_ERROR;
}
if (json_value_to_csv(sock, buf, &csv_buf_used, "data_analysis", "iat", "min", NULL) != 0 ||
json_value_to_csv(sock, buf, &csv_buf_used, "data_analysis", "iat", "avg", NULL) != 0 ||
json_value_to_csv(sock, buf, &csv_buf_used, "data_analysis", "iat", "max", NULL) != 0 ||
json_value_to_csv(sock, buf, &csv_buf_used, "data_analysis", "iat", "stddev", NULL) != 0 ||
json_value_to_csv(sock, buf, &csv_buf_used, "data_analysis", "iat", "var", NULL) != 0 ||
json_value_to_csv(sock, buf, &csv_buf_used, "data_analysis", "iat", "ent", NULL) != 0)
{
return CALLBACK_ERROR;
}
if (json_array_to_csv(sock, buf, &csv_buf_used, "data_analysis", "iat", "data", NULL) != 0)
{
return CALLBACK_ERROR;
}
if (json_value_to_csv(sock, buf, &csv_buf_used, "data_analysis", "pktlen", "min", NULL) != 0 ||
json_value_to_csv(sock, buf, &csv_buf_used, "data_analysis", "pktlen", "avg", NULL) != 0 ||
json_value_to_csv(sock, buf, &csv_buf_used, "data_analysis", "pktlen", "max", NULL) != 0 ||
json_value_to_csv(sock, buf, &csv_buf_used, "data_analysis", "pktlen", "stddev", NULL) != 0 ||
json_value_to_csv(sock, buf, &csv_buf_used, "data_analysis", "pktlen", "var", NULL) != 0 ||
json_value_to_csv(sock, buf, &csv_buf_used, "data_analysis", "pktlen", "ent", NULL) != 0)
{
return CALLBACK_ERROR;
}
if (json_array_to_csv(sock, buf, &csv_buf_used, "data_analysis", "pktlen", "data", NULL) != 0)
{
return CALLBACK_ERROR;
}
if (json_array_to_csv(sock, buf, &csv_buf_used, "data_analysis", "bins", "c_to_s", NULL) != 0)
{
return CALLBACK_ERROR;
}
if (json_array_to_csv(sock, buf, &csv_buf_used, "data_analysis", "bins", "s_to_c", NULL) != 0)
{
return CALLBACK_ERROR;
}
if (json_array_to_csv(sock, buf, &csv_buf_used, "data_analysis", "directions", NULL) != 0)
{
return CALLBACK_ERROR;
}
if (json_array_to_csv(sock, buf, &csv_buf_used, "data_analysis", "entropies", NULL) != 0)
{
return CALLBACK_ERROR;
}
json_value_to_csv(sock, buf, &csv_buf_used, "ndpi", "proto", NULL);
json_value_to_csv(sock, buf, &csv_buf_used, "ndpi", "proto_id", NULL);
json_value_to_csv(sock, buf, &csv_buf_used, "ndpi", "encrypted", NULL);
json_value_to_csv(sock, buf, &csv_buf_used, "ndpi", "breed", NULL);
json_value_to_csv(sock, buf, &csv_buf_used, "ndpi", "category", NULL);
{
struct nDPIsrvd_json_token const * const token = TOKEN_GET_SZ(sock, "ndpi", "confidence");
struct nDPIsrvd_json_token const * current = NULL;
int next_child_index = -1;
if (token == NULL)
{
csv_buf_add(buf, &csv_buf_used, NULL, 0);
csv_buf_add(buf, &csv_buf_used, NULL, 0);
}
else
{
while ((current = nDPIsrvd_get_next_token(sock, token, &next_child_index)) != NULL)
{
size_t key_length = 0, value_length = 0;
char const * const key = TOKEN_GET_KEY(sock, current, &key_length);
char const * const value = TOKEN_GET_VALUE(sock, current, &value_length);
csv_buf_add(buf, &csv_buf_used, key, key_length);
csv_buf_add(buf, &csv_buf_used, value, value_length);
}
}
}
{
csv_buf_t risks;
size_t csv_risks_used = 0;
struct nDPIsrvd_json_token const * const flow_risk = TOKEN_GET_SZ(sock, "ndpi", "flow_risk");
struct nDPIsrvd_json_token const * current = NULL;
int next_child_index = -1;
risks[csv_risks_used++] = '"';
risks[csv_risks_used] = '\0';
if (flow_risk != NULL)
{
while ((current = nDPIsrvd_get_next_token(sock, flow_risk, &next_child_index)) != NULL)
{
size_t key_length = 0;
char const * const key = TOKEN_GET_KEY(sock, current, &key_length);
csv_buf_add(risks, &csv_risks_used, key, key_length);
}
}
if (csv_risks_used > 1)
{
risks[csv_risks_used - 1] = '"';
}
else if (BUFFER_REMAINING(csv_risks_used) > 0)
{
risks[csv_risks_used++] = '"';
}
csv_buf_add(buf, &csv_buf_used, risks, csv_risks_used);
}
if (csv_buf_used > 0 && buf[csv_buf_used - 1] == ',')
{
buf[--csv_buf_used] = '\0';
}
fprintf(csv_fp, "%.*s\n", (int)csv_buf_used, buf);
return CALLBACK_OK;
}
static void print_usage(char const * const arg0)
{
static char const usage[] =
"Usage: %s "
"[-d] [-p pidfile] [-s host]\n"
"\t \t[-u user] [-g group] [-o csv-outfile]\n\n"
"\t-d\tForking into background after initialization.\n"
"\t-p\tWrite the daemon PID to the given file path.\n"
"\t-s\tDestination where nDPIsrvd is listening on.\n"
"\t \tCan be either a path to UNIX socket or an IPv4/TCP-Port IPv6/TCP-Port tuple.\n"
"\t-u\tChange user.\n"
"\t-g\tChange group.\n"
"\t-o\tSpecify the CSV output file for analysis results\n\n";
fprintf(stderr, usage, arg0);
}
static int parse_options(int argc, char ** argv)
{
int opt;
while ((opt = getopt(argc, argv, "hdp:s:u:g:o:")) != -1)
{
switch (opt)
{
case 'd':
daemonize_enable();
break;
case 'p':
free(pidfile);
pidfile = strdup(optarg);
break;
case 's':
free(serv_optarg);
serv_optarg = strdup(optarg);
break;
case 'u':
free(user);
user = strdup(optarg);
break;
case 'g':
free(group);
group = strdup(optarg);
break;
case 'o':
free(csv_outfile);
csv_outfile = strdup(optarg);
break;
default:
print_usage(argv[0]);
return 1;
}
}
if (csv_outfile == NULL)
{
fprintf(stderr, "%s: Missing CSV output file (`-o')\n", argv[0]);
return 1;
}
opt = 0;
if (access(csv_outfile, F_OK) != 0 && errno == ENOENT)
{
opt = 1;
}
csv_fp = fopen(csv_outfile, "a+");
if (csv_fp == NULL)
{
fprintf(stderr, "%s: Could not open file `%s' for appending: %s\n", argv[0], csv_outfile, strerror(errno));
return 1;
}
if (opt != 0)
{
fprintf(csv_fp,
"flow_datalink,l3_proto,src_ip,dst_ip,l4_proto,src_port,dst_port,flow_state,flow_src_packets_processed,"
"flow_dst_packets_processed,flow_first_seen,flow_src_last_pkt_time,flow_dst_last_pkt_time,flow_src_min_"
"l4_payload_len,flow_dst_min_l4_payload_len,flow_src_max_l4_payload_len,flow_dst_max_l4_payload_len,"
"flow_src_tot_l4_payload_len,flow_dst_tot_l4_payload_len,midstream,iat_min,iat_avg,iat_max,iat_stddev,"
"iat_var,iat_ent,iat_data,pktlen_min,pktlen_avg,pktlen_max,pktlen_stddev,pktlen_var,pktlen_ent,pktlen_"
"data,bins_c_to_s,bins_s_to_c,directions,entropies,proto,proto_id,encrypted,breed,category,"
"confidence_id,confidence,risks\n");
}
if (serv_optarg == NULL)
{
serv_optarg = strdup(DISTRIBUTOR_UNIX_SOCKET);
}
if (nDPIsrvd_setup_address(&sock->address, serv_optarg) != 0)
{
fprintf(stderr, "%s: Could not parse address `%s'\n", argv[0], serv_optarg);
return 1;
}
if (optind < argc)
{
fprintf(stderr, "Unexpected argument after options\n\n");
print_usage(argv[0]);
return 1;
}
return 0;
}
static int mainloop(void)
{
enum nDPIsrvd_read_return read_ret = READ_OK;
while (main_thread_shutdown == 0)
{
read_ret = nDPIsrvd_read(sock);
if (errno == EINTR)
{
continue;
}
if (read_ret == READ_TIMEOUT)
{
printf("No data received during the last %llu second(s).\n",
(long long unsigned int)sock->read_timeout.tv_sec);
continue;
}
if (read_ret != READ_OK)
{
printf("Could not read from socket: %s\n", nDPIsrvd_enum_to_string(read_ret));
break;
}
enum nDPIsrvd_parse_return parse_ret = nDPIsrvd_parse_all(sock);
if (parse_ret != PARSE_NEED_MORE_DATA)
{
printf("Could not parse json string: %s\n", nDPIsrvd_enum_to_string(parse_ret));
break;
}
}
if (main_thread_shutdown == 0 && read_ret != READ_OK)
{
return 1;
}
return 0;
}
int main(int argc, char ** argv)
{
sock = nDPIsrvd_socket_init(0, 0, 0, 0, simple_json_callback, NULL, NULL);
if (sock == NULL)
{
return 1;
}
if (parse_options(argc, argv) != 0)
{
return 1;
}
printf("Recv buffer size: %u\n", NETWORK_BUFFER_MAX_SIZE);
printf("Connecting to `%s'..\n", serv_optarg);
if (nDPIsrvd_connect(sock) != CONNECT_OK)
{
fprintf(stderr, "%s: nDPIsrvd socket connect to %s failed!\n", argv[0], serv_optarg);
nDPIsrvd_socket_free(&sock);
return 1;
}
signal(SIGUSR1, sighandler);
signal(SIGINT, sighandler);
signal(SIGTERM, sighandler);
signal(SIGPIPE, sighandler);
if (daemonize_with_pidfile(pidfile) != 0)
{
return 1;
}
openlog("nDPIsrvd-analyzed", LOG_CONS, LOG_DAEMON);
errno = 0;
if (user != NULL && change_user_group(user, group, pidfile, csv_outfile /* :D */, NULL) != 0)
{
if (errno != 0)
{
syslog(LOG_DAEMON | LOG_ERR, "Change user/group failed: %s", strerror(errno));
}
else
{
syslog(LOG_DAEMON | LOG_ERR, "Change user/group failed.");
}
return 1;
}
if (nDPIsrvd_set_read_timeout(sock, 180, 0) != 0)
{
return 1;
}
int retval = mainloop();
nDPIsrvd_socket_free(&sock);
daemonize_shutdown(pidfile);
closelog();
fflush(csv_fp);
fclose(csv_fp);
return retval;
}

View File

@@ -6,6 +6,7 @@
#include <netinet/udp.h>
#include <pcap/pcap.h>
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
@@ -16,6 +17,9 @@
#include <time.h>
#include <unistd.h>
#include <ndpi_typedefs.h>
#include <ndpi_api.h>
#include "nDPIsrvd.h"
#include "utarray.h"
#include "utils.h"
@@ -29,7 +33,8 @@ struct packet_data
nDPIsrvd_ull packet_ts_usec;
nDPIsrvd_ull packet_len;
int base64_packet_size;
union {
union
{
char * base64_packet;
char const * base64_packet_const;
};
@@ -45,6 +50,7 @@ struct flow_user_data
uint8_t midstream;
nDPIsrvd_ull flow_datalink;
nDPIsrvd_ull flow_max_packets;
nDPIsrvd_ull flow_tot_l4_payload_len;
UT_array * packets;
};
@@ -61,8 +67,32 @@ static char * group = NULL;
static char * datadir = NULL;
static uint8_t process_guessed = 0;
static uint8_t process_undetected = 0;
static uint8_t process_risky = 0;
static ndpi_risk process_risky = NDPI_NO_RISK;
static uint8_t process_midstream = 0;
static uint8_t ignore_empty_flows = 0;
#ifdef ENABLE_MEMORY_PROFILING
void nDPIsrvd_memprof_log_alloc(size_t alloc_size)
{
(void)alloc_size;
}
void nDPIsrvd_memprof_log_free(size_t free_size)
{
(void)free_size;
}
void nDPIsrvd_memprof_log(char const * const format, ...)
{
va_list ap;
va_start(ap, format);
fprintf(stderr, "%s", "nDPIsrvd MemoryProfiler: ");
vfprintf(stderr, format, ap);
fprintf(stderr, "%s\n", "");
va_end(ap);
}
#endif
static void packet_data_copy(void * dst, const void * src)
{
@@ -93,6 +123,35 @@ static void packet_data_dtor(void * elt)
static const UT_icd packet_data_icd = {sizeof(struct packet_data), NULL, packet_data_copy, packet_data_dtor};
static void set_ndpi_risk(ndpi_risk * const risk, nDPIsrvd_ull risk_to_add)
{
if (risk_to_add == 0)
{
*risk = (ndpi_risk)-1;
}
else
{
*risk |= 1ull << --risk_to_add;
}
}
static void unset_ndpi_risk(ndpi_risk * const risk, nDPIsrvd_ull risk_to_del)
{
if (risk_to_del == 0)
{
*risk = 0;
}
else
{
*risk &= ~(1ull << --risk_to_del);
}
}
static int has_ndpi_risk(ndpi_risk * const risk, nDPIsrvd_ull risk_to_check)
{
return (*risk & (1ull << --risk_to_check)) != 0;
}
static char * generate_pcap_filename(struct nDPIsrvd_flow const * const flow,
struct flow_user_data const * const flow_user,
char * const dest,
@@ -288,11 +347,16 @@ static enum nDPIsrvd_conversion_return perror_ull(enum nDPIsrvd_conversion_retur
}
static enum nDPIsrvd_callback_return captured_json_callback(struct nDPIsrvd_socket * const sock,
struct nDPIsrvd_instance * const instance,
struct nDPIsrvd_thread_data * const thread_data,
struct nDPIsrvd_flow * const flow)
{
(void)instance;
(void)thread_data;
if (flow == NULL)
{
return CALLBACK_OK; // We do not care for non flow/packet-flow events for NOW.
return CALLBACK_OK; // We do not care about non-flow events for now, except packet-flow events.
}
struct flow_user_data * const flow_user = (struct flow_user_data *)flow->flow_user_data;
@@ -302,12 +366,14 @@ static enum nDPIsrvd_callback_return captured_json_callback(struct nDPIsrvd_sock
return CALLBACK_OK;
}
if (TOKEN_VALUE_EQUALS_SZ(TOKEN_GET_SZ(sock, "packet_event_name"), "packet-flow") != 0)
if (TOKEN_VALUE_EQUALS_SZ(sock, TOKEN_GET_SZ(sock, "packet_event_name"), "packet-flow") != 0)
{
struct nDPIsrvd_json_token const * const pkt = TOKEN_GET_SZ(sock, "pkt");
if (pkt == NULL)
{
return CALLBACK_ERROR;
syslog(LOG_DAEMON | LOG_ERR, "%s", "No packet data available.");
syslog(LOG_DAEMON | LOG_ERR, "JSON String: '%.*s'", nDPIsrvd_json_buffer_length(sock), nDPIsrvd_json_buffer_string(sock));
return CALLBACK_OK;
}
if (flow_user->packets == NULL)
{
@@ -315,65 +381,94 @@ static enum nDPIsrvd_callback_return captured_json_callback(struct nDPIsrvd_sock
}
if (flow_user->packets == NULL)
{
syslog(LOG_DAEMON | LOG_ERR, "%s", "Memory allocation for captured packets failed.");
return CALLBACK_ERROR;
}
nDPIsrvd_ull pkt_ts_sec = 0ull;
perror_ull(TOKEN_VALUE_TO_ULL(TOKEN_GET_SZ(sock, "pkt_ts_sec"), &pkt_ts_sec), "pkt_ts_sec");
nDPIsrvd_ull pkt_ts_usec = 0ull;
perror_ull(TOKEN_VALUE_TO_ULL(TOKEN_GET_SZ(sock, "pkt_ts_usec"), &pkt_ts_usec), "pkt_ts_usec");
nDPIsrvd_ull thread_ts_usec = 0ull;
perror_ull(TOKEN_VALUE_TO_ULL(sock, TOKEN_GET_SZ(sock, "thread_ts_usec"), &thread_ts_usec), "thread_ts_usec");
nDPIsrvd_ull pkt_len = 0ull;
perror_ull(TOKEN_VALUE_TO_ULL(TOKEN_GET_SZ(sock, "pkt_len"), &pkt_len), "pkt_len");
perror_ull(TOKEN_VALUE_TO_ULL(sock, TOKEN_GET_SZ(sock, "pkt_caplen"), &pkt_len), "pkt_caplen");
nDPIsrvd_ull pkt_l4_len = 0ull;
perror_ull(TOKEN_VALUE_TO_ULL(TOKEN_GET_SZ(sock, "pkt_l4_len"), &pkt_l4_len), "pkt_l4_len");
perror_ull(TOKEN_VALUE_TO_ULL(sock, TOKEN_GET_SZ(sock, "pkt_l4_len"), &pkt_l4_len), "pkt_l4_len");
nDPIsrvd_ull pkt_l4_offset = 0ull;
perror_ull(TOKEN_VALUE_TO_ULL(TOKEN_GET_SZ(sock, "pkt_l4_offset"), &pkt_l4_offset), "pkt_l4_offset");
perror_ull(TOKEN_VALUE_TO_ULL(sock, TOKEN_GET_SZ(sock, "pkt_l4_offset"), &pkt_l4_offset), "pkt_l4_offset");
struct packet_data pd = {.packet_ts_sec = pkt_ts_sec,
.packet_ts_usec = pkt_ts_usec,
struct packet_data pd = {.packet_ts_sec = thread_ts_usec / (1000 * 1000),
.packet_ts_usec = (thread_ts_usec % (1000 * 1000)),
.packet_len = pkt_len,
.base64_packet_size = pkt->value_length,
.base64_packet_const = pkt->value};
.base64_packet_size = nDPIsrvd_get_token_size(sock, pkt),
.base64_packet_const = nDPIsrvd_get_token_value(sock, pkt)};
utarray_push_back(flow_user->packets, &pd);
}
{
struct nDPIsrvd_json_token const * const flow_event_name = TOKEN_GET_SZ(sock, "flow_event_name");
if (TOKEN_VALUE_EQUALS_SZ(flow_event_name, "new") != 0)
if (flow_event_name != NULL)
{
nDPIsrvd_ull nmb = 0;
perror_ull(TOKEN_VALUE_TO_ULL(sock, TOKEN_GET_SZ(sock, "flow_src_tot_l4_payload_len"), &nmb),
"flow_src_tot_l4_payload_len");
flow_user->flow_tot_l4_payload_len += nmb;
nmb = 0;
perror_ull(TOKEN_VALUE_TO_ULL(sock, TOKEN_GET_SZ(sock, "flow_dst_tot_l4_payload_len"), &nmb),
"flow_dst_tot_l4_payload_len");
flow_user->flow_tot_l4_payload_len += nmb;
}
if (TOKEN_VALUE_EQUALS_SZ(sock, flow_event_name, "new") != 0)
{
flow_user->flow_new_seen = 1;
perror_ull(TOKEN_VALUE_TO_ULL(TOKEN_GET_SZ(sock, "flow_datalink"), &flow_user->flow_datalink),
perror_ull(TOKEN_VALUE_TO_ULL(sock, TOKEN_GET_SZ(sock, "flow_datalink"), &flow_user->flow_datalink),
"flow_datalink");
perror_ull(TOKEN_VALUE_TO_ULL(TOKEN_GET_SZ(sock, "flow_max_packets"), &flow_user->flow_max_packets),
perror_ull(TOKEN_VALUE_TO_ULL(sock, TOKEN_GET_SZ(sock, "flow_max_packets"), &flow_user->flow_max_packets),
"flow_max_packets");
if (TOKEN_VALUE_EQUALS_SZ(TOKEN_GET_SZ(sock, "midstream"), "1") != 0)
if (TOKEN_VALUE_EQUALS_SZ(sock, TOKEN_GET_SZ(sock, "midstream"), "1") != 0)
{
flow_user->midstream = 1;
}
return CALLBACK_OK;
}
else if (TOKEN_VALUE_EQUALS_SZ(flow_event_name, "guessed") != 0)
else if (TOKEN_VALUE_EQUALS_SZ(sock, flow_event_name, "guessed") != 0)
{
flow_user->guessed = 1;
flow_user->detection_finished = 1;
}
else if (TOKEN_VALUE_EQUALS_SZ(flow_event_name, "not-detected") != 0)
else if (TOKEN_VALUE_EQUALS_SZ(sock, flow_event_name, "not-detected") != 0)
{
flow_user->detected = 0;
flow_user->detection_finished = 1;
}
else if (TOKEN_VALUE_EQUALS_SZ(flow_event_name, "detected") != 0)
else if (TOKEN_VALUE_EQUALS_SZ(sock, flow_event_name, "detected") != 0 ||
TOKEN_VALUE_EQUALS_SZ(sock, flow_event_name, "detection-update") != 0)
{
struct nDPIsrvd_json_token const * const flow_risk = TOKEN_GET_SZ(sock, "ndpi", "flow_risk");
struct nDPIsrvd_json_token const * current = NULL;
int next_child_index = -1;
flow_user->detected = 1;
flow_user->detection_finished = 1;
if (TOKEN_GET_SZ(sock, "flow_risk") != NULL)
if (flow_risk != NULL)
{
flow_user->risky = 1;
while ((current = nDPIsrvd_get_next_token(sock, flow_risk, &next_child_index)) != NULL)
{
nDPIsrvd_ull numeric_risk_value = (nDPIsrvd_ull)-1;
if (str_value_to_ull(TOKEN_GET_KEY(sock, current, NULL), &numeric_risk_value) == CONVERSION_OK &&
numeric_risk_value < NDPI_MAX_RISK && has_ndpi_risk(&process_risky, numeric_risk_value) != 0)
{
flow_user->risky = 1;
}
}
}
}
@@ -394,6 +489,7 @@ static enum nDPIsrvd_callback_return captured_json_callback(struct nDPIsrvd_sock
(flow_user->midstream != 0 && process_midstream != 0)))
{
packet_data_print(flow_user->packets);
if (ignore_empty_flows == 0 || flow_user->flow_tot_l4_payload_len > 0)
{
char pcap_filename[PATH_MAX];
if (generate_pcap_filename(flow, flow_user, pcap_filename, sizeof(pcap_filename)) == NULL)
@@ -406,6 +502,7 @@ static enum nDPIsrvd_callback_return captured_json_callback(struct nDPIsrvd_sock
#endif
if (packet_write_pcap_file(flow_user->packets, flow_user->flow_datalink, pcap_filename) != 0)
{
syslog(LOG_DAEMON | LOG_ERR, "Could not packet data to pcap file %s", pcap_filename);
return CALLBACK_ERROR;
}
}
@@ -418,23 +515,91 @@ static enum nDPIsrvd_callback_return captured_json_callback(struct nDPIsrvd_sock
return CALLBACK_OK;
}
static void nDPIsrvd_write_flow_info_cb(struct nDPIsrvd_socket const * sock,
struct nDPIsrvd_instance const * instance,
struct nDPIsrvd_thread_data const * thread_data,
struct nDPIsrvd_flow const * flow,
void * user_data)
{
(void)sock;
(void)instance;
(void)thread_data;
(void)user_data;
struct flow_user_data const * const flow_user = (struct flow_user_data const *)flow->flow_user_data;
fprintf(stderr,
"[Flow %4llu][ptr: "
#ifdef __LP64__
"0x%016llx"
#else
"0x%08lx"
#endif
"][last-seen: %13llu][new-seen: %u][finished: %u][detected: %u][risky: "
"%u][total-L4-payload-length: "
"%4llu][packets-captured: %u]\n",
flow->id_as_ull,
#ifdef __LP64__
(unsigned long long int)flow,
#else
(unsigned long int)flow,
#endif
flow->last_seen,
flow_user->flow_new_seen,
flow_user->detection_finished,
flow_user->detected,
flow_user->risky,
flow_user->flow_tot_l4_payload_len,
flow_user->packets != NULL ? utarray_len(flow_user->packets) : 0);
syslog(LOG_DAEMON,
"[Flow %4llu][ptr: "
#ifdef __LP64__
"0x%016llx"
#else
"0x%08lx"
#endif
"][last-seen: %13llu][new-seen: %u][finished: %u][detected: %u][risky: "
"%u][total-L4-payload-length: "
"%4llu][packets-captured: %u]",
flow->id_as_ull,
#ifdef __LP64__
(unsigned long long int)flow,
#else
(unsigned long int)flow,
#endif
flow->last_seen,
flow_user->flow_new_seen,
flow_user->detection_finished,
flow_user->detected,
flow_user->risky,
flow_user->flow_tot_l4_payload_len,
flow_user->packets != NULL ? utarray_len(flow_user->packets) : 0);
}
static void sighandler(int signum)
{
(void)signum;
if (main_thread_shutdown == 0)
if (signum == SIGUSR1)
{
nDPIsrvd_flow_info(sock, nDPIsrvd_write_flow_info_cb, NULL);
}
else if (main_thread_shutdown == 0)
{
main_thread_shutdown = 1;
}
}
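The updated handler distinguishes SIGUSR1 (dump flow information on demand) from termination signals (set a shutdown flag). A minimal, self-contained sketch of the same pattern follows; unlike the handler above it defers the actual dump to the main loop, since only `volatile sig_atomic_t` flags are safe to touch from a signal handler. All names here are illustrative, not part of nDPId:

```c
#include <signal.h>

/* volatile sig_atomic_t is the only object type the C standard
 * guarantees can be safely written from a signal handler. */
static volatile sig_atomic_t shutdown_requested = 0;
static volatile sig_atomic_t dump_requested = 0;

static void example_sighandler(int signum)
{
    if (signum == SIGUSR1)
    {
        dump_requested = 1; /* main loop performs the actual dump */
    }
    else if (shutdown_requested == 0)
    {
        shutdown_requested = 1;
    }
}

/* Returns 0 on success, -1 if any handler could not be installed. */
static int install_handlers(void)
{
    if (signal(SIGUSR1, example_sighandler) == SIG_ERR ||
        signal(SIGINT, example_sighandler) == SIG_ERR ||
        signal(SIGTERM, example_sighandler) == SIG_ERR)
    {
        return -1;
    }
    return 0;
}
```

The main loop would then check `dump_requested` between reads and reset it after printing, keeping all non-reentrant work out of signal context.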
static void captured_flow_end_callback(struct nDPIsrvd_socket * const sock, struct nDPIsrvd_flow * const flow)
static void captured_flow_cleanup_callback(struct nDPIsrvd_socket * const sock,
struct nDPIsrvd_instance * const instance,
struct nDPIsrvd_thread_data * const thread_data,
struct nDPIsrvd_flow * const flow,
enum nDPIsrvd_cleanup_reason reason)
{
(void)sock;
(void)instance;
(void)thread_data;
(void)reason;
#ifdef VERBOSE
printf("flow %llu end, remaining flows: %u\n", flow->id_as_ull, sock->flow_table->hh.tbl->num_items);
#endif
struct flow_user_data * const ud = (struct flow_user_data *)flow->flow_user_data;
if (ud != NULL && ud->packets != NULL)
{
@@ -443,13 +608,12 @@ static void captured_flow_end_callback(struct nDPIsrvd_socket * const sock, stru
}
}
static int parse_options(int argc, char ** argv)
static void print_usage(char const * const arg0)
{
int opt;
static char const usage[] =
"Usage: %s "
"[-d] [-p pidfile] [-s host] [-r rotate-every-n-seconds] [-u user] [-g group] [-D dir] [-G] [-U] [-R] [-M]\n\n"
"[-d] [-p pidfile] [-s host] [-r rotate-every-n-seconds]\n"
"\t \t[-u user] [-g group] [-D dir] [-G] [-U] [-R risk] [-M]\n\n"
"\t-d\tForking into background after initialization.\n"
"\t-p\tWrite the daemon PID to the given file path.\n"
"\t-s\tDestination where nDPIsrvd is listening on.\n"
@@ -460,10 +624,34 @@ static int parse_options(int argc, char ** argv)
"\t-D\tDatadir - Where to store PCAP files.\n"
"\t-G\tGuessed - Dump guessed flows to a PCAP file.\n"
"\t-U\tUndetected - Dump undetected flows to a PCAP file.\n"
"\t-R\tRisky - Dump risky flows to a PCAP file.\n"
"\t-M\tMidstream - Dump midstream flows to a PCAP file.\n";
"\t-R\tRisky - Dump risky flows to a PCAP file. See additional help below.\n"
"\t-M\tMidstream - Dump midstream flows to a PCAP file.\n"
"\t-E\tEmpty - Ignore flows w/o any layer 4 payload\n\n"
"\tPossible options for `-R' (can be specified multiple times, processed from left to right, ~ disables a "
"risk):\n"
"\t \tExample: -R0 -R~15 would enable all risks except risk with id 15\n";
while ((opt = getopt(argc, argv, "hdp:s:r:u:g:D:GURM")) != -1)
fprintf(stderr, usage, arg0);
#ifndef LIBNDPI_STATIC
fprintf(stderr, "\t\t%d - %s\n", 0, "Capture all risks");
#else
fprintf(stderr, "\t\t%d - %s\n\t\t", 0, "Capture all risks");
#endif
for (int risk = NDPI_NO_RISK + 1; risk < NDPI_MAX_RISK; ++risk)
{
#ifndef LIBNDPI_STATIC
fprintf(stderr, "\t\t%d - %s%s", risk, ndpi_risk2str(risk), (risk == NDPI_MAX_RISK - 1 ? "\n\n" : "\n"));
#else
fprintf(stderr, "%d%s", risk, (risk == NDPI_MAX_RISK - 1 ? "\n" : ","));
#endif
}
}
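The new `-R` semantics (repeatable, processed left to right, `~` disables a risk, `0` selects all) boil down to set/clear operations on a risk bitmask. A simplified sketch of that logic, using a plain 64-bit mask in place of nDPId's `set_ndpi_risk()`/`unset_ndpi_risk()` helpers (all names and the `MAX_RISK` bound here are illustrative):

```c
#include <stdint.h>
#include <stdlib.h>

#define MAX_RISK 64 /* illustrative bound, stands in for NDPI_MAX_RISK */

/* Risk id 0 means "all risks", matching the usage text above. */
static void set_risk(uint64_t * mask, unsigned risk)
{
    if (risk == 0)
        *mask = UINT64_MAX;
    else
        *mask |= (UINT64_C(1) << risk);
}

static void unset_risk(uint64_t * mask, unsigned risk)
{
    if (risk == 0)
        *mask = 0;
    else
        *mask &= ~(UINT64_C(1) << risk);
}

static int has_risk(uint64_t mask, unsigned risk)
{
    return (mask & (UINT64_C(1) << risk)) != 0;
}

/* Process one "-R" argument; returns 0 on success, -1 on a parse error. */
static int process_risk_arg(uint64_t * mask, char const * arg)
{
    char const * value = (arg[0] == '~' ? arg + 1 : arg);
    char * endptr = NULL;
    unsigned long risk = strtoul(value, &endptr, 10);

    if (endptr == value || *endptr != '\0' || risk >= MAX_RISK)
        return -1;
    if (arg[0] == '~')
        unset_risk(mask, (unsigned)risk);
    else
        set_risk(mask, (unsigned)risk);
    return 0;
}
```

With this, `-R0 -R~15` first enables all risks and then clears bit 15, matching the left-to-right processing described in the usage text.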
static int parse_options(int argc, char ** argv)
{
int opt;
while ((opt = getopt(argc, argv, "hdp:s:r:u:g:D:GUR:ME")) != -1)
{
switch (opt)
{
@@ -482,6 +670,7 @@ static int parse_options(int argc, char ** argv)
if (perror_ull(str_value_to_ull(optarg, &pcap_filename_rotation), "pcap_filename_rotation") !=
CONVERSION_OK)
{
fprintf(stderr, "%s: Argument for `-r' is not a number: %s\n", argv[0], optarg);
return 1;
}
break;
@@ -504,13 +693,37 @@ static int parse_options(int argc, char ** argv)
process_undetected = 1;
break;
case 'R':
process_risky = 1;
{
char * value = (optarg[0] == '~' ? optarg + 1 : optarg);
nDPIsrvd_ull risk;
if (perror_ull(str_value_to_ull(value, &risk), "process_risky") != CONVERSION_OK)
{
fprintf(stderr, "%s: Argument for `-R' is not a number: %s\n", argv[0], optarg);
return 1;
}
if (risk >= NDPI_MAX_RISK)
{
fprintf(stderr, "%s: Invalid risk set: %s\n", argv[0], optarg);
return 1;
}
if (optarg[0] == '~')
{
unset_ndpi_risk(&process_risky, risk);
}
else
{
set_ndpi_risk(&process_risky, risk);
}
break;
}
case 'M':
process_midstream = 1;
break;
case 'E':
ignore_empty_flows = 1;
break;
default:
fprintf(stderr, usage, argv[0]);
print_usage(argv[0]);
return 1;
}
}
@@ -540,7 +753,7 @@ static int parse_options(int argc, char ** argv)
if (optind < argc)
{
fprintf(stderr, "Unexpected argument after options\n\n");
fprintf(stderr, usage, argv[0]);
print_usage(argv[0]);
return 1;
}
@@ -564,30 +777,48 @@ static int parse_options(int argc, char ** argv)
static int mainloop(void)
{
enum nDPIsrvd_read_return read_ret = READ_OK;
while (main_thread_shutdown == 0)
{
errno = 0;
enum nDPIsrvd_read_return read_ret = nDPIsrvd_read(sock);
read_ret = nDPIsrvd_read(sock);
if (errno == EINTR)
{
continue;
}
if (read_ret == READ_TIMEOUT)
{
syslog(LOG_DAEMON,
"No data received during the last %llu second(s).\n",
(long long unsigned int)sock->read_timeout.tv_sec);
continue;
}
if (read_ret != READ_OK)
{
syslog(LOG_DAEMON | LOG_ERR, "nDPIsrvd read failed with: %s", nDPIsrvd_enum_to_string(read_ret));
return 1;
syslog(LOG_DAEMON | LOG_ERR, "Could not read from socket: %s", nDPIsrvd_enum_to_string(read_ret));
break;
}
enum nDPIsrvd_parse_return parse_ret = nDPIsrvd_parse_all(sock);
if (parse_ret != PARSE_NEED_MORE_DATA)
{
syslog(LOG_DAEMON | LOG_ERR, "nDPIsrvd parse failed with: %s", nDPIsrvd_enum_to_string(parse_ret));
return 1;
syslog(LOG_DAEMON | LOG_ERR, "Could not parse json string: %s", nDPIsrvd_enum_to_string(parse_ret));
break;
}
}
if (main_thread_shutdown == 0 && read_ret != READ_OK)
{
return 1;
}
return 0;
}
int main(int argc, char ** argv)
{
sock = nDPIsrvd_init(0, sizeof(struct flow_user_data), captured_json_callback, captured_flow_end_callback);
sock = nDPIsrvd_socket_init(
0, 0, 0, sizeof(struct flow_user_data), captured_json_callback, NULL, captured_flow_cleanup_callback);
if (sock == NULL)
{
fprintf(stderr, "%s: nDPIsrvd socket memory allocation failed!\n", argv[0]);
@@ -602,14 +833,14 @@ int main(int argc, char ** argv)
printf("Recv buffer size: %u\n", NETWORK_BUFFER_MAX_SIZE);
printf("Connecting to `%s'..\n", serv_optarg);
enum nDPIsrvd_connect_return connect_ret = nDPIsrvd_connect(sock);
if (connect_ret != CONNECT_OK)
if (nDPIsrvd_connect(sock) != CONNECT_OK)
{
fprintf(stderr, "%s: nDPIsrvd socket connect to %s failed!\n", argv[0], serv_optarg);
nDPIsrvd_free(&sock);
nDPIsrvd_socket_free(&sock);
return 1;
}
signal(SIGUSR1, sighandler);
signal(SIGINT, sighandler);
signal(SIGTERM, sighandler);
signal(SIGPIPE, sighandler);
@@ -637,7 +868,7 @@ int main(int argc, char ** argv)
int retval = mainloop();
nDPIsrvd_free(&sock);
nDPIsrvd_socket_free(&sock);
daemonize_shutdown(pidfile);
closelog();


@@ -0,0 +1,14 @@
HowTo use this
==============
This HowTo assumes that the examples were successfully compiled and installed within the prefix `/usr` on your target machine.
1. Make sure nDPId and Collectd are running.
2. Edit `collectd.conf`, usually found in `/etc`.
3. Add the lines in `plugin_nDPIsrvd.conf` to your `collectd.conf`.
You may need to adapt those lines depending on which command line arguments you supplied to `nDPId`.
4. Reload your Collectd instance.
5. Optional: Install an HTTP server of your choice.
Place the files from `/usr/share/nDPId/nDPIsrvd-collectd/www` somewhere in your www root.
6. Optional: Add `rrdgraph.sh` as a cron job, e.g. `0 * * * * /usr/share/nDPId/nDPIsrvd-collectd/rrdgraph.sh [path-to-the-collectd-rrd-directory] [path-to-your-dpi-wwwroot]`.
This will run `rrdgraph.sh` once per hour. Adjust the schedule until it fits your needs.


@@ -1,3 +1,4 @@
#include <arpa/inet.h>
#include <errno.h>
#include <signal.h>
#include <stdio.h>
@@ -8,8 +9,14 @@
#include <sys/timerfd.h>
#include <unistd.h>
#include <ndpi_typedefs.h>
#include "nDPIsrvd.h"
#define DEFAULT_COLLECTD_EXEC_INST "nDPIsrvd"
#define ERROR_EVENT_ID_MAX 17
//#define GENERATE_TIMESTAMP 1
#define LOG(flags, format, ...) \
if (quiet == 0) \
{ \
@@ -21,28 +28,53 @@
syslog(flags, format, __VA_ARGS__); \
}
static struct nDPIsrvd_socket * sock = NULL;
struct flow_user_data
{
nDPIsrvd_ull last_flow_src_l4_payload_len;
nDPIsrvd_ull last_flow_dst_l4_payload_len;
nDPIsrvd_ull detected_risks;
};
static int main_thread_shutdown = 0;
static int collectd_timerfd = -1;
static pid_t collectd_pid;
static char * serv_optarg = NULL;
static char * collectd_hostname = NULL;
static char * collectd_interval = NULL;
static char * instance_name = NULL;
static nDPIsrvd_ull collectd_interval_ull = 0uL;
static int quiet = 0;
static struct
{
uint64_t json_lines;
uint64_t json_bytes;
uint64_t flow_new_count;
uint64_t flow_end_count;
uint64_t flow_idle_count;
uint64_t flow_update_count;
uint64_t flow_analyse_count;
uint64_t flow_guessed_count;
uint64_t flow_detected_count;
uint64_t flow_detection_update_count;
uint64_t flow_not_detected_count;
uint64_t flow_packet_count;
uint64_t flow_total_bytes;
uint64_t packet_count;
uint64_t packet_flow_count;
uint64_t init_count;
uint64_t reconnect_count;
uint64_t shutdown_count;
uint64_t status_count;
uint64_t error_count_sum;
uint64_t error_count[ERROR_EVENT_ID_MAX];
uint64_t error_unknown_count;
uint64_t flow_src_total_bytes;
uint64_t flow_dst_total_bytes;
uint64_t flow_risky_count;
uint64_t flow_breed_safe_count;
@@ -50,6 +82,7 @@ static struct
uint64_t flow_breed_fun_count;
uint64_t flow_breed_unsafe_count;
uint64_t flow_breed_potentially_dangerous_count;
uint64_t flow_breed_tracker_ads_count;
uint64_t flow_breed_dangerous_count;
uint64_t flow_breed_unrated_count;
uint64_t flow_breed_unknown_count;
@@ -81,18 +114,110 @@ static struct
uint64_t flow_category_mining_count;
uint64_t flow_category_malware_count;
uint64_t flow_category_advertisment_count;
uint64_t flow_category_other_count;
uint64_t flow_category_unknown_count;
uint64_t flow_l3_ip4_count;
uint64_t flow_l3_ip6_count;
uint64_t flow_l3_other_count;
uint64_t flow_l4_tcp_count;
uint64_t flow_l4_udp_count;
uint64_t flow_l4_icmp_count;
uint64_t flow_l4_other_count;
nDPIsrvd_ull flow_risk_count[NDPI_MAX_RISK - 1];
nDPIsrvd_ull flow_risk_unknown_count;
} collectd_statistics = {};
struct json_stat_map
{
char const * const json_key;
uint64_t * const collectd_stat;
};
static struct json_stat_map const flow_event_map[] = {{"new", &collectd_statistics.flow_new_count},
{"end", &collectd_statistics.flow_end_count},
{"idle", &collectd_statistics.flow_idle_count},
{"update", &collectd_statistics.flow_update_count},
{"analyse", &collectd_statistics.flow_analyse_count},
{"guessed", &collectd_statistics.flow_guessed_count},
{"detected", &collectd_statistics.flow_detected_count},
{"detection-update",
&collectd_statistics.flow_detection_update_count},
{"not-detected", &collectd_statistics.flow_not_detected_count}};
static struct json_stat_map const packet_event_map[] = {{"packet", &collectd_statistics.packet_count},
{"packet-flow", &collectd_statistics.packet_flow_count}};
static struct json_stat_map const daemon_event_map[] = {{"init", &collectd_statistics.init_count},
{"reconnect", &collectd_statistics.reconnect_count},
{"shutdown", &collectd_statistics.shutdown_count},
{"status", &collectd_statistics.status_count}};
static struct json_stat_map const breeds_map[] = {{"Safe", &collectd_statistics.flow_breed_safe_count},
{"Acceptable", &collectd_statistics.flow_breed_acceptable_count},
{"Fun", &collectd_statistics.flow_breed_fun_count},
{"Unsafe", &collectd_statistics.flow_breed_unsafe_count},
{"Potentially Dangerous",
&collectd_statistics.flow_breed_potentially_dangerous_count},
{"Tracker/Ads", &collectd_statistics.flow_breed_tracker_ads_count},
{"Dangerous", &collectd_statistics.flow_breed_dangerous_count},
{"Unrated", &collectd_statistics.flow_breed_unrated_count},
{NULL, &collectd_statistics.flow_breed_unknown_count}};
static struct json_stat_map const categories_map[] = {
{"Media", &collectd_statistics.flow_category_media_count},
{"VPN", &collectd_statistics.flow_category_vpn_count},
{"Email", &collectd_statistics.flow_category_email_count},
{"DataTransfer", &collectd_statistics.flow_category_data_transfer_count},
{"Web", &collectd_statistics.flow_category_web_count},
{"SocialNetwork", &collectd_statistics.flow_category_social_network_count},
{"Download-FileTransfer-FileSharing", &collectd_statistics.flow_category_download_count},
{"Game", &collectd_statistics.flow_category_game_count},
{"Chat", &collectd_statistics.flow_category_chat_count},
{"VoIP", &collectd_statistics.flow_category_voip_count},
{"Database", &collectd_statistics.flow_category_database_count},
{"RemoteAccess", &collectd_statistics.flow_category_remote_access_count},
{"Cloud", &collectd_statistics.flow_category_cloud_count},
{"Network", &collectd_statistics.flow_category_network_count},
{"Collaborative", &collectd_statistics.flow_category_collaborative_count},
{"RPC", &collectd_statistics.flow_category_rpc_count},
{"Streaming", &collectd_statistics.flow_category_streaming_count},
{"System", &collectd_statistics.flow_category_system_count},
{"SoftwareUpdate", &collectd_statistics.flow_category_software_update_count},
{"Music", &collectd_statistics.flow_category_music_count},
{"Video", &collectd_statistics.flow_category_video_count},
{"Shopping", &collectd_statistics.flow_category_shopping_count},
{"Productivity", &collectd_statistics.flow_category_productivity_count},
{"FileSharing", &collectd_statistics.flow_category_file_sharing_count},
{"Mining", &collectd_statistics.flow_category_mining_count},
{"Malware", &collectd_statistics.flow_category_malware_count},
{"Advertisement", &collectd_statistics.flow_category_advertisment_count},
{NULL, &collectd_statistics.flow_category_unknown_count}};
#ifdef ENABLE_MEMORY_PROFILING
void nDPIsrvd_memprof_log_alloc(size_t alloc_size)
{
(void)alloc_size;
}
void nDPIsrvd_memprof_log_free(size_t free_size)
{
(void)free_size;
}
void nDPIsrvd_memprof_log(char const * const format, ...)
{
va_list ap;
va_start(ap, format);
fprintf(stderr, "%s", "nDPIsrvd MemoryProfiler: ");
vfprintf(stderr, format, ap);
fprintf(stderr, "%s\n", "");
va_end(ap);
}
#endif
static int set_collectd_timer(void)
{
const time_t interval = collectd_interval_ull * 1000;
@@ -128,22 +253,25 @@ static void sighandler(int signum)
}
}
static int parse_options(int argc, char ** argv)
static int parse_options(int argc, char ** argv, struct nDPIsrvd_socket * const sock)
{
int opt;
static char const usage[] =
"Usage: %s "
"[-s host] [-c hostname] [-i interval] [-q]\n\n"
"[-s host] [-c hostname] [-n collectd-instance-name] [-i interval] [-q]\n\n"
"\t-s\tDestination where nDPIsrvd is listening on.\n"
"\t-c\tCollectd hostname.\n"
"\t \tThis value defaults to the environment variable COLLECTD_HOSTNAME.\n"
"\t-n\tName of the collectd(-exec) instance.\n"
"\t \tDefaults to: " DEFAULT_COLLECTD_EXEC_INST
"\n"
"\t-i\tInterval between print statistics to stdout.\n"
"\t \tThis value defaults to the environment variable COLLECTD_INTERVAL.\n"
"\t-q\tDo not print anything except collectd statistics.\n"
"\t \tAutomatically enabled if environment variables mentioned above are set.\n";
while ((opt = getopt(argc, argv, "hs:c:i:q")) != -1)
while ((opt = getopt(argc, argv, "hs:c:n:i:q")) != -1)
{
switch (opt)
{
@@ -155,6 +283,10 @@ static int parse_options(int argc, char ** argv)
free(collectd_hostname);
collectd_hostname = strdup(optarg);
break;
case 'n':
free(instance_name);
instance_name = strdup(optarg);
break;
case 'i':
free(collectd_interval);
collectd_interval = strdup(optarg);
@@ -182,6 +314,11 @@ static int parse_options(int argc, char ** argv)
}
}
if (instance_name == NULL)
{
instance_name = strdup(DEFAULT_COLLECTD_EXEC_INST);
}
if (collectd_interval == NULL)
{
collectd_interval = getenv("COLLECTD_INTERVAL");
@@ -217,105 +354,117 @@ static int parse_options(int argc, char ** argv)
return 0;
}
#define COLLECTD_PUTVAL_N_FORMAT(name) "PUTVAL %s/nDPId/" #name " interval=%llu %llu:%llu\n"
#ifdef GENERATE_TIMESTAMP
#define COLLECTD_PUTVAL_PREFIX "PUTVAL \"%s/exec-%s/gauge-"
#define COLLECTD_PUTVAL_SUFFIX "\" interval=%llu %llu:%llu\n"
#define COLLECTD_PUTVAL_N(value) \
collectd_hostname, collectd_interval_ull, (unsigned long long int)now, \
collectd_hostname, instance_name, #value, collectd_interval_ull, (unsigned long long int)now, \
(unsigned long long int)collectd_statistics.value
#define COLLECTD_PUTVAL_N2(name, value) \
collectd_hostname, instance_name, name, collectd_interval_ull, (unsigned long long int)now, \
(unsigned long long int)collectd_statistics.value
#else
#define COLLECTD_PUTVAL_PREFIX "PUTVAL \"%s/exec-%s/gauge-"
#define COLLECTD_PUTVAL_SUFFIX "\" interval=%llu N:%llu\n"
#define COLLECTD_PUTVAL_N(value) \
collectd_hostname, instance_name, #value, collectd_interval_ull, (unsigned long long int)collectd_statistics.value
#define COLLECTD_PUTVAL_N2(name, value) \
collectd_hostname, instance_name, name, collectd_interval_ull, (unsigned long long int)collectd_statistics.value
#endif
#define COLLECTD_PUTVAL_N_FORMAT() COLLECTD_PUTVAL_PREFIX "%s" COLLECTD_PUTVAL_SUFFIX
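The macros above assemble lines for collectd's exec-plugin text protocol, where each sample is reported as `PUTVAL "host/plugin-instance/type-name" interval=<n> <time>:<value>` and a timestamp of `N` tells collectd to substitute the current time. A minimal sketch of emitting one such line without the macro machinery (function name and buffer handling are illustrative):

```c
#include <stdio.h>

/* Format one collectd exec-plugin PUTVAL line into buf.
 * Returns the number of characters written (excluding the NUL),
 * or a negative value on encoding error. */
static int format_putval(char * buf, size_t buf_len,
                         char const * host, char const * instance,
                         char const * gauge, unsigned long long interval,
                         unsigned long long value)
{
    /* "N" asks collectd to use the current time as the sample timestamp. */
    return snprintf(buf, buf_len,
                    "PUTVAL \"%s/exec-%s/gauge-%s\" interval=%llu N:%llu\n",
                    host, instance, gauge, interval, value);
}
```

Keeping the prefix/suffix split in macros, as the diff does, lets the same call site switch between the `N:` form and an explicit `%llu:` timestamp when `GENERATE_TIMESTAMP` is defined.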
static void print_collectd_exec_output(void)
{
size_t i;
#ifdef GENERATE_TIMESTAMP
time_t now = time(NULL);
#endif
printf(COLLECTD_PUTVAL_N_FORMAT(flow_new_count) COLLECTD_PUTVAL_N_FORMAT(flow_end_count)
COLLECTD_PUTVAL_N_FORMAT(flow_idle_count) COLLECTD_PUTVAL_N_FORMAT(flow_guessed_count)
COLLECTD_PUTVAL_N_FORMAT(flow_detected_count) COLLECTD_PUTVAL_N_FORMAT(flow_detection_update_count)
COLLECTD_PUTVAL_N_FORMAT(flow_not_detected_count) COLLECTD_PUTVAL_N_FORMAT(flow_packet_count)
COLLECTD_PUTVAL_N_FORMAT(flow_total_bytes) COLLECTD_PUTVAL_N_FORMAT(flow_risky_count),
printf(COLLECTD_PUTVAL_N_FORMAT() COLLECTD_PUTVAL_N_FORMAT() COLLECTD_PUTVAL_N_FORMAT() COLLECTD_PUTVAL_N_FORMAT()
COLLECTD_PUTVAL_N_FORMAT() COLLECTD_PUTVAL_N_FORMAT() COLLECTD_PUTVAL_N_FORMAT()
COLLECTD_PUTVAL_N_FORMAT() COLLECTD_PUTVAL_N_FORMAT() COLLECTD_PUTVAL_N_FORMAT()
COLLECTD_PUTVAL_N_FORMAT() COLLECTD_PUTVAL_N_FORMAT() COLLECTD_PUTVAL_N_FORMAT()
COLLECTD_PUTVAL_N_FORMAT() COLLECTD_PUTVAL_N_FORMAT() COLLECTD_PUTVAL_N_FORMAT()
COLLECTD_PUTVAL_N_FORMAT() COLLECTD_PUTVAL_N_FORMAT() COLLECTD_PUTVAL_N_FORMAT()
COLLECTD_PUTVAL_N_FORMAT(),
COLLECTD_PUTVAL_N(json_lines),
COLLECTD_PUTVAL_N(json_bytes),
COLLECTD_PUTVAL_N(flow_new_count),
COLLECTD_PUTVAL_N(flow_end_count),
COLLECTD_PUTVAL_N(flow_idle_count),
COLLECTD_PUTVAL_N(flow_update_count),
COLLECTD_PUTVAL_N(flow_analyse_count),
COLLECTD_PUTVAL_N(flow_guessed_count),
COLLECTD_PUTVAL_N(flow_detected_count),
COLLECTD_PUTVAL_N(flow_detection_update_count),
COLLECTD_PUTVAL_N(flow_not_detected_count),
COLLECTD_PUTVAL_N(flow_packet_count),
COLLECTD_PUTVAL_N(flow_total_bytes),
COLLECTD_PUTVAL_N(flow_risky_count));
COLLECTD_PUTVAL_N(flow_src_total_bytes),
COLLECTD_PUTVAL_N(flow_dst_total_bytes),
COLLECTD_PUTVAL_N(flow_risky_count),
COLLECTD_PUTVAL_N(packet_count),
COLLECTD_PUTVAL_N(packet_flow_count),
COLLECTD_PUTVAL_N(init_count),
COLLECTD_PUTVAL_N(reconnect_count),
COLLECTD_PUTVAL_N(shutdown_count),
COLLECTD_PUTVAL_N(status_count));
printf(COLLECTD_PUTVAL_N_FORMAT(flow_breed_safe_count) COLLECTD_PUTVAL_N_FORMAT(flow_breed_acceptable_count)
COLLECTD_PUTVAL_N_FORMAT(flow_breed_fun_count) COLLECTD_PUTVAL_N_FORMAT(flow_breed_unsafe_count)
COLLECTD_PUTVAL_N_FORMAT(flow_breed_potentially_dangerous_count)
COLLECTD_PUTVAL_N_FORMAT(flow_breed_dangerous_count)
COLLECTD_PUTVAL_N_FORMAT(flow_breed_unrated_count)
COLLECTD_PUTVAL_N_FORMAT(flow_breed_unknown_count),
printf(COLLECTD_PUTVAL_N_FORMAT() COLLECTD_PUTVAL_N_FORMAT() COLLECTD_PUTVAL_N_FORMAT() COLLECTD_PUTVAL_N_FORMAT()
COLLECTD_PUTVAL_N_FORMAT() COLLECTD_PUTVAL_N_FORMAT() COLLECTD_PUTVAL_N_FORMAT()
COLLECTD_PUTVAL_N_FORMAT() COLLECTD_PUTVAL_N_FORMAT(),
COLLECTD_PUTVAL_N(flow_breed_safe_count),
COLLECTD_PUTVAL_N(flow_breed_acceptable_count),
COLLECTD_PUTVAL_N(flow_breed_fun_count),
COLLECTD_PUTVAL_N(flow_breed_unsafe_count),
COLLECTD_PUTVAL_N(flow_breed_potentially_dangerous_count),
COLLECTD_PUTVAL_N(flow_breed_tracker_ads_count),
COLLECTD_PUTVAL_N(flow_breed_dangerous_count),
COLLECTD_PUTVAL_N(flow_breed_unrated_count),
COLLECTD_PUTVAL_N(flow_breed_unknown_count));
printf(
COLLECTD_PUTVAL_N_FORMAT(flow_category_media_count) COLLECTD_PUTVAL_N_FORMAT(
flow_category_vpn_count) COLLECTD_PUTVAL_N_FORMAT(flow_category_email_count)
COLLECTD_PUTVAL_N_FORMAT(flow_category_data_transfer_count) COLLECTD_PUTVAL_N_FORMAT(
flow_category_web_count) COLLECTD_PUTVAL_N_FORMAT(flow_category_social_network_count)
COLLECTD_PUTVAL_N_FORMAT(flow_category_download_count) COLLECTD_PUTVAL_N_FORMAT(
flow_category_game_count) COLLECTD_PUTVAL_N_FORMAT(flow_category_chat_count)
COLLECTD_PUTVAL_N_FORMAT(flow_category_voip_count) COLLECTD_PUTVAL_N_FORMAT(
flow_category_database_count) COLLECTD_PUTVAL_N_FORMAT(flow_category_remote_access_count)
COLLECTD_PUTVAL_N_FORMAT(flow_category_cloud_count) COLLECTD_PUTVAL_N_FORMAT(
flow_category_network_count) COLLECTD_PUTVAL_N_FORMAT(flow_category_collaborative_count)
COLLECTD_PUTVAL_N_FORMAT(flow_category_rpc_count) COLLECTD_PUTVAL_N_FORMAT(
flow_category_streaming_count) COLLECTD_PUTVAL_N_FORMAT(flow_category_system_count)
COLLECTD_PUTVAL_N_FORMAT(flow_category_software_update_count) COLLECTD_PUTVAL_N_FORMAT(
flow_category_music_count) COLLECTD_PUTVAL_N_FORMAT(flow_category_video_count)
COLLECTD_PUTVAL_N_FORMAT(flow_category_shopping_count)
COLLECTD_PUTVAL_N_FORMAT(flow_category_productivity_count)
COLLECTD_PUTVAL_N_FORMAT(flow_category_file_sharing_count)
COLLECTD_PUTVAL_N_FORMAT(flow_category_mining_count)
COLLECTD_PUTVAL_N_FORMAT(flow_category_malware_count)
COLLECTD_PUTVAL_N_FORMAT(flow_category_advertisment_count)
COLLECTD_PUTVAL_N_FORMAT(flow_category_other_count)
COLLECTD_PUTVAL_N_FORMAT(flow_category_unknown_count),
printf(COLLECTD_PUTVAL_N_FORMAT() COLLECTD_PUTVAL_N_FORMAT() COLLECTD_PUTVAL_N_FORMAT() COLLECTD_PUTVAL_N_FORMAT()
COLLECTD_PUTVAL_N_FORMAT() COLLECTD_PUTVAL_N_FORMAT() COLLECTD_PUTVAL_N_FORMAT()
COLLECTD_PUTVAL_N_FORMAT() COLLECTD_PUTVAL_N_FORMAT() COLLECTD_PUTVAL_N_FORMAT()
COLLECTD_PUTVAL_N_FORMAT() COLLECTD_PUTVAL_N_FORMAT() COLLECTD_PUTVAL_N_FORMAT()
COLLECTD_PUTVAL_N_FORMAT() COLLECTD_PUTVAL_N_FORMAT() COLLECTD_PUTVAL_N_FORMAT()
COLLECTD_PUTVAL_N_FORMAT() COLLECTD_PUTVAL_N_FORMAT() COLLECTD_PUTVAL_N_FORMAT()
COLLECTD_PUTVAL_N_FORMAT() COLLECTD_PUTVAL_N_FORMAT() COLLECTD_PUTVAL_N_FORMAT()
COLLECTD_PUTVAL_N_FORMAT() COLLECTD_PUTVAL_N_FORMAT() COLLECTD_PUTVAL_N_FORMAT()
COLLECTD_PUTVAL_N_FORMAT() COLLECTD_PUTVAL_N_FORMAT()
COLLECTD_PUTVAL_N_FORMAT(),
COLLECTD_PUTVAL_N(flow_category_media_count),
COLLECTD_PUTVAL_N(flow_category_vpn_count),
COLLECTD_PUTVAL_N(flow_category_email_count),
COLLECTD_PUTVAL_N(flow_category_data_transfer_count),
COLLECTD_PUTVAL_N(flow_category_web_count),
COLLECTD_PUTVAL_N(flow_category_social_network_count),
COLLECTD_PUTVAL_N(flow_category_download_count),
COLLECTD_PUTVAL_N(flow_category_game_count),
COLLECTD_PUTVAL_N(flow_category_chat_count),
COLLECTD_PUTVAL_N(flow_category_voip_count),
COLLECTD_PUTVAL_N(flow_category_database_count),
COLLECTD_PUTVAL_N(flow_category_remote_access_count),
COLLECTD_PUTVAL_N(flow_category_cloud_count),
COLLECTD_PUTVAL_N(flow_category_network_count),
COLLECTD_PUTVAL_N(flow_category_collaborative_count),
COLLECTD_PUTVAL_N(flow_category_rpc_count),
COLLECTD_PUTVAL_N(flow_category_streaming_count),
COLLECTD_PUTVAL_N(flow_category_system_count),
COLLECTD_PUTVAL_N(flow_category_software_update_count),
COLLECTD_PUTVAL_N(flow_category_music_count),
COLLECTD_PUTVAL_N(flow_category_video_count),
COLLECTD_PUTVAL_N(flow_category_shopping_count),
COLLECTD_PUTVAL_N(flow_category_productivity_count),
COLLECTD_PUTVAL_N(flow_category_file_sharing_count),
COLLECTD_PUTVAL_N(flow_category_mining_count),
COLLECTD_PUTVAL_N(flow_category_malware_count),
COLLECTD_PUTVAL_N(flow_category_advertisment_count),
COLLECTD_PUTVAL_N(flow_category_other_count),
COLLECTD_PUTVAL_N(flow_category_unknown_count));
COLLECTD_PUTVAL_N(flow_category_media_count),
COLLECTD_PUTVAL_N(flow_category_vpn_count),
COLLECTD_PUTVAL_N(flow_category_email_count),
COLLECTD_PUTVAL_N(flow_category_data_transfer_count),
COLLECTD_PUTVAL_N(flow_category_web_count),
COLLECTD_PUTVAL_N(flow_category_social_network_count),
COLLECTD_PUTVAL_N(flow_category_download_count),
COLLECTD_PUTVAL_N(flow_category_game_count),
COLLECTD_PUTVAL_N(flow_category_chat_count),
COLLECTD_PUTVAL_N(flow_category_voip_count),
COLLECTD_PUTVAL_N(flow_category_database_count),
COLLECTD_PUTVAL_N(flow_category_remote_access_count),
COLLECTD_PUTVAL_N(flow_category_cloud_count),
COLLECTD_PUTVAL_N(flow_category_network_count),
COLLECTD_PUTVAL_N(flow_category_collaborative_count),
COLLECTD_PUTVAL_N(flow_category_rpc_count),
COLLECTD_PUTVAL_N(flow_category_streaming_count),
COLLECTD_PUTVAL_N(flow_category_system_count),
COLLECTD_PUTVAL_N(flow_category_software_update_count),
COLLECTD_PUTVAL_N(flow_category_music_count),
COLLECTD_PUTVAL_N(flow_category_video_count),
COLLECTD_PUTVAL_N(flow_category_shopping_count),
COLLECTD_PUTVAL_N(flow_category_productivity_count),
COLLECTD_PUTVAL_N(flow_category_file_sharing_count),
COLLECTD_PUTVAL_N(flow_category_mining_count),
COLLECTD_PUTVAL_N(flow_category_malware_count),
COLLECTD_PUTVAL_N(flow_category_advertisment_count),
COLLECTD_PUTVAL_N(flow_category_unknown_count));
printf(COLLECTD_PUTVAL_N_FORMAT(flow_l3_ip4_count) COLLECTD_PUTVAL_N_FORMAT(flow_l3_ip6_count)
COLLECTD_PUTVAL_N_FORMAT(flow_l3_other_count) COLLECTD_PUTVAL_N_FORMAT(flow_l4_tcp_count)
COLLECTD_PUTVAL_N_FORMAT(flow_l4_udp_count) COLLECTD_PUTVAL_N_FORMAT(flow_l4_icmp_count)
COLLECTD_PUTVAL_N_FORMAT(flow_l4_other_count),
printf(COLLECTD_PUTVAL_N_FORMAT() COLLECTD_PUTVAL_N_FORMAT() COLLECTD_PUTVAL_N_FORMAT() COLLECTD_PUTVAL_N_FORMAT()
COLLECTD_PUTVAL_N_FORMAT() COLLECTD_PUTVAL_N_FORMAT() COLLECTD_PUTVAL_N_FORMAT()
COLLECTD_PUTVAL_N_FORMAT() COLLECTD_PUTVAL_N_FORMAT() COLLECTD_PUTVAL_N_FORMAT(),
COLLECTD_PUTVAL_N(flow_l3_ip4_count),
COLLECTD_PUTVAL_N(flow_l3_ip6_count),
@@ -323,12 +472,29 @@ static void print_collectd_exec_output(void)
COLLECTD_PUTVAL_N(flow_l4_tcp_count),
COLLECTD_PUTVAL_N(flow_l4_udp_count),
COLLECTD_PUTVAL_N(flow_l4_icmp_count),
COLLECTD_PUTVAL_N(flow_l4_other_count));
COLLECTD_PUTVAL_N(flow_l4_other_count),
COLLECTD_PUTVAL_N(flow_risk_unknown_count),
COLLECTD_PUTVAL_N(error_unknown_count),
COLLECTD_PUTVAL_N(error_count_sum));
for (i = 0; i < ERROR_EVENT_ID_MAX; ++i)
{
char gauge_name[BUFSIZ];
snprintf(gauge_name, sizeof(gauge_name), "error_%zu_count", i);
printf(COLLECTD_PUTVAL_N_FORMAT(), COLLECTD_PUTVAL_N2(gauge_name, error_count[i]));
}
for (i = 0; i < NDPI_MAX_RISK - 1; ++i)
{
char gauge_name[BUFSIZ];
snprintf(gauge_name, sizeof(gauge_name), "flow_risk_%zu_count", i + 1);
printf(COLLECTD_PUTVAL_N_FORMAT(), COLLECTD_PUTVAL_N2(gauge_name, flow_risk_count[i]));
}
memset(&collectd_statistics, 0, sizeof(collectd_statistics));
}
static int mainloop(int epollfd)
static int mainloop(int epollfd, struct nDPIsrvd_socket * const sock)
{
struct epoll_event events[32];
size_t const events_size = sizeof(events) / sizeof(events[0]);
@@ -349,6 +515,16 @@ static int mainloop(int epollfd)
{
uint64_t expirations;
/*
* Check if collectd parent process is still running.
* This may happen if collectd was killed by a signal, e.g. SIGKILL.
*/
if (getppid() != collectd_pid)
{
LOG(LOG_DAEMON | LOG_ERR, "Parent process %d exited. Nothing left to do here, bye.", collectd_pid);
return 1;
}
errno = 0;
if (read(collectd_timerfd, &expirations, sizeof(expirations)) != sizeof(expirations))
{
@@ -386,38 +562,118 @@ static int mainloop(int epollfd)
return 0;
}
static uint64_t get_total_flow_bytes(struct nDPIsrvd_socket * const sock)
static void collectd_map_token_to_stat(struct nDPIsrvd_socket * const sock,
struct nDPIsrvd_json_token const * const token,
struct json_stat_map const * const map,
size_t map_length)
{
nDPIsrvd_ull total_bytes_ull = 0;
size_t i, null_i = map_length;
if (TOKEN_VALUE_TO_ULL(TOKEN_GET_SZ(sock, "flow_tot_l4_data_len"), &total_bytes_ull) == CONVERSION_OK)
if (token == NULL)
{
return total_bytes_ull;
return;
}
else
for (i = 0; i < map_length; ++i)
{
return 0;
if (map[i].json_key == NULL)
{
null_i = i;
continue;
}
if (TOKEN_VALUE_EQUALS(sock, token, map[i].json_key, strlen(map[i].json_key)) != 0)
{
(*map[i].collectd_stat)++;
return;
}
}
if (null_i < map_length)
{
(*map[null_i].collectd_stat)++;
}
}
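`collectd_map_token_to_stat()` walks a table of JSON keys and increments the matching counter, treating a `NULL` key entry as the fallback ("unknown") bucket — the same table that drives `flow_event_map`, `breeds_map`, and `categories_map`. The same table-with-fallback pattern in self-contained form, using plain strings instead of nDPIsrvd tokens (all names here are illustrative):

```c
#include <stdint.h>
#include <string.h>

struct stat_map_entry
{
    char const * key; /* NULL marks the fallback ("unknown") bucket */
    uint64_t * counter;
};

static void map_key_to_stat(char const * key,
                            struct stat_map_entry const * map,
                            size_t map_length)
{
    size_t i, null_i = map_length;

    if (key == NULL)
        return;
    for (i = 0; i < map_length; ++i)
    {
        if (map[i].key == NULL)
        {
            null_i = i; /* remember the fallback, keep scanning */
            continue;
        }
        if (strcmp(key, map[i].key) == 0)
        {
            (*map[i].counter)++;
            return;
        }
    }
    if (null_i < map_length)
        (*map[null_i].counter)++; /* no key matched: count as unknown */
}
```

Adding a new event or category then only requires a new table entry, not another `else if` branch in the callback.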
static enum nDPIsrvd_callback_return captured_json_callback(struct nDPIsrvd_socket * const sock,
static enum nDPIsrvd_callback_return collectd_json_callback(struct nDPIsrvd_socket * const sock,
struct nDPIsrvd_instance * const instance,
struct nDPIsrvd_thread_data * const thread_data,
struct nDPIsrvd_flow * const flow)
{
(void)sock;
(void)flow;
(void)instance;
(void)thread_data;
struct nDPIsrvd_json_token const * const flow_event_name = TOKEN_GET_SZ(sock, "flow_event_name");
struct flow_user_data * flow_user_data = NULL;
if (TOKEN_VALUE_EQUALS_SZ(flow_event_name, "new") != 0)
collectd_statistics.json_lines++;
collectd_statistics.json_bytes += sock->buffer.json_string_length + NETWORK_BUFFER_LENGTH_DIGITS;
struct nDPIsrvd_json_token const * const packet_event_name = TOKEN_GET_SZ(sock, "packet_event_name");
if (packet_event_name != NULL)
{
collectd_statistics.flow_new_count++;
collectd_map_token_to_stat(sock, packet_event_name, packet_event_map, nDPIsrvd_ARRAY_LENGTH(packet_event_map));
}
struct nDPIsrvd_json_token const * const daemon_event_name = TOKEN_GET_SZ(sock, "daemon_event_name");
if (daemon_event_name != NULL)
{
collectd_map_token_to_stat(sock, daemon_event_name, daemon_event_map, nDPIsrvd_ARRAY_LENGTH(daemon_event_map));
}
struct nDPIsrvd_json_token const * const error_event_id = TOKEN_GET_SZ(sock, "error_event_id");
if (error_event_id != NULL)
{
nDPIsrvd_ull error_event_id_ull;
if (TOKEN_VALUE_TO_ULL(sock, error_event_id, &error_event_id_ull) != CONVERSION_OK)
{
return CALLBACK_ERROR;
}
collectd_statistics.error_count_sum++;
if (error_event_id_ull < ERROR_EVENT_ID_MAX)
{
collectd_statistics.error_count[error_event_id_ull]++;
}
else
{
collectd_statistics.error_unknown_count++;
}
}
if (flow != NULL)
{
flow_user_data = (struct flow_user_data *)flow->flow_user_data;
}
if (flow_user_data != NULL)
{
nDPIsrvd_ull total_bytes_ull[2] = {0, 0};
if (TOKEN_VALUE_TO_ULL(sock, TOKEN_GET_SZ(sock, "flow_src_tot_l4_payload_len"), &total_bytes_ull[0]) ==
CONVERSION_OK &&
TOKEN_VALUE_TO_ULL(sock, TOKEN_GET_SZ(sock, "flow_dst_tot_l4_payload_len"), &total_bytes_ull[1]) ==
CONVERSION_OK)
{
collectd_statistics.flow_src_total_bytes +=
total_bytes_ull[0] - flow_user_data->last_flow_src_l4_payload_len;
collectd_statistics.flow_dst_total_bytes +=
total_bytes_ull[1] - flow_user_data->last_flow_dst_l4_payload_len;
flow_user_data->last_flow_src_l4_payload_len = total_bytes_ull[0];
flow_user_data->last_flow_dst_l4_payload_len = total_bytes_ull[1];
}
}
collectd_map_token_to_stat(sock, flow_event_name, flow_event_map, nDPIsrvd_ARRAY_LENGTH(flow_event_map));
if (TOKEN_VALUE_EQUALS_SZ(sock, flow_event_name, "new") != 0)
{
struct nDPIsrvd_json_token const * const l3_proto = TOKEN_GET_SZ(sock, "l3_proto");
if (TOKEN_VALUE_EQUALS_SZ(l3_proto, "ip4") != 0)
if (TOKEN_VALUE_EQUALS_SZ(sock, l3_proto, "ip4") != 0)
{
collectd_statistics.flow_l3_ip4_count++;
}
else if (TOKEN_VALUE_EQUALS_SZ(l3_proto, "ip6") != 0)
else if (TOKEN_VALUE_EQUALS_SZ(sock, l3_proto, "ip6") != 0)
{
collectd_statistics.flow_l3_ip6_count++;
}
@@ -427,15 +683,15 @@ static enum nDPIsrvd_callback_return captured_json_callback(struct nDPIsrvd_sock
}
struct nDPIsrvd_json_token const * const l4_proto = TOKEN_GET_SZ(sock, "l4_proto");
if (TOKEN_VALUE_EQUALS_SZ(l4_proto, "tcp") != 0)
if (TOKEN_VALUE_EQUALS_SZ(sock, l4_proto, "tcp") != 0)
{
collectd_statistics.flow_l4_tcp_count++;
}
else if (TOKEN_VALUE_EQUALS_SZ(l4_proto, "udp") != 0)
else if (TOKEN_VALUE_EQUALS_SZ(sock, l4_proto, "udp") != 0)
{
collectd_statistics.flow_l4_udp_count++;
}
else if (TOKEN_VALUE_EQUALS_SZ(l4_proto, "icmp") != 0)
else if (TOKEN_VALUE_EQUALS_SZ(sock, l4_proto, "icmp") != 0)
{
collectd_statistics.flow_l4_icmp_count++;
}
@@ -444,193 +700,49 @@ static enum nDPIsrvd_callback_return captured_json_callback(struct nDPIsrvd_sock
collectd_statistics.flow_l4_other_count++;
}
}
else if (TOKEN_VALUE_EQUALS_SZ(flow_event_name, "end") != 0)
else if (TOKEN_VALUE_EQUALS_SZ(sock, flow_event_name, "detected") != 0 ||
TOKEN_VALUE_EQUALS_SZ(sock, flow_event_name, "detection-update") != 0 ||
TOKEN_VALUE_EQUALS_SZ(sock, flow_event_name, "update") != 0)
{
collectd_statistics.flow_end_count++;
collectd_statistics.flow_total_bytes += get_total_flow_bytes(sock);
}
else if (TOKEN_VALUE_EQUALS_SZ(flow_event_name, "idle") != 0)
{
collectd_statistics.flow_idle_count++;
collectd_statistics.flow_total_bytes += get_total_flow_bytes(sock);
}
else if (TOKEN_VALUE_EQUALS_SZ(flow_event_name, "guessed") != 0)
{
collectd_statistics.flow_guessed_count++;
}
else if (TOKEN_VALUE_EQUALS_SZ(flow_event_name, "detected") != 0)
{
collectd_statistics.flow_detected_count++;
struct nDPIsrvd_json_token const * const flow_risk = TOKEN_GET_SZ(sock, "ndpi", "flow_risk");
struct nDPIsrvd_json_token const * current = NULL;
int next_child_index = -1;
if (TOKEN_GET_SZ(sock, "flow_risk") != NULL)
if (flow_risk != NULL)
{
collectd_statistics.flow_risky_count++;
if (flow_user_data->detected_risks == 0)
{
collectd_statistics.flow_risky_count++;
}
while ((current = nDPIsrvd_get_next_token(sock, flow_risk, &next_child_index)) != NULL)
{
nDPIsrvd_ull numeric_risk_value = (nDPIsrvd_ull)-1;
if (str_value_to_ull(TOKEN_GET_KEY(sock, current, NULL), &numeric_risk_value) == CONVERSION_OK)
{
if ((flow_user_data->detected_risks & (1ull << numeric_risk_value)) == 0)
{
if (numeric_risk_value < NDPI_MAX_RISK && numeric_risk_value > 0)
{
collectd_statistics.flow_risk_count[numeric_risk_value - 1]++;
}
else
{
collectd_statistics.flow_risk_unknown_count++;
}
flow_user_data->detected_risks |= (1ull << numeric_risk_value);
}
}
}
}
struct nDPIsrvd_json_token const * const breed = TOKEN_GET_SZ(sock, "breed");
if (TOKEN_VALUE_EQUALS_SZ(breed, "Safe") != 0)
{
collectd_statistics.flow_breed_safe_count++;
}
else if (TOKEN_VALUE_EQUALS_SZ(breed, "Acceptable") != 0)
{
collectd_statistics.flow_breed_acceptable_count++;
}
else if (TOKEN_VALUE_EQUALS_SZ(breed, "Fun") != 0)
{
collectd_statistics.flow_breed_fun_count++;
}
else if (TOKEN_VALUE_EQUALS_SZ(breed, "Unsafe") != 0)
{
collectd_statistics.flow_breed_unsafe_count++;
}
else if (TOKEN_VALUE_EQUALS_SZ(breed, "Potentially Dangerous") != 0)
{
collectd_statistics.flow_breed_potentially_dangerous_count++;
}
else if (TOKEN_VALUE_EQUALS_SZ(breed, "Dangerous") != 0)
{
collectd_statistics.flow_breed_dangerous_count++;
}
else if (TOKEN_VALUE_EQUALS_SZ(breed, "Unrated") != 0)
{
collectd_statistics.flow_breed_unrated_count++;
}
else
{
collectd_statistics.flow_breed_unknown_count++;
}
struct nDPIsrvd_json_token const * const breed = TOKEN_GET_SZ(sock, "ndpi", "breed");
collectd_map_token_to_stat(sock, breed, breeds_map, nDPIsrvd_ARRAY_LENGTH(breeds_map));
struct nDPIsrvd_json_token const * const category = TOKEN_GET_SZ(sock, "category");
if (TOKEN_VALUE_EQUALS_SZ(category, "Media") != 0)
{
collectd_statistics.flow_category_media_count++;
}
else if (TOKEN_VALUE_EQUALS_SZ(category, "VPN") != 0)
{
collectd_statistics.flow_category_vpn_count++;
}
else if (TOKEN_VALUE_EQUALS_SZ(category, "Email") != 0)
{
collectd_statistics.flow_category_email_count++;
}
else if (TOKEN_VALUE_EQUALS_SZ(category, "DataTransfer") != 0)
{
collectd_statistics.flow_category_data_transfer_count++;
}
else if (TOKEN_VALUE_EQUALS_SZ(category, "Web") != 0)
{
collectd_statistics.flow_category_web_count++;
}
else if (TOKEN_VALUE_EQUALS_SZ(category, "SocialNetwork") != 0)
{
collectd_statistics.flow_category_social_network_count++;
}
else if (TOKEN_VALUE_EQUALS_SZ(category, "Download-FileTransfer-FileSharing") != 0)
{
collectd_statistics.flow_category_download_count++;
}
else if (TOKEN_VALUE_EQUALS_SZ(category, "Game") != 0)
{
collectd_statistics.flow_category_game_count++;
}
else if (TOKEN_VALUE_EQUALS_SZ(category, "Chat") != 0)
{
collectd_statistics.flow_category_chat_count++;
}
else if (TOKEN_VALUE_EQUALS_SZ(category, "VoIP") != 0)
{
collectd_statistics.flow_category_voip_count++;
}
else if (TOKEN_VALUE_EQUALS_SZ(category, "Database") != 0)
{
collectd_statistics.flow_category_database_count++;
}
else if (TOKEN_VALUE_EQUALS_SZ(category, "RemoteAccess") != 0)
{
collectd_statistics.flow_category_remote_access_count++;
}
else if (TOKEN_VALUE_EQUALS_SZ(category, "Cloud") != 0)
{
collectd_statistics.flow_category_cloud_count++;
}
else if (TOKEN_VALUE_EQUALS_SZ(category, "Network") != 0)
{
collectd_statistics.flow_category_network_count++;
}
else if (TOKEN_VALUE_EQUALS_SZ(category, "Collaborative") != 0)
{
collectd_statistics.flow_category_collaborative_count++;
}
else if (TOKEN_VALUE_EQUALS_SZ(category, "RPC") != 0)
{
collectd_statistics.flow_category_rpc_count++;
}
else if (TOKEN_VALUE_EQUALS_SZ(category, "Streaming") != 0)
{
collectd_statistics.flow_category_streaming_count++;
}
else if (TOKEN_VALUE_EQUALS_SZ(category, "System") != 0)
{
collectd_statistics.flow_category_system_count++;
}
else if (TOKEN_VALUE_EQUALS_SZ(category, "SoftwareUpdate") != 0)
{
collectd_statistics.flow_category_software_update_count++;
}
else if (TOKEN_VALUE_EQUALS_SZ(category, "Music") != 0)
{
collectd_statistics.flow_category_music_count++;
}
else if (TOKEN_VALUE_EQUALS_SZ(category, "Video") != 0)
{
collectd_statistics.flow_category_video_count++;
}
else if (TOKEN_VALUE_EQUALS_SZ(category, "Shopping") != 0)
{
collectd_statistics.flow_category_shopping_count++;
}
else if (TOKEN_VALUE_EQUALS_SZ(category, "Productivity") != 0)
{
collectd_statistics.flow_category_productivity_count++;
}
else if (TOKEN_VALUE_EQUALS_SZ(category, "FileSharing") != 0)
{
collectd_statistics.flow_category_file_sharing_count++;
}
else if (TOKEN_VALUE_EQUALS_SZ(category, "Mining") != 0)
{
collectd_statistics.flow_category_mining_count++;
}
else if (TOKEN_VALUE_EQUALS_SZ(category, "Malware") != 0)
{
collectd_statistics.flow_category_malware_count++;
}
else if (TOKEN_VALUE_EQUALS_SZ(category, "Advertisement") != 0)
{
collectd_statistics.flow_category_advertisment_count++;
}
else if (category != NULL)
{
collectd_statistics.flow_category_other_count++;
}
else
{
collectd_statistics.flow_category_unknown_count++;
}
}
else if (TOKEN_VALUE_EQUALS_SZ(flow_event_name, "detection-update") != 0)
{
collectd_statistics.flow_detection_update_count++;
}
else if (TOKEN_VALUE_EQUALS_SZ(flow_event_name, "not-detected") != 0)
{
collectd_statistics.flow_not_detected_count++;
}
if (TOKEN_GET_SZ(sock, "packet_event_name") != NULL)
{
collectd_statistics.flow_packet_count++;
struct nDPIsrvd_json_token const * const category = TOKEN_GET_SZ(sock, "ndpi", "category");
collectd_map_token_to_stat(sock, category, categories_map, nDPIsrvd_ARRAY_LENGTH(categories_map));
}
return CALLBACK_OK;
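The risk accounting in the callback above counts each nDPI risk at most once per flow by keeping a per-flow bitmask in `flow_user_data`. A minimal, self-contained sketch of that bookkeeping (struct and function names here are illustrative, not from nDPId):

```c
#include <assert.h>
#include <stdint.h>

#define MAX_RISK 64 /* illustrative upper bound, stands in for NDPI_MAX_RISK */

struct flow_risks
{
    uint64_t seen; /* one bit per risk id already counted for this flow */
};

/* Count a risk id exactly once per flow.
 * Returns 1 if the risk was newly counted, 0 if out of range or seen before. */
static int count_risk_once(struct flow_risks * const fr,
                           unsigned int const risk_id,
                           unsigned long long * const risk_counters)
{
    if (risk_id == 0 || risk_id >= MAX_RISK)
    {
        return 0; /* out of range; a real implementation counts it as "unknown" */
    }
    uint64_t const bit = UINT64_C(1) << risk_id;
    if ((fr->seen & bit) != 0)
    {
        return 0; /* already counted for this flow */
    }
    fr->seen |= bit;
    risk_counters[risk_id]++;
    return 1;
}
```

Using a 64 bit mask via `UINT64_C(1)` also avoids the undefined behaviour of shifting a plain `int` by more than 31 bits for high risk ids.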
@@ -638,20 +750,22 @@ static enum nDPIsrvd_callback_return captured_json_callback(struct nDPIsrvd_sock
int main(int argc, char ** argv)
{
int retval = 1;
enum nDPIsrvd_connect_return connect_ret;
int retval = 1, epollfd = -1;
openlog("nDPIsrvd-collectd", LOG_CONS, LOG_DAEMON);
sock = nDPIsrvd_init(0, 0, captured_json_callback, NULL);
struct nDPIsrvd_socket * sock =
nDPIsrvd_socket_init(0, 0, 0, sizeof(struct flow_user_data), collectd_json_callback, NULL, NULL);
if (sock == NULL)
{
LOG(LOG_DAEMON | LOG_ERR, "%s", "nDPIsrvd socket memory allocation failed!");
return 1;
}
if (parse_options(argc, argv) != 0)
if (parse_options(argc, argv, sock) != 0)
{
return 1;
goto failure;
}
if (getenv("COLLECTD_HOSTNAME") == NULL && getenv("COLLECTD_INTERVAL") == NULL)
@@ -666,29 +780,43 @@ int main(int argc, char ** argv)
LOG(LOG_DAEMON | LOG_NOTICE, "Collectd interval: %llu", collectd_interval_ull);
}
enum nDPIsrvd_connect_return connect_ret = nDPIsrvd_connect(sock);
if (setvbuf(stdout, NULL, _IONBF, 0) != 0)
{
LOG(LOG_DAEMON | LOG_ERR,
"Could not set stdout to unbuffered: %s. Collectd may receive stale PUTVALs and complain.",
strerror(errno));
}
connect_ret = nDPIsrvd_connect(sock);
if (connect_ret != CONNECT_OK)
{
LOG(LOG_DAEMON | LOG_ERR, "nDPIsrvd socket connect to %s failed!", serv_optarg);
nDPIsrvd_free(&sock);
return 1;
goto failure;
}
if (nDPIsrvd_set_nonblock(sock) != 0)
{
LOG(LOG_DAEMON | LOG_ERR, "nDPIsrvd set nonblock failed: %s", strerror(errno));
goto failure;
}
signal(SIGINT, sighandler);
signal(SIGTERM, sighandler);
signal(SIGPIPE, SIG_IGN);
int epollfd = epoll_create1(0);
collectd_pid = getppid();
epollfd = epoll_create1(0);
if (epollfd < 0)
{
LOG(LOG_DAEMON | LOG_ERR, "Error creating epoll: %s", strerror(errno));
return 1;
goto failure;
}
if (create_collectd_timer() != 0)
{
LOG(LOG_DAEMON | LOG_ERR, "Error creating timer: %s", strerror(errno));
return 1;
goto failure;
}
{
@@ -696,7 +824,7 @@ int main(int argc, char ** argv)
if (epoll_ctl(epollfd, EPOLL_CTL_ADD, collectd_timerfd, &timer_event) < 0)
{
LOG(LOG_DAEMON | LOG_ERR, "Error adding timer fd to epoll: %s", strerror(errno));
return 1;
goto failure;
}
}
@@ -705,14 +833,20 @@ int main(int argc, char ** argv)
if (epoll_ctl(epollfd, EPOLL_CTL_ADD, sock->fd, &socket_event) < 0)
{
LOG(LOG_DAEMON | LOG_ERR, "Error adding nDPIsrvd socket fd to epoll: %s", strerror(errno));
return 1;
goto failure;
}
}
LOG(LOG_DAEMON | LOG_NOTICE, "%s", "Initialization succeeded.");
retval = mainloop(epollfd);
retval = mainloop(epollfd, sock);
nDPIsrvd_free(&sock);
if (getenv("COLLECTD_INTERVAL") == NULL)
{
print_collectd_exec_output();
}
failure:
nDPIsrvd_socket_free(&sock);
close(collectd_timerfd);
close(epollfd);
closelog();


@@ -3,12 +3,12 @@ LoadPlugin exec
<Plugin exec>
Exec "ndpi" "/usr/bin/nDPIsrvd-collectd"
# Exec "ndpi" "/usr/bin/nDPIsrvd-collectd" "-s" "/tmp/ndpid-distributor.sock"
# Exec "ndpi" "/usr/bin/nDPIsrvd-collectd" "-s" "127.0.0.1:7000"
# Exec "ndpi" "/tmp/nDPIsrvd-collectd" "-s" "127.0.0.1:7000"
</Plugin>
# Uncomment for testing
LoadPlugin write_log
LoadPlugin rrdtool
<Plugin rrdtool>
DataDir "nDPIsrvd-collectd"
</Plugin>
#LoadPlugin write_log
#LoadPlugin rrdtool
#<Plugin rrdtool>
# DataDir "nDPIsrvd-collectd"
#</Plugin>
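For reference, the lines that the collector writes to stdout follow collectd's plain-text protocol, which the exec plugin configured above consumes. A small sketch of such a PUTVAL formatter (the "exec-ndpi" plugin instance and the value name in the test are illustrative, not taken from the nDPId sources):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Format one collectd PUTVAL line: identifier, interval option, then "N:"
 * (meaning "now") followed by the value. Returns the number of characters
 * written, as per snprintf semantics. */
static int format_putval(char * const buf, size_t const buflen,
                         char const * const host,
                         char const * const type_instance,
                         unsigned long long const interval,
                         unsigned long long const value)
{
    return snprintf(buf, buflen,
                    "PUTVAL \"%s/exec-ndpi/gauge-%s\" interval=%llu N:%llu",
                    host, type_instance, interval, value);
}
```

When driven by the exec plugin, one such line per metric is printed every COLLECTD_INTERVAL seconds, which is why stdout must be unbuffered so that values arrive on time.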


@@ -1,65 +0,0 @@
# Add those types to collectd types.db
# e.g. `cat plugin_nDPIsrvd_types.db >>/usr/share/collectd/types.db'
# flow event counters
flow_new_count value:GAUGE:0:U
flow_end_count value:GAUGE:0:U
flow_idle_count value:GAUGE:0:U
flow_guessed_count value:GAUGE:0:U
flow_detected_count value:GAUGE:0:U
flow_detection_update_count value:GAUGE:0:U
flow_not_detected_count value:GAUGE:0:U
# flow additional counters
flow_packet_count value:GAUGE:0:U
flow_total_bytes value:GAUGE:0:U
flow_risky_count value:GAUGE:0:U
# flow breed counters
flow_breed_safe_count value:GAUGE:0:U
flow_breed_acceptable_count value:GAUGE:0:U
flow_breed_fun_count value:GAUGE:0:U
flow_breed_unsafe_count value:GAUGE:0:U
flow_breed_potentially_dangerous_count value:GAUGE:0:U
flow_breed_dangerous_count value:GAUGE:0:U
flow_breed_unrated_count value:GAUGE:0:U
flow_breed_unknown_count value:GAUGE:0:U
# flow category counters
flow_category_media_count value:GAUGE:0:U
flow_category_vpn_count value:GAUGE:0:U
flow_category_email_count value:GAUGE:0:U
flow_category_data_transfer_count value:GAUGE:0:U
flow_category_web_count value:GAUGE:0:U
flow_category_social_network_count value:GAUGE:0:U
flow_category_download_count value:GAUGE:0:U
flow_category_game_count value:GAUGE:0:U
flow_category_chat_count value:GAUGE:0:U
flow_category_voip_count value:GAUGE:0:U
flow_category_database_count value:GAUGE:0:U
flow_category_remote_access_count value:GAUGE:0:U
flow_category_cloud_count value:GAUGE:0:U
flow_category_network_count value:GAUGE:0:U
flow_category_collaborative_count value:GAUGE:0:U
flow_category_rpc_count value:GAUGE:0:U
flow_category_streaming_count value:GAUGE:0:U
flow_category_system_count value:GAUGE:0:U
flow_category_software_update_count value:GAUGE:0:U
flow_category_music_count value:GAUGE:0:U
flow_category_video_count value:GAUGE:0:U
flow_category_shopping_count value:GAUGE:0:U
flow_category_productivity_count value:GAUGE:0:U
flow_category_file_sharing_count value:GAUGE:0:U
flow_category_mining_count value:GAUGE:0:U
flow_category_malware_count value:GAUGE:0:U
flow_category_advertisment_count value:GAUGE:0:U
flow_category_other_count value:GAUGE:0:U
flow_category_unknown_count value:GAUGE:0:U
# flow l3 / l4 counters
flow_l3_ip4_count value:GAUGE:0:U
flow_l3_ip6_count value:GAUGE:0:U
flow_l3_other_count value:GAUGE:0:U
flow_l4_tcp_count value:GAUGE:0:U
flow_l4_udp_count value:GAUGE:0:U
flow_l4_other_count value:GAUGE:0:U

examples/c-collectd/rrdgraph.sh Executable file

@@ -0,0 +1,529 @@
#!/usr/bin/env sh
RRDDIR="${1}"
OUTDIR="${2}"
RRDARGS="--width=800 --height=400"
REQUIRED_RRDCNT=106
if [ -z "${RRDDIR}" ]; then
printf '%s: Missing RRD directory that contains the nDPIsrvd/Collectd files.\n' "${0}"
exit 1
fi
if [ -z "${OUTDIR}" ]; then
printf '%s: Missing output directory that contains the HTML files.\n' "${0}"
exit 1
fi
if [ $(ls -al ${RRDDIR}/gauge-flow_*.rrd | wc -l) -ne ${REQUIRED_RRDCNT} ]; then
printf '%s: Missing some *.rrd files. Expected: %s, Got: %s\n' "${0}" "${REQUIRED_RRDCNT}" "$(ls -al ${RRDDIR}/gauge-flow_*.rrd | wc -l)"
exit 1
fi
if [ ! -r "${OUTDIR}/index.html" -o ! -r "${OUTDIR}/flows.html" -o ! -r "${OUTDIR}/other.html" -o ! -r "${OUTDIR}/detections.html" -o ! -r "${OUTDIR}/categories.html" ]; then
printf '%s: Missing some *.html files.\n' "${0}"
exit 1
fi
TIME_PAST_HOUR="--start=-3600 --end=-0"
TIME_PAST_12HOURS="--start=-43200 --end=-0"
TIME_PAST_DAY="--start=-86400 --end=-0"
TIME_PAST_WEEK="--start=-604800 --end=-0"
TIME_PAST_MONTH="--start=-2419200 --end=-0"
TIME_PAST_3MONTHS="--start=-8035200 --end=-0"
TIME_PAST_YEAR="--start=-31536000 --end=-0"
rrdtool_graph_colorize_missing_data() {
printf 'CDEF:offline=%s,UN,INF,* AREA:offline#B3B3B311:' "${1}"
}
rrdtool_graph_print_cur_min_max_avg() {
printf 'GPRINT:%s:LAST:Current\:%%8.2lf ' "${1}"
printf 'GPRINT:%s:MIN:Minimum\:%%8.2lf ' "${1}"
printf 'GPRINT:%s:MAX:Maximum\:%%8.2lf ' "${1}"
printf 'GPRINT:%s:AVERAGE:Average\:%%8.2lf\\n' "${1}"
}
rrdtool_graph() {
TITLE="${1}"
shift
YAXIS_NAME="${1}"
shift
OUTPNG="${1}"
shift
rrdtool graph ${RRDARGS} -t "${TITLE} (past hour)" -v ${YAXIS_NAME} -Y ${TIME_PAST_HOUR} "${OUTPNG}_past_hour.png" ${*}
rrdtool graph ${RRDARGS} -t "${TITLE} (past 12 hours)" -v ${YAXIS_NAME} -Y ${TIME_PAST_12HOURS} "${OUTPNG}_past_12hours.png" ${*}
rrdtool graph ${RRDARGS} -t "${TITLE} (past day)" -v ${YAXIS_NAME} -Y ${TIME_PAST_DAY} "${OUTPNG}_past_day.png" ${*}
rrdtool graph ${RRDARGS} -t "${TITLE} (past week)" -v ${YAXIS_NAME} -Y ${TIME_PAST_WEEK} "${OUTPNG}_past_week.png" ${*}
rrdtool graph ${RRDARGS} -t "${TITLE} (past month)" -v ${YAXIS_NAME} -Y ${TIME_PAST_MONTH} "${OUTPNG}_past_month.png" ${*}
rrdtool graph ${RRDARGS} -t "${TITLE} (past 3 months)" -v ${YAXIS_NAME} -Y ${TIME_PAST_3MONTHS} "${OUTPNG}_past_3months.png" ${*}
rrdtool graph ${RRDARGS} -t "${TITLE} (past year)" -v ${YAXIS_NAME} -Y ${TIME_PAST_YEAR} "${OUTPNG}_past_year.png" ${*}
}
rrdtool_graph Flows Amount "${OUTDIR}/flows" \
DEF:flows_new=${RRDDIR}/gauge-flow_new_count.rrd:value:AVERAGE \
DEF:flows_end=${RRDDIR}/gauge-flow_end_count.rrd:value:AVERAGE \
DEF:flows_idle=${RRDDIR}/gauge-flow_idle_count.rrd:value:AVERAGE \
$(rrdtool_graph_colorize_missing_data flows_new) \
AREA:flows_new#54EC48::STACK \
AREA:flows_end#ECD748::STACK \
AREA:flows_idle#EC9D48::STACK \
LINE2:flows_new#24BC14:"New." \
$(rrdtool_graph_print_cur_min_max_avg flows_new) \
LINE2:flows_end#C9B215:"End." \
$(rrdtool_graph_print_cur_min_max_avg flows_end) \
LINE2:flows_idle#CC7016:"Idle" \
$(rrdtool_graph_print_cur_min_max_avg flows_idle)
rrdtool_graph Detections Amount "${OUTDIR}/detections" \
DEF:flows_detected=${RRDDIR}/gauge-flow_detected_count.rrd:value:AVERAGE \
DEF:flows_guessed=${RRDDIR}/gauge-flow_guessed_count.rrd:value:AVERAGE \
DEF:flows_not_detected=${RRDDIR}/gauge-flow_not_detected_count.rrd:value:AVERAGE \
DEF:flows_detection_update=${RRDDIR}/gauge-flow_detection_update_count.rrd:value:AVERAGE \
DEF:flows_risky=${RRDDIR}/gauge-flow_risky_count.rrd:value:AVERAGE \
$(rrdtool_graph_colorize_missing_data flows_detected) \
AREA:flows_detected#00bfff::STACK \
AREA:flows_detection_update#a1b8c4::STACK \
AREA:flows_guessed#ffff4d::STACK \
AREA:flows_not_detected#ffa64d::STACK \
AREA:flows_risky#ff4000::STACK \
LINE2:flows_detected#0000ff:"Detected........" \
$(rrdtool_graph_print_cur_min_max_avg flows_detected) \
LINE2:flows_guessed#cccc00:"Guessed........." \
$(rrdtool_graph_print_cur_min_max_avg flows_guessed) \
LINE2:flows_not_detected#ff8000:"Not-Detected...." \
$(rrdtool_graph_print_cur_min_max_avg flows_not_detected) \
LINE2:flows_detection_update#4f6e7d:"Detection-Update" \
$(rrdtool_graph_print_cur_min_max_avg flows_detection_update) \
LINE2:flows_risky#b32d00:"Risky..........." \
$(rrdtool_graph_print_cur_min_max_avg flows_risky)
rrdtool_graph "Traffic (IN/OUT)" Bytes "${OUTDIR}/traffic" \
DEF:total_src_bytes=${RRDDIR}/gauge-flow_src_total_bytes.rrd:value:AVERAGE \
DEF:total_dst_bytes=${RRDDIR}/gauge-flow_dst_total_bytes.rrd:value:AVERAGE \
$(rrdtool_graph_colorize_missing_data total_src_bytes) \
AREA:total_src_bytes#00cc99:"Total-Bytes-Source2Dest":STACK \
$(rrdtool_graph_print_cur_min_max_avg total_src_bytes) \
STACK:total_dst_bytes#669999:"Total-Bytes-Dest2Source" \
$(rrdtool_graph_print_cur_min_max_avg total_dst_bytes)
rrdtool_graph Layer3-Flows Amount "${OUTDIR}/layer3" \
DEF:layer3_ip4=${RRDDIR}/gauge-flow_l3_ip4_count.rrd:value:AVERAGE \
DEF:layer3_ip6=${RRDDIR}/gauge-flow_l3_ip6_count.rrd:value:AVERAGE \
DEF:layer3_other=${RRDDIR}/gauge-flow_l3_other_count.rrd:value:AVERAGE \
$(rrdtool_graph_colorize_missing_data layer3_ip4) \
AREA:layer3_ip4#73d97d::STACK \
AREA:layer3_ip6#66b3ff::STACK \
AREA:layer3_other#bea1c4::STACK \
LINE2:layer3_ip4#21772a:"IPv4." \
$(rrdtool_graph_print_cur_min_max_avg layer3_ip4) \
LINE2:layer3_ip6#0066cc:"IPv6." \
$(rrdtool_graph_print_cur_min_max_avg layer3_ip6) \
LINE2:layer3_other#92629d:"Other" \
$(rrdtool_graph_print_cur_min_max_avg layer3_other)
rrdtool_graph Layer4-Flows Amount "${OUTDIR}/layer4" \
DEF:layer4_tcp=${RRDDIR}/gauge-flow_l4_tcp_count.rrd:value:AVERAGE \
DEF:layer4_udp=${RRDDIR}/gauge-flow_l4_udp_count.rrd:value:AVERAGE \
DEF:layer4_icmp=${RRDDIR}/gauge-flow_l4_icmp_count.rrd:value:AVERAGE \
DEF:layer4_other=${RRDDIR}/gauge-flow_l4_other_count.rrd:value:AVERAGE \
$(rrdtool_graph_colorize_missing_data layer4_tcp) \
AREA:layer4_tcp#73d97d::STACK \
AREA:layer4_udp#66b3ff::STACK \
AREA:layer4_icmp#ee5d9a::STACK \
AREA:layer4_other#bea1c4::STACK \
LINE2:layer4_tcp#21772a:"TCP.." \
$(rrdtool_graph_print_cur_min_max_avg layer4_tcp) \
LINE2:layer4_udp#0066cc:"UDP.." \
$(rrdtool_graph_print_cur_min_max_avg layer4_udp) \
LINE2:layer4_icmp#d01663:"ICMP." \
$(rrdtool_graph_print_cur_min_max_avg layer4_icmp) \
LINE2:layer4_other#83588d:"Other" \
$(rrdtool_graph_print_cur_min_max_avg layer4_other)
rrdtool_graph Flow-Breeds Amount "${OUTDIR}/breed" \
DEF:breed_safe=${RRDDIR}/gauge-flow_breed_safe_count.rrd:value:AVERAGE \
DEF:breed_acceptable=${RRDDIR}/gauge-flow_breed_acceptable_count.rrd:value:AVERAGE \
DEF:breed_fun=${RRDDIR}/gauge-flow_breed_fun_count.rrd:value:AVERAGE \
DEF:breed_unsafe=${RRDDIR}/gauge-flow_breed_unsafe_count.rrd:value:AVERAGE \
DEF:breed_potentially_dangerous=${RRDDIR}/gauge-flow_breed_potentially_dangerous_count.rrd:value:AVERAGE \
DEF:breed_dangerous=${RRDDIR}/gauge-flow_breed_dangerous_count.rrd:value:AVERAGE \
DEF:breed_unrated=${RRDDIR}/gauge-flow_breed_unrated_count.rrd:value:AVERAGE \
DEF:breed_unknown=${RRDDIR}/gauge-flow_breed_unknown_count.rrd:value:AVERAGE \
$(rrdtool_graph_colorize_missing_data breed_safe) \
AREA:breed_safe#4dff4d::STACK \
AREA:breed_acceptable#c2ff33::STACK \
AREA:breed_fun#ffe433::STACK \
AREA:breed_unsafe#ffb133::STACK \
AREA:breed_potentially_dangerous#ff5f33::STACK \
AREA:breed_dangerous#e74b5b::STACK \
AREA:breed_unrated#a5aca0::STACK \
AREA:breed_unknown#d7c1cc::STACK \
LINE2:breed_safe#00e600:"Safe................." \
$(rrdtool_graph_print_cur_min_max_avg breed_safe) \
LINE2:breed_acceptable#8fce00:"Acceptable..........." \
$(rrdtool_graph_print_cur_min_max_avg breed_acceptable) \
LINE2:breed_fun#e6c700:"Fun.................." \
$(rrdtool_graph_print_cur_min_max_avg breed_fun) \
LINE2:breed_unsafe#e68e00:"Unsafe..............." \
$(rrdtool_graph_print_cur_min_max_avg breed_unsafe) \
LINE2:breed_potentially_dangerous#e63200:"Potentially-Dangerous" \
$(rrdtool_graph_print_cur_min_max_avg breed_potentially_dangerous) \
LINE2:breed_dangerous#c61b2b:"Dangerous............" \
$(rrdtool_graph_print_cur_min_max_avg breed_dangerous) \
LINE2:breed_unrated#7e8877:"Unrated.............." \
$(rrdtool_graph_print_cur_min_max_avg breed_unrated) \
LINE2:breed_unknown#ae849a:"Unknown.............." \
$(rrdtool_graph_print_cur_min_max_avg breed_unknown)
rrdtool_graph Flow-Categories 'Amount(SUM)' "${OUTDIR}/categories" \
DEF:cat_ads=${RRDDIR}/gauge-flow_category_advertisment_count.rrd:value:AVERAGE \
DEF:cat_chat=${RRDDIR}/gauge-flow_category_chat_count.rrd:value:AVERAGE \
DEF:cat_cloud=${RRDDIR}/gauge-flow_category_cloud_count.rrd:value:AVERAGE \
DEF:cat_collab=${RRDDIR}/gauge-flow_category_collaborative_count.rrd:value:AVERAGE \
DEF:cat_xfer=${RRDDIR}/gauge-flow_category_data_transfer_count.rrd:value:AVERAGE \
DEF:cat_db=${RRDDIR}/gauge-flow_category_database_count.rrd:value:AVERAGE \
DEF:cat_dl=${RRDDIR}/gauge-flow_category_download_count.rrd:value:AVERAGE \
DEF:cat_mail=${RRDDIR}/gauge-flow_category_email_count.rrd:value:AVERAGE \
DEF:cat_fs=${RRDDIR}/gauge-flow_category_file_sharing_count.rrd:value:AVERAGE \
DEF:cat_game=${RRDDIR}/gauge-flow_category_game_count.rrd:value:AVERAGE \
DEF:cat_mal=${RRDDIR}/gauge-flow_category_malware_count.rrd:value:AVERAGE \
DEF:cat_med=${RRDDIR}/gauge-flow_category_media_count.rrd:value:AVERAGE \
DEF:cat_min=${RRDDIR}/gauge-flow_category_mining_count.rrd:value:AVERAGE \
DEF:cat_mus=${RRDDIR}/gauge-flow_category_music_count.rrd:value:AVERAGE \
DEF:cat_net=${RRDDIR}/gauge-flow_category_network_count.rrd:value:AVERAGE \
DEF:cat_prod=${RRDDIR}/gauge-flow_category_productivity_count.rrd:value:AVERAGE \
DEF:cat_rem=${RRDDIR}/gauge-flow_category_remote_access_count.rrd:value:AVERAGE \
DEF:cat_rpc=${RRDDIR}/gauge-flow_category_rpc_count.rrd:value:AVERAGE \
DEF:cat_shop=${RRDDIR}/gauge-flow_category_shopping_count.rrd:value:AVERAGE \
DEF:cat_soc=${RRDDIR}/gauge-flow_category_social_network_count.rrd:value:AVERAGE \
DEF:cat_soft=${RRDDIR}/gauge-flow_category_software_update_count.rrd:value:AVERAGE \
DEF:cat_str=${RRDDIR}/gauge-flow_category_streaming_count.rrd:value:AVERAGE \
DEF:cat_sys=${RRDDIR}/gauge-flow_category_system_count.rrd:value:AVERAGE \
DEF:cat_ukn=${RRDDIR}/gauge-flow_category_unknown_count.rrd:value:AVERAGE \
DEF:cat_vid=${RRDDIR}/gauge-flow_category_video_count.rrd:value:AVERAGE \
DEF:cat_voip=${RRDDIR}/gauge-flow_category_voip_count.rrd:value:AVERAGE \
DEF:cat_vpn=${RRDDIR}/gauge-flow_category_vpn_count.rrd:value:AVERAGE \
DEF:cat_web=${RRDDIR}/gauge-flow_category_web_count.rrd:value:AVERAGE \
$(rrdtool_graph_colorize_missing_data cat_ads) \
AREA:cat_ads#f1c232:"Advertisement.........." \
$(rrdtool_graph_print_cur_min_max_avg cat_ads) \
STACK:cat_chat#6fa8dc:"Chat..................." \
$(rrdtool_graph_print_cur_min_max_avg cat_chat) \
STACK:cat_cloud#2986cc:"Cloud.................." \
$(rrdtool_graph_print_cur_min_max_avg cat_cloud) \
STACK:cat_collab#3212aa:"Collaborative.........." \
$(rrdtool_graph_print_cur_min_max_avg cat_collab) \
STACK:cat_xfer#16537e:"Data-Transfer.........." \
$(rrdtool_graph_print_cur_min_max_avg cat_xfer) \
STACK:cat_db#cc0000:"Database..............." \
$(rrdtool_graph_print_cur_min_max_avg cat_db) \
STACK:cat_dl#6a329f:"Download..............." \
$(rrdtool_graph_print_cur_min_max_avg cat_dl) \
STACK:cat_mail#3600cc:"Mail..................." \
$(rrdtool_graph_print_cur_min_max_avg cat_mail) \
STACK:cat_fs#c90076:"File-Sharing..........." \
$(rrdtool_graph_print_cur_min_max_avg cat_fs) \
STACK:cat_game#00ff26:"Game..................." \
$(rrdtool_graph_print_cur_min_max_avg cat_game) \
STACK:cat_mal#f44336:"Malware................" \
$(rrdtool_graph_print_cur_min_max_avg cat_mal) \
STACK:cat_med#ff8300:"Media.................." \
$(rrdtool_graph_print_cur_min_max_avg cat_med) \
STACK:cat_min#ff0000:"Mining................." \
$(rrdtool_graph_print_cur_min_max_avg cat_min) \
STACK:cat_mus#00fff0:"Music.................." \
$(rrdtool_graph_print_cur_min_max_avg cat_mus) \
STACK:cat_net#ddff00:"Network................" \
$(rrdtool_graph_print_cur_min_max_avg cat_net) \
STACK:cat_prod#29ff00:"Productivity..........." \
$(rrdtool_graph_print_cur_min_max_avg cat_prod) \
STACK:cat_rem#b52c2c:"Remote-Access.........." \
$(rrdtool_graph_print_cur_min_max_avg cat_rem) \
STACK:cat_rpc#e15a5a:"Remote-Procedure-Call.." \
$(rrdtool_graph_print_cur_min_max_avg cat_rpc) \
STACK:cat_shop#0065ff:"Shopping..............." \
$(rrdtool_graph_print_cur_min_max_avg cat_shop) \
STACK:cat_soc#8fce00:"Social-Network........." \
$(rrdtool_graph_print_cur_min_max_avg cat_soc) \
STACK:cat_soft#007a0d:"Software-Update........" \
$(rrdtool_graph_print_cur_min_max_avg cat_soft) \
STACK:cat_str#ff00b8:"Streaming.............." \
$(rrdtool_graph_print_cur_min_max_avg cat_str) \
STACK:cat_sys#f4ff00:"System................." \
$(rrdtool_graph_print_cur_min_max_avg cat_sys) \
STACK:cat_ukn#999999:"Unknown................" \
$(rrdtool_graph_print_cur_min_max_avg cat_ukn) \
STACK:cat_vid#518820:"Video.................." \
$(rrdtool_graph_print_cur_min_max_avg cat_vid) \
STACK:cat_voip#ffc700:"Voice-Over-IP.........." \
$(rrdtool_graph_print_cur_min_max_avg cat_voip) \
STACK:cat_vpn#378035:"Virtual-Private-Network" \
$(rrdtool_graph_print_cur_min_max_avg cat_vpn) \
STACK:cat_web#00fffb:"Web...................." \
$(rrdtool_graph_print_cur_min_max_avg cat_web)
rrdtool_graph JSON 'Lines' "${OUTDIR}/json_lines" \
DEF:json_lines=${RRDDIR}/gauge-json_lines.rrd:value:AVERAGE \
$(rrdtool_graph_colorize_missing_data json_lines) \
AREA:json_lines#4dff4d::STACK \
LINE2:json_lines#00e600:"JSON-lines" \
$(rrdtool_graph_print_cur_min_max_avg json_lines)
rrdtool_graph JSON 'Bytes' "${OUTDIR}/json_bytes" \
DEF:json_bytes=${RRDDIR}/gauge-json_bytes.rrd:value:AVERAGE \
$(rrdtool_graph_colorize_missing_data json_bytes) \
AREA:json_bytes#4dff4d::STACK \
LINE2:json_bytes#00e600:"JSON-bytes" \
$(rrdtool_graph_print_cur_min_max_avg json_bytes)
rrdtool_graph Events 'Amount' "${OUTDIR}/events" \
DEF:init=${RRDDIR}/gauge-init_count.rrd:value:AVERAGE \
DEF:reconnect=${RRDDIR}/gauge-reconnect_count.rrd:value:AVERAGE \
DEF:shutdown=${RRDDIR}/gauge-shutdown_count.rrd:value:AVERAGE \
DEF:status=${RRDDIR}/gauge-status_count.rrd:value:AVERAGE \
DEF:packet=${RRDDIR}/gauge-packet_count.rrd:value:AVERAGE \
DEF:packet_flow=${RRDDIR}/gauge-packet_flow_count.rrd:value:AVERAGE \
DEF:new=${RRDDIR}/gauge-flow_new_count.rrd:value:AVERAGE \
DEF:end=${RRDDIR}/gauge-flow_end_count.rrd:value:AVERAGE \
DEF:idle=${RRDDIR}/gauge-flow_idle_count.rrd:value:AVERAGE \
DEF:update=${RRDDIR}/gauge-flow_update_count.rrd:value:AVERAGE \
DEF:detection_update=${RRDDIR}/gauge-flow_detection_update_count.rrd:value:AVERAGE \
DEF:guessed=${RRDDIR}/gauge-flow_guessed_count.rrd:value:AVERAGE \
DEF:detected=${RRDDIR}/gauge-flow_detected_count.rrd:value:AVERAGE \
DEF:not_detected=${RRDDIR}/gauge-flow_not_detected_count.rrd:value:AVERAGE \
DEF:analyse=${RRDDIR}/gauge-flow_analyse_count.rrd:value:AVERAGE \
DEF:error=${RRDDIR}/gauge-error_count_sum.rrd:value:AVERAGE \
$(rrdtool_graph_colorize_missing_data init) \
AREA:init#f1c232:"Init..................." \
$(rrdtool_graph_print_cur_min_max_avg init) \
STACK:reconnect#63bad9:"Reconnect.............." \
$(rrdtool_graph_print_cur_min_max_avg reconnect) \
STACK:shutdown#3a6f82:"Shutdown..............." \
$(rrdtool_graph_print_cur_min_max_avg shutdown) \
STACK:status#b7cbd1:"Status................." \
$(rrdtool_graph_print_cur_min_max_avg status) \
STACK:packet#0aff3f:"Packet................." \
$(rrdtool_graph_print_cur_min_max_avg packet) \
STACK:packet_flow#00c72b:"Packet-Flow............" \
$(rrdtool_graph_print_cur_min_max_avg packet_flow) \
STACK:new#c76700:"New...................." \
$(rrdtool_graph_print_cur_min_max_avg new) \
STACK:end#c78500:"End...................." \
$(rrdtool_graph_print_cur_min_max_avg end) \
STACK:idle#c7a900:"Idle..................." \
$(rrdtool_graph_print_cur_min_max_avg idle) \
STACK:update#c7c400:"Updates................" \
$(rrdtool_graph_print_cur_min_max_avg update) \
STACK:detection_update#a2c700:"Detection-Updates......" \
$(rrdtool_graph_print_cur_min_max_avg detection_update) \
STACK:guessed#7bc700:"Guessed................" \
$(rrdtool_graph_print_cur_min_max_avg guessed) \
STACK:detected#00c781:"Detected..............." \
$(rrdtool_graph_print_cur_min_max_avg detected) \
STACK:not_detected#00bdc7:"Not-Detected..........." \
$(rrdtool_graph_print_cur_min_max_avg not_detected) \
STACK:analyse#1400c7:"Analyse................" \
$(rrdtool_graph_print_cur_min_max_avg analyse) \
STACK:error#c70000:"Error.................." \
$(rrdtool_graph_print_cur_min_max_avg error)
rrdtool_graph Error-Events 'Amount' "${OUTDIR}/error_events" \
DEF:error_0=${RRDDIR}/gauge-error_0_count.rrd:value:AVERAGE \
DEF:error_1=${RRDDIR}/gauge-error_1_count.rrd:value:AVERAGE \
DEF:error_2=${RRDDIR}/gauge-error_2_count.rrd:value:AVERAGE \
DEF:error_3=${RRDDIR}/gauge-error_3_count.rrd:value:AVERAGE \
DEF:error_4=${RRDDIR}/gauge-error_4_count.rrd:value:AVERAGE \
DEF:error_5=${RRDDIR}/gauge-error_5_count.rrd:value:AVERAGE \
DEF:error_6=${RRDDIR}/gauge-error_6_count.rrd:value:AVERAGE \
DEF:error_7=${RRDDIR}/gauge-error_7_count.rrd:value:AVERAGE \
DEF:error_8=${RRDDIR}/gauge-error_8_count.rrd:value:AVERAGE \
DEF:error_9=${RRDDIR}/gauge-error_9_count.rrd:value:AVERAGE \
DEF:error_10=${RRDDIR}/gauge-error_10_count.rrd:value:AVERAGE \
DEF:error_11=${RRDDIR}/gauge-error_11_count.rrd:value:AVERAGE \
DEF:error_12=${RRDDIR}/gauge-error_12_count.rrd:value:AVERAGE \
DEF:error_13=${RRDDIR}/gauge-error_13_count.rrd:value:AVERAGE \
DEF:error_14=${RRDDIR}/gauge-error_14_count.rrd:value:AVERAGE \
DEF:error_15=${RRDDIR}/gauge-error_15_count.rrd:value:AVERAGE \
DEF:error_16=${RRDDIR}/gauge-error_16_count.rrd:value:AVERAGE \
DEF:error_unknown=${RRDDIR}/gauge-error_unknown_count.rrd:value:AVERAGE \
$(rrdtool_graph_colorize_missing_data error_0) \
AREA:error_0#ff6a00:"Unknown-datalink-layer-packet............................" \
$(rrdtool_graph_print_cur_min_max_avg error_0) \
STACK:error_1#bf7540:"Unknown-L3-protocol......................................" \
$(rrdtool_graph_print_cur_min_max_avg error_1) \
STACK:error_2#ffd500:"Unsupported-datalink-layer..............................." \
$(rrdtool_graph_print_cur_min_max_avg error_2) \
STACK:error_3#bfaa40:"Packet-too-short........................................." \
$(rrdtool_graph_print_cur_min_max_avg error_3) \
STACK:error_4#bfff00:"Unknown-packet-type......................................" \
$(rrdtool_graph_print_cur_min_max_avg error_4) \
STACK:error_5#9fbf40:"Packet-header-invalid...................................." \
$(rrdtool_graph_print_cur_min_max_avg error_5) \
STACK:error_6#55ff00:"IP4-packet-too-short....................................." \
$(rrdtool_graph_print_cur_min_max_avg error_6) \
STACK:error_7#6abf40:"Packet-smaller-than-IP4-header..........................." \
$(rrdtool_graph_print_cur_min_max_avg error_7) \
STACK:error_8#00ff15:"nDPI-IPv4/L4-payload-detection-failed...................." \
$(rrdtool_graph_print_cur_min_max_avg error_8) \
STACK:error_9#40bf4a:"IP6-packet-too-short....................................." \
$(rrdtool_graph_print_cur_min_max_avg error_9) \
STACK:error_10#00ff80:"Packet-smaller-than-IP6-header..........................." \
$(rrdtool_graph_print_cur_min_max_avg error_10) \
STACK:error_11#40bf80:"nDPI-IPv6/L4-payload-detection-failed...................." \
$(rrdtool_graph_print_cur_min_max_avg error_11) \
STACK:error_12#00ffea:"TCP-packet-smaller-than-expected........................." \
$(rrdtool_graph_print_cur_min_max_avg error_12) \
STACK:error_13#40bfb5:"UDP-packet-smaller-than-expected........................." \
$(rrdtool_graph_print_cur_min_max_avg error_13) \
STACK:error_14#00aaff:"Captured-packet-size-is-smaller-than-expected-packet-size" \
$(rrdtool_graph_print_cur_min_max_avg error_14) \
STACK:error_15#4095bf:"Max-flows-to-track-reached..............................." \
$(rrdtool_graph_print_cur_min_max_avg error_15) \
STACK:error_16#0040ff:"Flow-memory-allocation-failed............................" \
$(rrdtool_graph_print_cur_min_max_avg error_16) \
STACK:error_unknown#4060bf:"Unknown-error............................................" \
$(rrdtool_graph_print_cur_min_max_avg error_unknown)
rrdtool_graph Risky-Events 'Amount' "${OUTDIR}/risky_events" \
DEF:risk_1=${RRDDIR}/gauge-flow_risk_1_count.rrd:value:AVERAGE \
DEF:risk_2=${RRDDIR}/gauge-flow_risk_2_count.rrd:value:AVERAGE \
DEF:risk_3=${RRDDIR}/gauge-flow_risk_3_count.rrd:value:AVERAGE \
DEF:risk_4=${RRDDIR}/gauge-flow_risk_4_count.rrd:value:AVERAGE \
DEF:risk_5=${RRDDIR}/gauge-flow_risk_5_count.rrd:value:AVERAGE \
DEF:risk_6=${RRDDIR}/gauge-flow_risk_6_count.rrd:value:AVERAGE \
DEF:risk_7=${RRDDIR}/gauge-flow_risk_7_count.rrd:value:AVERAGE \
DEF:risk_8=${RRDDIR}/gauge-flow_risk_8_count.rrd:value:AVERAGE \
DEF:risk_9=${RRDDIR}/gauge-flow_risk_9_count.rrd:value:AVERAGE \
DEF:risk_10=${RRDDIR}/gauge-flow_risk_10_count.rrd:value:AVERAGE \
DEF:risk_11=${RRDDIR}/gauge-flow_risk_11_count.rrd:value:AVERAGE \
DEF:risk_12=${RRDDIR}/gauge-flow_risk_12_count.rrd:value:AVERAGE \
DEF:risk_13=${RRDDIR}/gauge-flow_risk_13_count.rrd:value:AVERAGE \
DEF:risk_14=${RRDDIR}/gauge-flow_risk_14_count.rrd:value:AVERAGE \
DEF:risk_15=${RRDDIR}/gauge-flow_risk_15_count.rrd:value:AVERAGE \
DEF:risk_16=${RRDDIR}/gauge-flow_risk_16_count.rrd:value:AVERAGE \
DEF:risk_17=${RRDDIR}/gauge-flow_risk_17_count.rrd:value:AVERAGE \
DEF:risk_18=${RRDDIR}/gauge-flow_risk_18_count.rrd:value:AVERAGE \
DEF:risk_19=${RRDDIR}/gauge-flow_risk_19_count.rrd:value:AVERAGE \
DEF:risk_20=${RRDDIR}/gauge-flow_risk_20_count.rrd:value:AVERAGE \
DEF:risk_21=${RRDDIR}/gauge-flow_risk_21_count.rrd:value:AVERAGE \
DEF:risk_22=${RRDDIR}/gauge-flow_risk_22_count.rrd:value:AVERAGE \
DEF:risk_23=${RRDDIR}/gauge-flow_risk_23_count.rrd:value:AVERAGE \
DEF:risk_24=${RRDDIR}/gauge-flow_risk_24_count.rrd:value:AVERAGE \
DEF:risk_25=${RRDDIR}/gauge-flow_risk_25_count.rrd:value:AVERAGE \
DEF:risk_26=${RRDDIR}/gauge-flow_risk_26_count.rrd:value:AVERAGE \
DEF:risk_27=${RRDDIR}/gauge-flow_risk_27_count.rrd:value:AVERAGE \
DEF:risk_28=${RRDDIR}/gauge-flow_risk_28_count.rrd:value:AVERAGE \
DEF:risk_29=${RRDDIR}/gauge-flow_risk_29_count.rrd:value:AVERAGE \
DEF:risk_30=${RRDDIR}/gauge-flow_risk_30_count.rrd:value:AVERAGE \
DEF:risk_31=${RRDDIR}/gauge-flow_risk_31_count.rrd:value:AVERAGE \
DEF:risk_32=${RRDDIR}/gauge-flow_risk_32_count.rrd:value:AVERAGE \
DEF:risk_33=${RRDDIR}/gauge-flow_risk_33_count.rrd:value:AVERAGE \
DEF:risk_34=${RRDDIR}/gauge-flow_risk_34_count.rrd:value:AVERAGE \
DEF:risk_35=${RRDDIR}/gauge-flow_risk_35_count.rrd:value:AVERAGE \
DEF:risk_36=${RRDDIR}/gauge-flow_risk_36_count.rrd:value:AVERAGE \
DEF:risk_37=${RRDDIR}/gauge-flow_risk_37_count.rrd:value:AVERAGE \
DEF:risk_38=${RRDDIR}/gauge-flow_risk_38_count.rrd:value:AVERAGE \
DEF:risk_39=${RRDDIR}/gauge-flow_risk_39_count.rrd:value:AVERAGE \
DEF:risk_40=${RRDDIR}/gauge-flow_risk_40_count.rrd:value:AVERAGE \
DEF:risk_41=${RRDDIR}/gauge-flow_risk_41_count.rrd:value:AVERAGE \
DEF:risk_42=${RRDDIR}/gauge-flow_risk_42_count.rrd:value:AVERAGE \
DEF:risk_43=${RRDDIR}/gauge-flow_risk_43_count.rrd:value:AVERAGE \
DEF:risk_44=${RRDDIR}/gauge-flow_risk_44_count.rrd:value:AVERAGE \
DEF:risk_45=${RRDDIR}/gauge-flow_risk_45_count.rrd:value:AVERAGE \
DEF:risk_46=${RRDDIR}/gauge-flow_risk_46_count.rrd:value:AVERAGE \
DEF:risk_47=${RRDDIR}/gauge-flow_risk_47_count.rrd:value:AVERAGE \
DEF:risk_48=${RRDDIR}/gauge-flow_risk_48_count.rrd:value:AVERAGE \
DEF:risk_49=${RRDDIR}/gauge-flow_risk_49_count.rrd:value:AVERAGE \
DEF:risk_unknown=${RRDDIR}/gauge-flow_risk_unknown_count.rrd:value:AVERAGE \
$(rrdtool_graph_colorize_missing_data risk_1) \
AREA:risk_1#ff0000:"XSS-Attack..............................................." \
$(rrdtool_graph_print_cur_min_max_avg risk_1) \
STACK:risk_2#ff5500:"SQL-Injection............................................" \
$(rrdtool_graph_print_cur_min_max_avg risk_2) \
STACK:risk_3#ffaa00:"RCE-Injection............................................" \
$(rrdtool_graph_print_cur_min_max_avg risk_3) \
STACK:risk_4#ffff00:"Binary-App-Transfer......................................" \
$(rrdtool_graph_print_cur_min_max_avg risk_4) \
STACK:risk_5#aaff00:"Known-Proto-on-Non-Std-Port.............................." \
$(rrdtool_graph_print_cur_min_max_avg risk_5) \
STACK:risk_6#55ff00:"Self-signed-Cert........................................." \
$(rrdtool_graph_print_cur_min_max_avg risk_6) \
STACK:risk_7#00ff55:"Obsolete-TLS-v1.1-or-older..............................." \
$(rrdtool_graph_print_cur_min_max_avg risk_7) \
STACK:risk_8#00ffaa:"Weak-TLS-Cipher.........................................." \
$(rrdtool_graph_print_cur_min_max_avg risk_8) \
STACK:risk_9#00ffff:"TLS-Cert-Expired........................................." \
$(rrdtool_graph_print_cur_min_max_avg risk_9) \
STACK:risk_10#00aaff:"TLS-Cert-Mismatch........................................" \
$(rrdtool_graph_print_cur_min_max_avg risk_10) \
STACK:risk_11#0055ff:"HTTP-Suspicious-User-Agent..............................." \
$(rrdtool_graph_print_cur_min_max_avg risk_11) \
STACK:risk_12#0000ff:"HTTP-Numeric-IP-Address.................................." \
$(rrdtool_graph_print_cur_min_max_avg risk_12) \
STACK:risk_13#5500ff:"HTTP-Suspicious-URL......................................" \
$(rrdtool_graph_print_cur_min_max_avg risk_13) \
STACK:risk_14#aa00ff:"HTTP-Suspicious-Header..................................." \
$(rrdtool_graph_print_cur_min_max_avg risk_14) \
STACK:risk_15#ff00ff:"TLS-probably-Not-Carrying-HTTPS.........................." \
$(rrdtool_graph_print_cur_min_max_avg risk_15) \
STACK:risk_16#ff00aa:"Suspicious-DGA-Domain-name..............................." \
$(rrdtool_graph_print_cur_min_max_avg risk_16) \
STACK:risk_17#ff0055:"Malformed-Packet........................................." \
$(rrdtool_graph_print_cur_min_max_avg risk_17) \
STACK:risk_18#602020:"SSH-Obsolete-Client-Version/Cipher......................." \
$(rrdtool_graph_print_cur_min_max_avg risk_18) \
STACK:risk_19#603a20:"SSH-Obsolete-Server-Version/Cipher......................." \
$(rrdtool_graph_print_cur_min_max_avg risk_19) \
STACK:risk_20#605520:"SMB-Insecure-Version....................................." \
$(rrdtool_graph_print_cur_min_max_avg risk_20) \
STACK:risk_21#506020:"TLS-Suspicious-ESNI-Usage................................" \
$(rrdtool_graph_print_cur_min_max_avg risk_21) \
STACK:risk_22#356020:"Unsafe-Protocol.........................................." \
$(rrdtool_graph_print_cur_min_max_avg risk_22) \
STACK:risk_23#206025:"Suspicious-DNS-Traffic..................................." \
$(rrdtool_graph_print_cur_min_max_avg risk_23) \
STACK:risk_24#206040:"Missing-SNI-TLS-Extension................................" \
$(rrdtool_graph_print_cur_min_max_avg risk_24) \
STACK:risk_25#20605a:"HTTP-Suspicious-Content.................................." \
$(rrdtool_graph_print_cur_min_max_avg risk_25) \
STACK:risk_26#204a60:"Risky-ASN................................................" \
$(rrdtool_graph_print_cur_min_max_avg risk_26) \
STACK:risk_27#203060:"Risky-Domain-Name........................................" \
$(rrdtool_graph_print_cur_min_max_avg risk_27) \
STACK:risk_28#2a2060:"Malicious-JA3-Fingerprint................................" \
$(rrdtool_graph_print_cur_min_max_avg risk_28) \
STACK:risk_29#452060:"Malicious-SSL-Cert/SHA1-Fingerprint......................" \
$(rrdtool_graph_print_cur_min_max_avg risk_29) \
STACK:risk_30#602060:"Desktop/File-Sharing....................................." \
$(rrdtool_graph_print_cur_min_max_avg risk_30) \
STACK:risk_31#602045:"Uncommon-TLS-ALPN........................................" \
$(rrdtool_graph_print_cur_min_max_avg risk_31) \
STACK:risk_32#df2020:"TLS-Cert-Validity-Too-Long..............................." \
$(rrdtool_graph_print_cur_min_max_avg risk_32) \
STACK:risk_33#df6020:"TLS-Suspicious-Extension................................." \
$(rrdtool_graph_print_cur_min_max_avg risk_33) \
STACK:risk_34#df9f20:"TLS-Fatal-Alert.........................................." \
$(rrdtool_graph_print_cur_min_max_avg risk_34) \
STACK:risk_35#dfdf20:"Suspicious-Entropy......................................." \
$(rrdtool_graph_print_cur_min_max_avg risk_35) \
STACK:risk_36#9fdf20:"Clear-Text-Credentials..................................." \
$(rrdtool_graph_print_cur_min_max_avg risk_36) \
STACK:risk_37#60df20:"Large-DNS-Packet........................................." \
$(rrdtool_graph_print_cur_min_max_avg risk_37) \
STACK:risk_38#20df20:"Fragmented-DNS-Message..................................." \
$(rrdtool_graph_print_cur_min_max_avg risk_38) \
STACK:risk_39#20df60:"Text-With-Non-Printable-Chars............................" \
$(rrdtool_graph_print_cur_min_max_avg risk_39) \
STACK:risk_40#20df9f:"Possible-Exploit........................................." \
$(rrdtool_graph_print_cur_min_max_avg risk_40) \
STACK:risk_41#20dfdf:"TLS-Cert-About-To-Expire................................." \
$(rrdtool_graph_print_cur_min_max_avg risk_41) \
STACK:risk_42#209fdf:"IDN-Domain-Name.........................................." \
$(rrdtool_graph_print_cur_min_max_avg risk_42) \
STACK:risk_43#2060df:"Error-Code..............................................." \
$(rrdtool_graph_print_cur_min_max_avg risk_43) \
STACK:risk_44#2020df:"Crawler/Bot.............................................." \
$(rrdtool_graph_print_cur_min_max_avg risk_44) \
STACK:risk_45#6020df:"Anonymous-Subscriber....................................." \
$(rrdtool_graph_print_cur_min_max_avg risk_45) \
STACK:risk_46#9f20df:"Unidirectional-Traffic..................................." \
$(rrdtool_graph_print_cur_min_max_avg risk_46) \
STACK:risk_47#df20df:"HTTP-Obsolete-Server....................................." \
$(rrdtool_graph_print_cur_min_max_avg risk_47) \
STACK:risk_48#df68df:"Periodic-Flow............................................" \
$(rrdtool_graph_print_cur_min_max_avg risk_48) \
STACK:risk_49#dfffdf:"Minor-Issues............................................." \
$(rrdtool_graph_print_cur_min_max_avg risk_49) \
STACK:risk_unknown#df2060:"Unknown.................................................." \
$(rrdtool_graph_print_cur_min_max_avg risk_unknown)
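The 50 hand-written `DEF:` arguments above all follow a single naming pattern, so they could also be generated with a small loop. A minimal sketch, assuming the same `gauge-flow_risk_<N>_count.rrd` convention and a `RRDDIR` variable as used above (`build_risk_defs` is a hypothetical helper name, not part of the original script):

```shell
# Build the 49 numbered DEF arguments plus the "unknown" one in a loop,
# instead of writing one line per RRD file. Each DEF string contains no
# whitespace, so the unquoted expansion splits cleanly into arguments.
build_risk_defs() {
    local defs='' i
    for i in $(seq 1 49); do
        defs="${defs} DEF:risk_${i}=${RRDDIR}/gauge-flow_risk_${i}_count.rrd:value:AVERAGE"
    done
    defs="${defs} DEF:risk_unknown=${RRDDIR}/gauge-flow_risk_unknown_count.rrd:value:AVERAGE"
    printf '%s' "${defs}"
}
```

The result could then be passed unquoted, e.g. `rrdtool_graph Risky-Events 'Amount' "${OUTDIR}/risky_events" $(build_risk_defs) ...`; the `STACK:`/color/label arguments would still need an explicit table, since labels and colors do not follow a derivable pattern.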



@@ -0,0 +1,186 @@
<!DOCTYPE html>
<html lang="en"><head>
<meta http-equiv="cache-control" content="max-age=0" />
<meta http-equiv="cache-control" content="no-cache" />
<meta http-equiv="expires" content="0" />
<meta http-equiv="expires" content="Tue, 01 Jan 1980 01:00:00 GMT" />
<meta http-equiv="pragma" content="no-cache" />
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<meta name="description" content="nDPId RRD Graph">
<meta name="author" content="Toni Uhlig">
<link rel="icon" href="https://getbootstrap.com/docs/4.0/assets/img/favicons/favicon.ico">
<title>nDPId Dashboard</title>
<link rel="canonical" href="https://getbootstrap.com/docs/4.0/examples/dashboard/">
<!-- Bootstrap core CSS -->
<link href="bootstrap.css" rel="stylesheet">
<!-- Custom styles for this template -->
<link href="dashboard.css" rel="stylesheet">
</head>
<body>
<nav class="navbar navbar-dark sticky-top bg-dark flex-md-nowrap p-0">
<a class="navbar-brand col-sm-3 col-md-2 mr-0" href="https://github.com/utoni/nDPId">nDPId Collectd RRD Graph</a>
</nav>
<div class="container-fluid">
<div class="row">
<nav class="col-md-2 d-none d-md-block bg-light sidebar">
<div class="sidebar-sticky">
<h6 class="sidebar-heading d-flex justify-content-between align-items-center px-3 mt-4 mb-1 text-muted">
<span>Graphs</span>
</h6>
<ul class="nav flex-column mb-2">
<li class="nav-item">
<a class="nav-link" href="index.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Home
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="flows.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line><polyline points="10 9 9 9 8 9"></polyline>
</svg>
Flows
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="other.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Other
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="detections.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Detections
</a>
</li>
<li class="nav-item">
<a class="nav-link active" href="categories.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Categories
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="jsons.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
JSONs
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="events.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Events
</a>
</li>
</ul>
</div>
</nav>
<main role="main" class="col-md-9 ml-sm-auto col-lg-10 pt-3 px-4">
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="breed_past_hour.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="breed_past_12hours.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="breed_past_day.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="breed_past_week.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="breed_past_month.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="breed_past_year.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="categories_past_hour.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="categories_past_12hours.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="categories_past_day.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="categories_past_week.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="categories_past_month.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="categories_past_year.png" class="img-fluid" alt="Responsive image">
</div>
</main>
</div>
</div>
<!-- Bootstrap core JavaScript
================================================== -->
<!-- Placed at the end of the document so the pages load faster -->
<script src="jquery-3.js" integrity="sha384-KJ3o2DKtIkvYIK3UENzmM7KCkRr/rE9/Qpg6aAZGJwFDMVNA/GpGFF93hXpG5KkN" crossorigin="anonymous"></script>
<script>window.jQuery || document.write('<script src="../../assets/js/vendor/jquery-slim.min.js"><\/script>')</script>
<script src="popper.js"></script>
<script src="bootstrap.js"></script>
<!-- Icons -->
<script src="feather.js"></script>
<script>
feather.replace()
</script>
</body></html>


@@ -0,0 +1,93 @@
body {
font-size: .875rem;
}
.feather {
width: 16px;
height: 16px;
vertical-align: text-bottom;
}
/*
* Sidebar
*/
.sidebar {
position: fixed;
top: 0;
bottom: 0;
left: 0;
z-index: 100; /* Behind the navbar */
padding: 0;
box-shadow: inset -1px 0 0 rgba(0, 0, 0, .1);
}
.sidebar-sticky {
position: -webkit-sticky;
position: sticky;
top: 48px; /* Height of navbar */
height: calc(100vh - 48px);
padding-top: .5rem;
overflow-x: hidden;
overflow-y: auto; /* Scrollable contents if viewport is shorter than content. */
}
.sidebar .nav-link {
font-weight: 500;
color: #333;
}
.sidebar .nav-link .feather {
margin-right: 4px;
color: #999;
}
.sidebar .nav-link.active {
color: #007bff;
}
.sidebar .nav-link:hover .feather,
.sidebar .nav-link.active .feather {
color: inherit;
}
.sidebar-heading {
font-size: .75rem;
text-transform: uppercase;
}
/*
* Navbar
*/
.navbar-brand {
padding-top: .75rem;
padding-bottom: .75rem;
font-size: 1rem;
background-color: rgba(0, 0, 0, .25);
box-shadow: inset -1px 0 0 rgba(0, 0, 0, .25);
}
.navbar .form-control {
padding: .75rem 1rem;
border-width: 0;
border-radius: 0;
}
.form-control-dark {
color: #fff;
background-color: rgba(255, 255, 255, .1);
border-color: rgba(255, 255, 255, .1);
}
.form-control-dark:focus {
border-color: transparent;
box-shadow: 0 0 0 3px rgba(255, 255, 255, .25);
}
/*
* Utilities
*/
.border-top { border-top: 1px solid #e5e5e5; }
.border-bottom { border-bottom: 1px solid #e5e5e5; }


@@ -0,0 +1,167 @@
<!DOCTYPE html>
<html lang="en"><head>
<meta http-equiv="cache-control" content="max-age=0" />
<meta http-equiv="cache-control" content="no-cache" />
<meta http-equiv="expires" content="0" />
<meta http-equiv="expires" content="Tue, 01 Jan 1980 01:00:00 GMT" />
<meta http-equiv="pragma" content="no-cache" />
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<meta name="description" content="nDPId RRD Graph">
<meta name="author" content="Toni Uhlig">
<link rel="icon" href="https://getbootstrap.com/docs/4.0/assets/img/favicons/favicon.ico">
<title>nDPId Dashboard</title>
<link rel="canonical" href="https://getbootstrap.com/docs/4.0/examples/dashboard/">
<!-- Bootstrap core CSS -->
<link href="bootstrap.css" rel="stylesheet">
<!-- Custom styles for this template -->
<link href="dashboard.css" rel="stylesheet">
</head>
<body>
<nav class="navbar navbar-dark sticky-top bg-dark flex-md-nowrap p-0">
<a class="navbar-brand col-sm-3 col-md-2 mr-0" href="https://github.com/utoni/nDPId">nDPId Collectd RRD Graph</a>
</nav>
<div class="container-fluid">
<div class="row">
<nav class="col-md-2 d-none d-md-block bg-light sidebar">
<div class="sidebar-sticky">
<h6 class="sidebar-heading d-flex justify-content-between align-items-center px-3 mt-4 mb-1 text-muted">
<span>Graphs</span>
</h6>
<ul class="nav flex-column mb-2">
<li class="nav-item">
<a class="nav-link" href="index.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Home
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="flows.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line><polyline points="10 9 9 9 8 9"></polyline>
</svg>
Flows
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="other.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Other
</a>
</li>
<li class="nav-item">
<a class="nav-link active" href="detections.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Detections
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="categories.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Categories
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="jsons.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
JSONs
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="events.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Events
</a>
</li>
</ul>
</div>
</nav>
<main role="main" class="col-md-9 ml-sm-auto col-lg-10 pt-3 px-4">
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="detections_past_hour.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="detections_past_12hours.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="detections_past_day.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="detections_past_week.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="detections_past_month.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="detections_past_year.png" class="img-fluid" alt="Responsive image">
</div>
</main>
</div>
</div>
<!-- Bootstrap core JavaScript
================================================== -->
<!-- Placed at the end of the document so the pages load faster -->
<script src="jquery-3.js" integrity="sha384-KJ3o2DKtIkvYIK3UENzmM7KCkRr/rE9/Qpg6aAZGJwFDMVNA/GpGFF93hXpG5KkN" crossorigin="anonymous"></script>
<script>window.jQuery || document.write('<script src="../../assets/js/vendor/jquery-slim.min.js"><\/script>')</script>
<script src="popper.js"></script>
<script src="bootstrap.js"></script>
<!-- Icons -->
<script src="feather.js"></script>
<script>
feather.replace()
</script>
</body></html>


@@ -0,0 +1,205 @@
<!DOCTYPE html>
<html lang="en"><head>
<meta http-equiv="cache-control" content="max-age=0" />
<meta http-equiv="cache-control" content="no-cache" />
<meta http-equiv="expires" content="0" />
<meta http-equiv="expires" content="Tue, 01 Jan 1980 1:00:00 GMT" />
<meta http-equiv="pragma" content="no-cache" />
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<meta name="description" content="nDPId RRD Graph">
<meta name="author" content="Toni Uhlig">
<link rel="icon" href="https://getbootstrap.com/docs/4.0/assets/img/favicons/favicon.ico">
<title>nDPId Dashboard</title>
<link rel="canonical" href="https://getbootstrap.com/docs/4.0/examples/dashboard/">
<!-- Bootstrap core CSS -->
<link href="bootstrap.css" rel="stylesheet">
<!-- Custom styles for this template -->
<link href="dashboard.css" rel="stylesheet">
</head>
<body>
<nav class="navbar navbar-dark sticky-top bg-dark flex-md-nowrap p-0">
<a class="navbar-brand col-sm-3 col-md-2 mr-0" href="https://github.com/utoni/nDPId">nDPId Collectd RRD Graph</a>
</nav>
<div class="container-fluid">
<div class="row">
<nav class="col-md-2 d-none d-md-block bg-light sidebar">
<div class="sidebar-sticky">
<h6 class="sidebar-heading d-flex justify-content-between align-items-center px-3 mt-4 mb-1 text-muted">
<span>Graphs</span>
</h6>
<ul class="nav flex-column mb-2">
<li class="nav-item">
<a class="nav-link" href="index.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Home
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="flows.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line><polyline points="10 9 9 9 8 9"></polyline>
</svg>
Flows
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="other.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Other
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="detections.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Detections
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="categories.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Categories
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="jsons.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
JSONs
</a>
</li>
<li class="nav-item">
<a class="nav-link active" href="events.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Events
</a>
</li>
</ul>
</div>
</nav>
<main role="main" class="col-md-9 ml-sm-auto col-lg-10 pt-3 px-4">
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="events_past_hour.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="events_past_12hours.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="events_past_day.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="events_past_week.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="events_past_month.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="events_past_year.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="error_events_past_hour.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="error_events_past_12hours.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="error_events_past_day.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="error_events_past_week.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="error_events_past_month.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="error_events_past_year.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="risky_events_past_hour.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="risky_events_past_12hours.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="risky_events_past_day.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="risky_events_past_week.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="risky_events_past_month.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="risky_events_past_year.png" class="img-fluid" alt="Responsive image">
</div>
</main>
</div>
</div>
<!-- Bootstrap core JavaScript
================================================== -->
<!-- Placed at the end of the document so the pages load faster -->
<script src="jquery-3.js" integrity="sha384-KJ3o2DKtIkvYIK3UENzmM7KCkRr/rE9/Qpg6aAZGJwFDMVNA/GpGFF93hXpG5KkN" crossorigin="anonymous"></script>
<script>window.jQuery || document.write('<script src="../../assets/js/vendor/jquery-slim.min.js"><\/script>')</script>
<script src="popper.js"></script>
<script src="bootstrap.js"></script>
<!-- Icons -->
<script src="feather.js"></script>
<script>
feather.replace()
</script>
</body></html>



@@ -0,0 +1,167 @@
<!DOCTYPE html>
<html lang="en"><head>
<meta http-equiv="cache-control" content="max-age=0" />
<meta http-equiv="cache-control" content="no-cache" />
<meta http-equiv="expires" content="0" />
<meta http-equiv="expires" content="Tue, 01 Jan 1980 01:00:00 GMT" />
<meta http-equiv="pragma" content="no-cache" />
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<meta name="description" content="nDPId RRD Graph">
<meta name="author" content="Toni Uhlig">
<link rel="icon" href="https://getbootstrap.com/docs/4.0/assets/img/favicons/favicon.ico">
<title>nDPId Dashboard</title>
<link rel="canonical" href="https://getbootstrap.com/docs/4.0/examples/dashboard/">
<!-- Bootstrap core CSS -->
<link href="bootstrap.css" rel="stylesheet">
<!-- Custom styles for this template -->
<link href="dashboard.css" rel="stylesheet">
</head>
<body>
<nav class="navbar navbar-dark sticky-top bg-dark flex-md-nowrap p-0">
<a class="navbar-brand col-sm-3 col-md-2 mr-0" href="https://github.com/utoni/nDPId">nDPId Collectd RRD Graph</a>
</nav>
<div class="container-fluid">
<div class="row">
<nav class="col-md-2 d-none d-md-block bg-light sidebar">
<div class="sidebar-sticky">
<h6 class="sidebar-heading d-flex justify-content-between align-items-center px-3 mt-4 mb-1 text-muted">
<span>Graphs</span>
</h6>
<ul class="nav flex-column mb-2">
<li class="nav-item">
<a class="nav-link" href="index.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Home
</a>
</li>
<li class="nav-item">
<a class="nav-link active" href="flows.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Flows
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="other.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Other
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="detections.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Detections
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="categories.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Categories
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="jsons.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
JSONs
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="events.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Events
</a>
</li>
</ul>
</div>
</nav>
<main role="main" class="col-md-9 ml-sm-auto col-lg-10 pt-3 px-4">
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="flows_past_hour.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="flows_past_12hours.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="flows_past_day.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="flows_past_week.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="flows_past_month.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="flows_past_year.png" class="img-fluid" alt="Responsive image">
</div>
</main>
</div>
</div>
<!-- Bootstrap core JavaScript
================================================== -->
<!-- Placed at the end of the document so the pages load faster -->
<script src="jquery-3.js" integrity="sha384-KJ3o2DKtIkvYIK3UENzmM7KCkRr/rE9/Qpg6aAZGJwFDMVNA/GpGFF93hXpG5KkN" crossorigin="anonymous"></script>
<script>window.jQuery || document.write('<script src="../../assets/js/vendor/jquery-slim.min.js"><\/script>')</script>
<script src="popper.js"></script>
<script src="bootstrap.js"></script>
<!-- Icons -->
<script src="feather.js"></script>
<script>
feather.replace()
</script>
</body></html>


@@ -0,0 +1,375 @@
<!DOCTYPE html>
<html lang="en"><head>
<meta http-equiv="cache-control" content="max-age=0" />
<meta http-equiv="cache-control" content="no-cache" />
<meta http-equiv="expires" content="0" />
<meta http-equiv="expires" content="Tue, 01 Jan 1980 01:00:00 GMT" />
<meta http-equiv="pragma" content="no-cache" />
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<meta name="description" content="nDPId RRD Graph">
<meta name="author" content="Toni Uhlig">
<link rel="icon" href="https://getbootstrap.com/docs/4.0/assets/img/favicons/favicon.ico">
<title>nDPId Dashboard</title>
<link rel="canonical" href="https://getbootstrap.com/docs/4.0/examples/dashboard/">
<!-- Bootstrap core CSS -->
<link href="bootstrap.css" rel="stylesheet">
<!-- Custom styles for this template -->
<link href="dashboard.css" rel="stylesheet">
</head>
<body>
<nav class="navbar navbar-dark sticky-top bg-dark flex-md-nowrap p-0">
<a class="navbar-brand col-sm-3 col-md-2 mr-0" href="https://github.com/utoni/nDPId">nDPId Collectd RRD Graph</a>
</nav>
<div class="container-fluid">
<div class="row">
<nav class="col-md-2 d-none d-md-block bg-light sidebar">
<div class="sidebar-sticky">
<h6 class="sidebar-heading d-flex justify-content-between align-items-center px-3 mt-4 mb-1 text-muted">
<span>Graphs</span>
</h6>
<ul class="nav flex-column mb-2">
<li class="nav-item">
<a class="nav-link active" href="index.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Home
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="flows.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Flows
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="other.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Other
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="detections.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Detections
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="categories.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Categories
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="jsons.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
JSONs
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="events.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Events
</a>
</li>
</ul>
</div>
</nav>
<main role="main" class="col-md-9 ml-sm-auto col-lg-10 pt-3 px-4">
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="flows_past_hour.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="flows_past_12hours.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="flows_past_day.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="flows_past_week.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="flows_past_month.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="flows_past_year.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="detections_past_hour.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="detections_past_12hours.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="detections_past_day.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="detections_past_week.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="detections_past_month.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="detections_past_year.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="traffic_past_hour.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="traffic_past_12hours.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="traffic_past_day.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="traffic_past_week.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="traffic_past_month.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="traffic_past_year.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="layer3_past_hour.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="layer3_past_12hours.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="layer3_past_day.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="layer3_past_week.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="layer3_past_month.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="layer3_past_year.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="layer4_past_hour.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="layer4_past_12hours.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="layer4_past_day.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="layer4_past_week.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="layer4_past_month.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="layer4_past_year.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="breed_past_hour.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="breed_past_12hours.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="breed_past_day.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="breed_past_week.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="breed_past_month.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="breed_past_year.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="categories_past_hour.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="categories_past_12hours.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="categories_past_day.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="categories_past_week.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="categories_past_month.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="categories_past_year.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="events_past_hour.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="events_past_12hours.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="events_past_day.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="events_past_week.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="events_past_month.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="events_past_year.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="error_events_past_hour.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="error_events_past_12hours.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="error_events_past_day.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="error_events_past_week.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="error_events_past_month.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="error_events_past_year.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="json_lines_past_hour.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="json_lines_past_12hours.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="json_lines_past_day.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="json_lines_past_week.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="json_lines_past_month.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="json_lines_past_year.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="json_bytes_past_hour.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="json_bytes_past_12hours.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="json_bytes_past_day.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="json_bytes_past_week.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="json_bytes_past_month.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="json_bytes_past_year.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="risky_events_past_hour.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="risky_events_past_12hours.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="risky_events_past_day.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="risky_events_past_week.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="risky_events_past_month.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="risky_events_past_year.png" class="img-fluid" alt="Responsive image">
</div>
</main>
</div>
</div>
<!-- Bootstrap core JavaScript
================================================== -->
<!-- Placed at the end of the document so the pages load faster -->
<script src="jquery-3.js" integrity="sha384-KJ3o2DKtIkvYIK3UENzmM7KCkRr/rE9/Qpg6aAZGJwFDMVNA/GpGFF93hXpG5KkN" crossorigin="anonymous"></script>
<script>window.jQuery || document.write('<script src="../../assets/js/vendor/jquery-slim.min.js"><\/script>')</script>
<script src="popper.js"></script>
<script src="bootstrap.js"></script>
<!-- Icons -->
<script src="feather.js"></script>
<script>
feather.replace()
</script>
</body></html>
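The dashboard page above repeats one identical `<div><img></div>` block per metric and time range (hour, 12hours, day, week, month, year). A small generator sketch makes the pattern explicit; this is a hypothetical illustration, not part of the repository:

```go
package main

import "fmt"

// graphDiv returns one Bootstrap-styled graph block, mirroring the
// repeated markup in the dashboard pages above.
func graphDiv(metric, span string) string {
	return fmt.Sprintf(`<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="%s_past_%s.png" class="img-fluid" alt="%s (past %s)">
</div>`, metric, span, metric, span)
}

func main() {
	spans := []string{"hour", "12hours", "day", "week", "month", "year"}
	// Metric names taken from the image paths in the page above.
	for _, metric := range []string{"json_lines", "json_bytes", "risky_events"} {
		for _, span := range spans {
			fmt.Println(graphDiv(metric, span))
		}
	}
}
```

Each collectd RRD graph is exported as `<metric>_past_<span>.png`, so regenerating the page is a double loop over metrics and time ranges.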

File diff suppressed because one or more lines are too long


@@ -0,0 +1,186 @@
<!DOCTYPE html>
<html lang="en"><head>
<meta http-equiv="cache-control" content="max-age=0" />
<meta http-equiv="cache-control" content="no-cache" />
<meta http-equiv="expires" content="0" />
<meta http-equiv="expires" content="Tue, 01 Jan 1980 01:00:00 GMT" />
<meta http-equiv="pragma" content="no-cache" />
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<meta name="description" content="nDPId RRD Graph">
<meta name="author" content="Toni Uhlig">
<link rel="icon" href="https://getbootstrap.com/docs/4.0/assets/img/favicons/favicon.ico">
<title>nDPId Dashboard</title>
<link rel="canonical" href="https://getbootstrap.com/docs/4.0/examples/dashboard/">
<!-- Bootstrap core CSS -->
<link href="bootstrap.css" rel="stylesheet">
<!-- Custom styles for this template -->
<link href="dashboard.css" rel="stylesheet">
</head>
<body>
<nav class="navbar navbar-dark sticky-top bg-dark flex-md-nowrap p-0">
<a class="navbar-brand col-sm-3 col-md-2 mr-0" href="https://github.com/utoni/nDPId">nDPId Collectd RRD Graph</a>
</nav>
<div class="container-fluid">
<div class="row">
<nav class="col-md-2 d-none d-md-block bg-light sidebar">
<div class="sidebar-sticky">
<h6 class="sidebar-heading d-flex justify-content-between align-items-center px-3 mt-4 mb-1 text-muted">
<span>Graphs</span>
</h6>
<ul class="nav flex-column mb-2">
<li class="nav-item">
<a class="nav-link" href="index.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Home
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="flows.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line><polyline points="10 9 9 9 8 9"></polyline>
</svg>
Flows
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="other.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Other
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="detections.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Detections
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="categories.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Categories
</a>
</li>
<li class="nav-item">
<a class="nav-link active" href="jsons.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
JSONs
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="events.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Events
</a>
</li>
</ul>
</div>
</nav>
<main role="main" class="col-md-9 ml-sm-auto col-lg-10 pt-3 px-4">
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="json_lines_past_hour.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="json_lines_past_12hours.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="json_lines_past_day.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="json_lines_past_week.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="json_lines_past_month.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="json_lines_past_year.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="json_bytes_past_hour.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="json_bytes_past_12hours.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="json_bytes_past_day.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="json_bytes_past_week.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="json_bytes_past_month.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="json_bytes_past_year.png" class="img-fluid" alt="Responsive image">
</div>
</main>
</div>
</div>
<!-- Bootstrap core JavaScript
================================================== -->
<!-- Placed at the end of the document so the pages load faster -->
<script src="jquery-3.js" integrity="sha384-KJ3o2DKtIkvYIK3UENzmM7KCkRr/rE9/Qpg6aAZGJwFDMVNA/GpGFF93hXpG5KkN" crossorigin="anonymous"></script>
<script>window.jQuery || document.write('<script src="../../assets/js/vendor/jquery-slim.min.js"><\/script>')</script>
<script src="popper.js"></script>
<script src="bootstrap.js"></script>
<!-- Icons -->
<script src="feather.js"></script>
<script>
feather.replace()
</script>
</body></html>


@@ -0,0 +1,205 @@
<!DOCTYPE html>
<html lang="en"><head>
<meta http-equiv="cache-control" content="max-age=0" />
<meta http-equiv="cache-control" content="no-cache" />
<meta http-equiv="expires" content="0" />
<meta http-equiv="expires" content="Tue, 01 Jan 1980 01:00:00 GMT" />
<meta http-equiv="pragma" content="no-cache" />
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<meta name="description" content="nDPId RRD Graph">
<meta name="author" content="Toni Uhlig">
<link rel="icon" href="https://getbootstrap.com/docs/4.0/assets/img/favicons/favicon.ico">
<title>nDPId Dashboard</title>
<link rel="canonical" href="https://getbootstrap.com/docs/4.0/examples/dashboard/">
<!-- Bootstrap core CSS -->
<link href="bootstrap.css" rel="stylesheet">
<!-- Custom styles for this template -->
<link href="dashboard.css" rel="stylesheet">
</head>
<body>
<nav class="navbar navbar-dark sticky-top bg-dark flex-md-nowrap p-0">
<a class="navbar-brand col-sm-3 col-md-2 mr-0" href="https://github.com/utoni/nDPId">nDPId Collectd RRD Graph</a>
</nav>
<div class="container-fluid">
<div class="row">
<nav class="col-md-2 d-none d-md-block bg-light sidebar">
<div class="sidebar-sticky">
<h6 class="sidebar-heading d-flex justify-content-between align-items-center px-3 mt-4 mb-1 text-muted">
<span>Graphs</span>
</h6>
<ul class="nav flex-column mb-2">
<li class="nav-item">
<a class="nav-link" href="index.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Home
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="flows.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line><polyline points="10 9 9 9 8 9"></polyline>
</svg>
Flows
</a>
</li>
<li class="nav-item">
<a class="nav-link active" href="other.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Other
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="detections.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Detections
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="categories.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Categories
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="jsons.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
JSONs
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="events.html">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file-text">
<path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"></path>
<polyline points="14 2 14 8 20 8"></polyline>
<line x1="16" y1="13" x2="8" y2="13"></line>
<line x1="16" y1="17" x2="8" y2="17"></line>
<polyline points="10 9 9 9 8 9"></polyline>
</svg>
Events
</a>
</li>
</ul>
</div>
</nav>
<main role="main" class="col-md-9 ml-sm-auto col-lg-10 pt-3 px-4">
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="traffic_past_hour.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="traffic_past_12hours.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="traffic_past_day.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="traffic_past_week.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="traffic_past_month.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="traffic_past_year.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="layer3_past_hour.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="layer3_past_12hours.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="layer3_past_day.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="layer3_past_week.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="layer3_past_month.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="layer3_past_year.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="layer4_past_hour.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="layer4_past_12hours.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="layer4_past_day.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="layer4_past_week.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="layer4_past_month.png" class="img-fluid" alt="Responsive image">
</div>
<div class="d-flex justify-content-center flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
<img src="layer4_past_year.png" class="img-fluid" alt="Responsive image">
</div>
</main>
</div>
</div>
<!-- Bootstrap core JavaScript
================================================== -->
<!-- Placed at the end of the document so the pages load faster -->
<script src="jquery-3.js" integrity="sha384-KJ3o2DKtIkvYIK3UENzmM7KCkRr/rE9/Qpg6aAZGJwFDMVNA/GpGFF93hXpG5KkN" crossorigin="anonymous"></script>
<script>window.jQuery || document.write('<script src="../../assets/js/vendor/jquery-slim.min.js"><\/script>')</script>
<script src="popper.js"></script>
<script src="bootstrap.js"></script>
<!-- Icons -->
<script src="feather.js"></script>
<script>
feather.replace()
</script>
</body></html>

File diff suppressed because one or more lines are too long


@@ -115,6 +115,9 @@ int main(void)
{
if (i % 2 == 1)
{
#ifdef JSMN_PARENT_LINKS
printf("[%d][%d]", i, tokens[i].parent);
#endif
printf("[%.*s : ", tokens[i].end - tokens[i].start, (char *)(buf + json_start) + tokens[i].start);
}
else


@@ -0,0 +1,267 @@
#include <errno.h>
#include <signal.h>
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include "nDPIsrvd.h"
static int main_thread_shutdown = 0;
static struct nDPIsrvd_socket * sock = NULL;
#ifdef ENABLE_MEMORY_PROFILING
void nDPIsrvd_memprof_log_alloc(size_t alloc_size)
{
(void)alloc_size;
}
void nDPIsrvd_memprof_log_free(size_t free_size)
{
(void)free_size;
}
void nDPIsrvd_memprof_log(char const * const format, ...)
{
va_list ap;
va_start(ap, format);
fprintf(stderr, "%s", "nDPIsrvd MemoryProfiler: ");
vfprintf(stderr, format, ap);
fprintf(stderr, "%s\n", "");
va_end(ap);
}
#endif
static void nDPIsrvd_write_flow_info_cb(struct nDPIsrvd_socket const * sock,
struct nDPIsrvd_instance const * instance,
struct nDPIsrvd_thread_data const * thread_data,
struct nDPIsrvd_flow const * flow,
void * user_data)
{
(void)sock;
(void)instance;
(void)user_data;
fprintf(stderr,
"[Thread %2d][Flow %5llu][ptr: "
#ifdef __LP64__
"0x%016llx"
#else
"0x%08lx"
#endif
"][last-seen: %13llu][idle-time: %7llu][time-until-timeout: %7llu]\n",
flow->thread_id,
flow->id_as_ull,
#ifdef __LP64__
(unsigned long long int)flow,
#else
(unsigned long int)flow,
#endif
flow->last_seen,
flow->idle_time,
(flow->last_seen + flow->idle_time >= thread_data->most_recent_flow_time
? flow->last_seen + flow->idle_time - thread_data->most_recent_flow_time
: 0));
}
static void nDPIsrvd_verify_flows_cb(struct nDPIsrvd_thread_data const * const thread_data,
struct nDPIsrvd_flow const * const flow,
void * user_data)
{
(void)user_data;
if (thread_data != NULL)
{
if (flow->last_seen + flow->idle_time >= thread_data->most_recent_flow_time)
{
fprintf(stderr,
"Thread %d / %d, Flow %llu verification failed\n",
thread_data->thread_key,
flow->thread_id,
flow->id_as_ull);
}
else
{
fprintf(stderr,
"Thread %d / %d, Flow %llu verification failed, diff: %llu\n",
thread_data->thread_key,
flow->thread_id,
flow->id_as_ull,
thread_data->most_recent_flow_time - (flow->last_seen + flow->idle_time));
}
}
else
{
fprintf(stderr, "Thread [UNKNOWN], Flow %llu verification failed\n", flow->id_as_ull);
}
}
static void sighandler(int signum)
{
struct nDPIsrvd_instance * current_instance;
struct nDPIsrvd_instance * itmp;
int verification_failed = 0;
if (signum == SIGUSR1)
{
nDPIsrvd_flow_info(sock, nDPIsrvd_write_flow_info_cb, NULL);
HASH_ITER(hh, sock->instance_table, current_instance, itmp)
{
if (nDPIsrvd_verify_flows(current_instance, nDPIsrvd_verify_flows_cb, NULL) != 0)
{
fprintf(stderr, "Flow verification failed for instance %d\n", current_instance->alias_source_key);
verification_failed = 1;
}
}
if (verification_failed == 0)
{
fprintf(stderr, "%s\n", "Flow verification succeeded.");
}
else
{
/* FATAL! */
exit(EXIT_FAILURE);
}
}
else if (main_thread_shutdown == 0)
{
main_thread_shutdown = 1;
}
}
static enum nDPIsrvd_callback_return simple_json_callback(struct nDPIsrvd_socket * const sock,
struct nDPIsrvd_instance * const instance,
struct nDPIsrvd_thread_data * const thread_data,
struct nDPIsrvd_flow * const flow)
{
(void)sock;
(void)thread_data;
if (flow == NULL)
{
return CALLBACK_OK;
}
struct nDPIsrvd_json_token const * const alias = TOKEN_GET_SZ(sock, "alias");
struct nDPIsrvd_json_token const * const source = TOKEN_GET_SZ(sock, "source");
if (alias == NULL || source == NULL)
{
return CALLBACK_ERROR;
}
struct nDPIsrvd_json_token const * const flow_event_name = TOKEN_GET_SZ(sock, "flow_event_name");
if (TOKEN_VALUE_EQUALS_SZ(sock, flow_event_name, "new") != 0)
{
printf("Instance %.*s/%.*s (HT-Key: 0x%x), Thread %d, Flow %llu new\n",
nDPIsrvd_get_token_size(sock, alias),
nDPIsrvd_get_token_value(sock, alias),
nDPIsrvd_get_token_size(sock, source),
nDPIsrvd_get_token_value(sock, source),
instance->alias_source_key,
flow->thread_id,
flow->id_as_ull);
}
return CALLBACK_OK;
}
static void simple_flow_cleanup_callback(struct nDPIsrvd_socket * const sock,
struct nDPIsrvd_instance * const instance,
struct nDPIsrvd_thread_data * const thread_data,
struct nDPIsrvd_flow * const flow,
enum nDPIsrvd_cleanup_reason reason)
{
(void)sock;
(void)thread_data;
struct nDPIsrvd_json_token const * const alias = TOKEN_GET_SZ(sock, "alias");
struct nDPIsrvd_json_token const * const source = TOKEN_GET_SZ(sock, "source");
if (alias == NULL || source == NULL)
{
/* FATAL! */
fprintf(stderr, "BUG: Missing JSON token alias/source.\n");
exit(EXIT_FAILURE);
}
char const * const reason_str = nDPIsrvd_enum_to_string(reason);
printf("Instance %.*s/%.*s (HT-Key: 0x%x), Thread %d, Flow %llu cleanup, reason: %s\n",
nDPIsrvd_get_token_size(sock, alias),
nDPIsrvd_get_token_value(sock, alias),
nDPIsrvd_get_token_size(sock, source),
nDPIsrvd_get_token_value(sock, source),
instance->alias_source_key,
flow->thread_id,
flow->id_as_ull,
(reason_str != NULL ? reason_str : "UNKNOWN"));
if (reason == CLEANUP_REASON_FLOW_TIMEOUT)
{
/* FATAL! */
fprintf(stderr, "Flow %llu timed out.\n", flow->id_as_ull);
exit(EXIT_FAILURE);
}
}
int main(int argc, char ** argv)
{
signal(SIGUSR1, sighandler);
signal(SIGINT, sighandler);
signal(SIGTERM, sighandler);
signal(SIGPIPE, sighandler);
sock = nDPIsrvd_socket_init(0, 0, 0, 0, simple_json_callback, NULL, simple_flow_cleanup_callback);
if (sock == NULL)
{
return 1;
}
if (nDPIsrvd_setup_address(&sock->address, (argc > 1 ? argv[1] : "127.0.0.1:7000")) != 0)
{
return 1;
}
if (nDPIsrvd_connect(sock) != CONNECT_OK)
{
nDPIsrvd_socket_free(&sock);
return 1;
}
if (nDPIsrvd_set_read_timeout(sock, 3, 0) != 0)
{
return 1;
}
enum nDPIsrvd_read_return read_ret = READ_OK;
while (main_thread_shutdown == 0)
{
read_ret = nDPIsrvd_read(sock);
if (errno == EINTR)
{
continue;
}
if (read_ret == READ_TIMEOUT)
{
printf("No data received during the last %llu second(s).\n",
(long long unsigned int)sock->read_timeout.tv_sec);
continue;
}
if (read_ret != READ_OK)
{
break;
}
enum nDPIsrvd_parse_return parse_ret = nDPIsrvd_parse_all(sock);
if (parse_ret != PARSE_NEED_MORE_DATA)
{
printf("Could not parse json string: %s\n", nDPIsrvd_enum_to_string(parse_ret));
break;
}
}
if (main_thread_shutdown == 0 && read_ret != READ_OK)
{
printf("Parse read %s\n", nDPIsrvd_enum_to_string(read_ret));
}
return 1;
}


@@ -1,9 +0,0 @@
module github.com/lnslbrty/nDPId/examples/go-dashboard
go 1.14
require (
ui v0.0.0-00010101000000-000000000000
)
replace ui => ./ui


@@ -1,218 +0,0 @@
package main
import (
"bufio"
"encoding/json"
"fmt"
"io"
"log"
"net"
"os"
"strconv"
"strings"
"ui"
)
var (
WarningLogger *log.Logger
InfoLogger *log.Logger
ErrorLogger *log.Logger
NETWORK_BUFFER_MAX_SIZE uint16 = 12288
NETWORK_BUFFER_LENGTH_DIGITS uint16 = 5
)
type packet_event struct {
ThreadID uint8 `json:"thread_id"`
PacketID uint64 `json:"packet_id"`
FlowID uint32 `json:"flow_id"`
FlowPacketID uint64 `json:"flow_packet_id"`
PacketEventID uint8 `json:"packet_event_id"`
PacketEventName string `json:"packet_event_name"`
PacketOversize bool `json:"pkt_oversize"`
PacketTimestampS uint64 `json:"pkt_ts_sec"`
PacketTimestampUs uint64 `json:"pkt_ts_usec"`
PacketLength uint32 `json:"pkt_len"`
PacketL4Length uint32 `json:"pkt_l4_len"`
Packet string `json:"pkt"`
PacketCaptureLength uint32 `json:"pkt_caplen"`
PacketType uint32 `json:"pkt_type"`
PacketL3Offset uint32 `json:"pkt_l3_offset"`
PacketL4Offset uint32 `json:"pkt_l4_offset"`
}
type flow_event struct {
ThreadID uint8 `json:"thread_id"`
PacketID uint64 `json:"packet_id"`
FlowID uint32 `json:"flow_id"`
FlowPacketID uint64 `json:"flow_packet_id"`
FlowFirstSeen uint64 `json:"flow_first_seen"`
FlowLastSeen uint64 `json:"flow_last_seen"`
FlowTotalLayer4DataLength uint64 `json:"flow_tot_l4_data_len"`
FlowMinLayer4DataLength uint64 `json:"flow_min_l4_data_len"`
FlowMaxLayer4DataLength uint64 `json:"flow_max_l4_data_len"`
FlowAvgLayer4DataLength uint64 `json:"flow_avg_l4_data_len"`
FlowDatalinkLayer uint8 `json:"flow_datalink"`
MaxPackets uint8 `json:"flow_max_packets"`
IsMidstreamFlow uint32 `json:"midstream"`
}
type basic_event struct {
ThreadID uint8 `json:"thread_id"`
PacketID uint64 `json:"packet_id"`
BasicEventID uint8 `json:"basic_event_id"`
BasicEventName string `json:"basic_event_name"`
}
func processJson(jsonStr string) {
jsonMap := make(map[string]interface{})
err := json.Unmarshal([]byte(jsonStr), &jsonMap)
if err != nil {
ErrorLogger.Printf("BUG: JSON error: %v\n", err)
os.Exit(1)
}
if jsonMap["packet_event_id"] != nil {
pe := packet_event{}
if err := json.Unmarshal([]byte(jsonStr), &pe); err != nil {
ErrorLogger.Printf("BUG: JSON Unmarshal error: %v\n", err)
os.Exit(1)
}
InfoLogger.Printf("PACKET EVENT %v\n", pe)
} else if jsonMap["flow_event_id"] != nil {
fe := flow_event{}
if err := json.Unmarshal([]byte(jsonStr), &fe); err != nil {
ErrorLogger.Printf("BUG: JSON Unmarshal error: %v\n", err)
os.Exit(1)
}
InfoLogger.Printf("FLOW EVENT %v\n", fe)
} else if jsonMap["basic_event_id"] != nil {
be := basic_event{}
if err := json.Unmarshal([]byte(jsonStr), &be); err != nil {
ErrorLogger.Printf("BUG: JSON Unmarshal error: %v\n", err)
os.Exit(1)
}
InfoLogger.Printf("BASIC EVENT %v\n", be)
} else {
ErrorLogger.Printf("BUG: Unknown JSON: %v\n", jsonStr)
os.Exit(1)
}
//InfoLogger.Printf("JSON map: %v\n-------------------------------------------------------\n", jsonMap)
}
func eventHandler(ui *ui.Tui, wdgts *ui.Widgets, reader chan string) {
for {
select {
case <-ui.MainTicker.C:
if err := wdgts.RawJson.Write(fmt.Sprintf("%s\n", "--- HEARTBEAT ---")); err != nil {
panic(err)
}
case <-ui.Context.Done():
return
case jsonStr := <-reader:
if err := wdgts.RawJson.Write(fmt.Sprintf("%s\n", jsonStr)); err != nil {
panic(err)
}
}
}
}
func main() {
InfoLogger = log.New(os.Stderr, "INFO: ", log.Ldate|log.Ltime|log.Lshortfile)
WarningLogger = log.New(os.Stderr, "WARNING: ", log.Ldate|log.Ltime|log.Lshortfile)
ErrorLogger = log.New(os.Stderr, "ERROR: ", log.Ldate|log.Ltime|log.Lshortfile)
writer := make(chan string, 256)
go func(writer chan string) {
con, err := net.Dial("tcp", "127.0.0.1:7000")
if err != nil {
ErrorLogger.Printf("Connection failed: %v\n", err)
os.Exit(1)
}
buf := make([]byte, NETWORK_BUFFER_MAX_SIZE)
jsonStr := string("")
jsonStrLen := uint16(0)
jsonLen := uint16(0)
brd := bufio.NewReaderSize(con, int(NETWORK_BUFFER_MAX_SIZE))
for {
nread, err := brd.Read(buf)
if err != nil {
if err != io.EOF {
ErrorLogger.Printf("Read Error: %v\n", err)
break
}
}
if nread == 0 || err == io.EOF {
WarningLogger.Printf("Disconnect from Server\n")
break
}
jsonStr += string(buf[:nread])
jsonStrLen += uint16(nread)
for {
if jsonStrLen < NETWORK_BUFFER_LENGTH_DIGITS+1 {
break
}
if jsonStr[NETWORK_BUFFER_LENGTH_DIGITS] != '{' {
ErrorLogger.Printf("BUG: JSON invalid opening character at position %d: '%s' (%x)\n",
NETWORK_BUFFER_LENGTH_DIGITS,
string(jsonStr[:NETWORK_BUFFER_LENGTH_DIGITS]), jsonStr[NETWORK_BUFFER_LENGTH_DIGITS])
os.Exit(1)
}
if jsonLen == 0 {
var tmp uint64
if tmp, err = strconv.ParseUint(strings.TrimLeft(jsonStr[:NETWORK_BUFFER_LENGTH_DIGITS], "0"), 10, 16); err != nil {
ErrorLogger.Printf("BUG: Could not parse length of a JSON string: %v\n", err)
os.Exit(1)
} else {
jsonLen = uint16(tmp)
}
}
if jsonStrLen < jsonLen+NETWORK_BUFFER_LENGTH_DIGITS {
break
}
if jsonStr[jsonLen+NETWORK_BUFFER_LENGTH_DIGITS-2] != '}' || jsonStr[jsonLen+NETWORK_BUFFER_LENGTH_DIGITS-1] != '\n' {
ErrorLogger.Printf("BUG: JSON invalid closing character at position %d: '%s'\n",
jsonLen+NETWORK_BUFFER_LENGTH_DIGITS,
string(jsonStr[jsonLen+NETWORK_BUFFER_LENGTH_DIGITS-1]))
os.Exit(1)
}
writer <- jsonStr[NETWORK_BUFFER_LENGTH_DIGITS : NETWORK_BUFFER_LENGTH_DIGITS+jsonLen]
jsonStr = jsonStr[jsonLen+NETWORK_BUFFER_LENGTH_DIGITS:]
jsonStrLen -= (jsonLen + NETWORK_BUFFER_LENGTH_DIGITS)
jsonLen = 0
}
}
}(writer)
tui, wdgts := ui.Init()
go eventHandler(tui, wdgts, writer)
ui.Run(tui)
/*
for {
select {
case _ = <-writer:
break
}
}
*/
}
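The reader goroutine above implements nDPIsrvd's framing: every message carries a zero-padded decimal length prefix of NETWORK_BUFFER_LENGTH_DIGITS digits that counts the JSON text itself, including the terminating `}` and newline. A minimal standalone sketch of that framing logic, assuming a 5-digit header (the helper `nextMessage` is illustrative, not part of the dashboard code):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// lengthDigits mirrors NETWORK_BUFFER_LENGTH_DIGITS (assumed to be 5
// here); the prefix counts the JSON text including the trailing "}\n".
const lengthDigits = 5

// nextMessage tries to split one framed JSON message off the front of
// buf. It returns the JSON text, the remaining buffer, and whether a
// complete message was available.
func nextMessage(buf string) (msg, rest string, ok bool, err error) {
	if len(buf) < lengthDigits+1 {
		return "", buf, false, nil // need more data
	}
	n, err := strconv.ParseUint(strings.TrimLeft(buf[:lengthDigits], "0"), 10, 16)
	if err != nil {
		return "", buf, false, fmt.Errorf("invalid length header %q: %w", buf[:lengthDigits], err)
	}
	jsonLen := int(n)
	if len(buf) < lengthDigits+jsonLen {
		return "", buf, false, nil // message not fully received yet
	}
	msg = buf[lengthDigits : lengthDigits+jsonLen]
	if msg[0] != '{' || !strings.HasSuffix(msg, "}\n") {
		return "", buf, false, fmt.Errorf("malformed JSON framing: %q", msg)
	}
	return msg, buf[lengthDigits+jsonLen:], true, nil
}

func main() {
	payload := "{\"flow_event_name\":\"new\"}\n"
	// Two back-to-back frames, as they may arrive in a single read.
	framed := fmt.Sprintf("%05d%s", len(payload), payload) + "00008{\"a\":1}\n"
	for {
		msg, rest, ok, err := nextMessage(framed)
		if err != nil || !ok {
			break
		}
		fmt.Print(msg)
		framed = rest
	}
}
```

Unlike the inline parser above, this version returns errors to the caller instead of exiting, which makes the framing logic testable in isolation.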


@@ -1,8 +0,0 @@
module github.com/lnslbrty/nDPId/examples/go-dashboard/ui
go 1.14
require (
github.com/mum4k/termdash v0.12.3-0.20200901030524-fe3e97353191
github.com/nsf/termbox-go v0.0.0-20200418040025-38ba6e5628f1 // indirect
)


@@ -1,104 +0,0 @@
package ui
import (
"context"
"time"
"github.com/mum4k/termdash"
"github.com/mum4k/termdash/container"
"github.com/mum4k/termdash/keyboard"
"github.com/mum4k/termdash/linestyle"
"github.com/mum4k/termdash/terminal/termbox"
"github.com/mum4k/termdash/terminal/terminalapi"
"github.com/mum4k/termdash/widgets/text"
)
const rootID = "root"
const redrawInterval = 250 * time.Millisecond
type Tui struct {
Term terminalapi.Terminal
Context context.Context
Cancel context.CancelFunc
Container *container.Container
MainTicker *time.Ticker
}
type Widgets struct {
Menu *text.Text
RawJson *text.Text
}
func newWidgets(ctx context.Context) (*Widgets, error) {
menu, err := text.New()
if err != nil {
panic(err)
}
rawJson, err := text.New(text.RollContent(), text.WrapAtWords())
if err != nil {
panic(err)
}
return &Widgets{
Menu: menu,
RawJson: rawJson,
}, nil
}
func Init() (*Tui, *Widgets) {
var err error
ui := Tui{}
ui.Term, err = termbox.New(termbox.ColorMode(terminalapi.ColorMode256))
if err != nil {
panic(err)
}
ui.Context, ui.Cancel = context.WithCancel(context.Background())
wdgts, err := newWidgets(ui.Context)
if err != nil {
panic(err)
}
ui.Container, err = container.New(ui.Term,
container.Border(linestyle.None),
container.BorderTitle("[ESC to Quit]"),
container.SplitHorizontal(
container.Top(
container.Border(linestyle.Light),
container.BorderTitle("Go nDPId Dashboard"),
container.PlaceWidget(wdgts.Menu),
),
container.Bottom(
container.Border(linestyle.Light),
container.BorderTitle("Raw JSON"),
container.PlaceWidget(wdgts.RawJson),
),
container.SplitFixed(3),
),
)
if err != nil {
panic(err)
}
ui.MainTicker = time.NewTicker(1 * time.Second)
return &ui, wdgts
}
func Run(ui *Tui) {
defer ui.Term.Close()
quitter := func(k *terminalapi.Keyboard) {
if k.Key == keyboard.KeyEsc || k.Key == keyboard.KeyCtrlC {
ui.Cancel()
}
}
if err := termdash.Run(ui.Context, ui.Term, ui.Container, termdash.KeyboardSubscriber(quitter), termdash.RedrawInterval(redrawInterval)); err != nil {
panic(err)
}
}


@@ -1,16 +0,0 @@
language: go
sudo: false
go:
- 1.13.x
- tip
before_install:
- go get -t -v ./...
script:
- go generate
- git diff --cached --exit-code
- ./go.test.sh
after_success:
- bash <(curl -s https://codecov.io/bash)


@@ -1,21 +0,0 @@
The MIT License (MIT)
Copyright (c) 2016 Yasuhiro Matsumoto
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


@@ -1,27 +0,0 @@
go-runewidth
============
[![Build Status](https://travis-ci.org/mattn/go-runewidth.png?branch=master)](https://travis-ci.org/mattn/go-runewidth)
[![Codecov](https://codecov.io/gh/mattn/go-runewidth/branch/master/graph/badge.svg)](https://codecov.io/gh/mattn/go-runewidth)
[![GoDoc](https://godoc.org/github.com/mattn/go-runewidth?status.svg)](http://godoc.org/github.com/mattn/go-runewidth)
[![Go Report Card](https://goreportcard.com/badge/github.com/mattn/go-runewidth)](https://goreportcard.com/report/github.com/mattn/go-runewidth)
Provides functions to get the fixed width of a character or string.
Usage
-----
```go
runewidth.StringWidth("つのだ☆HIRO") == 12
```
Author
------
Yasuhiro Matsumoto
License
-------
under the MIT License: http://mattn.mit-license.org/2013


@@ -1,3 +0,0 @@
module github.com/mattn/go-runewidth
go 1.9


@@ -1,12 +0,0 @@
#!/usr/bin/env bash
set -e
echo "" > coverage.txt
for d in $(go list ./... | grep -v vendor); do
go test -race -coverprofile=profile.out -covermode=atomic "$d"
if [ -f profile.out ]; then
cat profile.out >> coverage.txt
rm profile.out
fi
done


@@ -1,257 +0,0 @@
package runewidth
import (
"os"
)
//go:generate go run script/generate.go
var (
// EastAsianWidth will be set true if the current locale is CJK
EastAsianWidth bool
// ZeroWidthJoiner is a flag that enables UTR#51 ZWJ handling
ZeroWidthJoiner bool
// DefaultCondition is the Condition for the current locale
DefaultCondition = &Condition{}
)
func init() {
handleEnv()
}
func handleEnv() {
env := os.Getenv("RUNEWIDTH_EASTASIAN")
if env == "" {
EastAsianWidth = IsEastAsian()
} else {
EastAsianWidth = env == "1"
}
// update DefaultCondition
DefaultCondition.EastAsianWidth = EastAsianWidth
DefaultCondition.ZeroWidthJoiner = ZeroWidthJoiner
}
type interval struct {
first rune
last rune
}
type table []interval
func inTables(r rune, ts ...table) bool {
for _, t := range ts {
if inTable(r, t) {
return true
}
}
return false
}
func inTable(r rune, t table) bool {
if r < t[0].first {
return false
}
bot := 0
top := len(t) - 1
for top >= bot {
mid := (bot + top) >> 1
switch {
case t[mid].last < r:
bot = mid + 1
case t[mid].first > r:
top = mid - 1
default:
return true
}
}
return false
}
var private = table{
{0x00E000, 0x00F8FF}, {0x0F0000, 0x0FFFFD}, {0x100000, 0x10FFFD},
}
var nonprint = table{
{0x0000, 0x001F}, {0x007F, 0x009F}, {0x00AD, 0x00AD},
{0x070F, 0x070F}, {0x180B, 0x180E}, {0x200B, 0x200F},
{0x2028, 0x202E}, {0x206A, 0x206F}, {0xD800, 0xDFFF},
{0xFEFF, 0xFEFF}, {0xFFF9, 0xFFFB}, {0xFFFE, 0xFFFF},
}
// Condition holds the EastAsianWidth flag, which reports whether the current locale is CJK.
type Condition struct {
EastAsianWidth bool
ZeroWidthJoiner bool
}
// NewCondition returns a new Condition based on the current locale.
func NewCondition() *Condition {
return &Condition{
EastAsianWidth: EastAsianWidth,
ZeroWidthJoiner: ZeroWidthJoiner,
}
}
// RuneWidth returns the number of cells in r.
// See http://www.unicode.org/reports/tr11/
func (c *Condition) RuneWidth(r rune) int {
switch {
case r < 0 || r > 0x10FFFF || inTables(r, nonprint, combining, notassigned):
return 0
case (c.EastAsianWidth && IsAmbiguousWidth(r)) || inTables(r, doublewidth):
return 2
default:
return 1
}
}
func (c *Condition) stringWidth(s string) (width int) {
for _, r := range []rune(s) {
width += c.RuneWidth(r)
}
return width
}
func (c *Condition) stringWidthZeroJoiner(s string) (width int) {
r1, r2 := rune(0), rune(0)
for _, r := range []rune(s) {
if r == 0xFE0E || r == 0xFE0F {
continue
}
w := c.RuneWidth(r)
if r2 == 0x200D && inTables(r, emoji) && inTables(r1, emoji) {
if width < w {
width = w
}
} else {
width += w
}
r1, r2 = r2, r
}
return width
}
// StringWidth returns the displayed width of s in cells.
func (c *Condition) StringWidth(s string) (width int) {
if c.ZeroWidthJoiner {
return c.stringWidthZeroJoiner(s)
}
return c.stringWidth(s)
}
// Truncate returns s truncated to at most w cells, with tail appended.
func (c *Condition) Truncate(s string, w int, tail string) string {
if c.StringWidth(s) <= w {
return s
}
r := []rune(s)
tw := c.StringWidth(tail)
w -= tw
width := 0
i := 0
for ; i < len(r); i++ {
cw := c.RuneWidth(r[i])
if width+cw > w {
break
}
width += cw
}
return string(r[0:i]) + tail
}
// Wrap returns s wrapped at w cells per line.
func (c *Condition) Wrap(s string, w int) string {
width := 0
out := ""
for _, r := range []rune(s) {
cw := RuneWidth(r)
if r == '\n' {
out += string(r)
width = 0
continue
} else if width+cw > w {
out += "\n"
width = 0
out += string(r)
width += cw
continue
}
out += string(r)
width += cw
}
return out
}
// FillLeft returns s left-padded with spaces to w cells.
func (c *Condition) FillLeft(s string, w int) string {
width := c.StringWidth(s)
count := w - width
if count > 0 {
b := make([]byte, count)
for i := range b {
b[i] = ' '
}
return string(b) + s
}
return s
}
// FillRight returns s right-padded with spaces to w cells.
func (c *Condition) FillRight(s string, w int) string {
width := c.StringWidth(s)
count := w - width
if count > 0 {
b := make([]byte, count)
for i := range b {
b[i] = ' '
}
return s + string(b)
}
return s
}
// RuneWidth returns the number of cells in r.
// See http://www.unicode.org/reports/tr11/
func RuneWidth(r rune) int {
return DefaultCondition.RuneWidth(r)
}
// IsAmbiguousWidth reports whether r has ambiguous width.
func IsAmbiguousWidth(r rune) bool {
return inTables(r, private, ambiguous)
}
// IsNeutralWidth reports whether r has neutral width.
func IsNeutralWidth(r rune) bool {
return inTable(r, neutral)
}
// StringWidth returns the displayed width of s in cells.
func StringWidth(s string) (width int) {
return DefaultCondition.StringWidth(s)
}
// Truncate returns s truncated to at most w cells, with tail appended.
func Truncate(s string, w int, tail string) string {
return DefaultCondition.Truncate(s, w, tail)
}
// Wrap returns s wrapped at w cells per line.
func Wrap(s string, w int) string {
return DefaultCondition.Wrap(s, w)
}
// FillLeft returns s left-padded with spaces to w cells.
func FillLeft(s string, w int) string {
return DefaultCondition.FillLeft(s, w)
}
// FillRight returns s right-padded with spaces to w cells.
func FillRight(s string, w int) string {
return DefaultCondition.FillRight(s, w)
}
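RuneWidth classifies each rune by a binary search over sorted, non-overlapping `[first, last]` interval tables. A minimal standalone version of that lookup, using a three-entry illustrative table rather than the real generated Unicode data:

```go
package main

import "fmt"

type interval struct{ first, last rune }
type table []interval

// inTable mirrors the lookup above: t must be sorted by first and
// non-overlapping, so a binary search decides membership in O(log n).
func inTable(r rune, t table) bool {
	if len(t) == 0 || r < t[0].first {
		return false
	}
	bot, top := 0, len(t)-1
	for top >= bot {
		mid := (bot + top) >> 1
		switch {
		case t[mid].last < r:
			bot = mid + 1
		case t[mid].first > r:
			top = mid - 1
		default:
			return true
		}
	}
	return false
}

func main() {
	// Illustrative stand-in for the generated doublewidth table:
	// Hangul Jamo, CJK Unified Ideographs, Hangul Syllables only.
	wide := table{{0x1100, 0x115F}, {0x4E00, 0x9FFF}, {0xAC00, 0xD7A3}}
	fmt.Println(inTable('漢', wide)) // '漢' is U+6F22, inside 0x4E00..0x9FFF
	fmt.Println(inTable('A', wide))
}
```

Keeping the tables sorted and merged at generation time is what makes this cheap: width classification costs a handful of comparisons per rune even though the real tables hold hundreds of ranges.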


@@ -1,8 +0,0 @@
// +build appengine
package runewidth
// IsEastAsian returns true if the current locale is CJK
func IsEastAsian() bool {
return false
}


@@ -1,9 +0,0 @@
// +build js
// +build !appengine
package runewidth
func IsEastAsian() bool {
// TODO: Implement this for the web. Detect east asian in a compatible way, and return true.
return false
}


@@ -1,82 +0,0 @@
// +build !windows
// +build !js
// +build !appengine
package runewidth
import (
"os"
"regexp"
"strings"
)
var reLoc = regexp.MustCompile(`^[a-z][a-z][a-z]?(?:_[A-Z][A-Z])?\.(.+)`)
var mblenTable = map[string]int{
"utf-8": 6,
"utf8": 6,
"jis": 8,
"eucjp": 3,
"euckr": 2,
"euccn": 2,
"sjis": 2,
"cp932": 2,
"cp51932": 2,
"cp936": 2,
"cp949": 2,
"cp950": 2,
"big5": 2,
"gbk": 2,
"gb2312": 2,
}
func isEastAsian(locale string) bool {
charset := strings.ToLower(locale)
r := reLoc.FindStringSubmatch(locale)
if len(r) == 2 {
charset = strings.ToLower(r[1])
}
if strings.HasSuffix(charset, "@cjk_narrow") {
return false
}
for pos, b := range []byte(charset) {
if b == '@' {
charset = charset[:pos]
break
}
}
max := 1
if m, ok := mblenTable[charset]; ok {
max = m
}
if max > 1 && (charset[0] != 'u' ||
strings.HasPrefix(locale, "ja") ||
strings.HasPrefix(locale, "ko") ||
strings.HasPrefix(locale, "zh")) {
return true
}
return false
}
// IsEastAsian returns true if the current locale is CJK
func IsEastAsian() bool {
locale := os.Getenv("LC_ALL")
if locale == "" {
locale = os.Getenv("LC_CTYPE")
}
if locale == "" {
locale = os.Getenv("LANG")
}
// ignore C locale
if locale == "POSIX" || locale == "C" {
return false
}
if len(locale) > 1 && locale[0] == 'C' && (locale[1] == '.' || locale[1] == '-') {
return false
}
return isEastAsian(locale)
}
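IsEastAsian keys off the charset part of the locale string, extracted with reLoc and looked up in mblenTable. A standalone sketch of just the charset extraction (regex copied from the file above; the helper name `charsetOf` is hypothetical):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Same pattern as reLoc above: language[_REGION].CHARSET, e.g. "ja_JP.UTF-8".
var reLoc = regexp.MustCompile(`^[a-z][a-z][a-z]?(?:_[A-Z][A-Z])?\.(.+)`)

// charsetOf extracts the charset part of a locale string, falling back
// to the whole lowercased string when the pattern does not match.
func charsetOf(locale string) string {
	if m := reLoc.FindStringSubmatch(locale); len(m) == 2 {
		return strings.ToLower(m[1])
	}
	return strings.ToLower(locale)
}

func main() {
	for _, loc := range []string{"ja_JP.UTF-8", "zh_CN.eucCN", "POSIX"} {
		fmt.Printf("%s -> %s\n", loc, charsetOf(loc))
	}
}
```

The extracted charset then selects a maximum multibyte length from mblenTable; only multibyte, non-UTF locales (or UTF locales with a ja/ko/zh language code) count as East Asian.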


@@ -1,437 +0,0 @@
// Code generated by script/generate.go. DO NOT EDIT.
package runewidth
var combining = table{
{0x0300, 0x036F}, {0x0483, 0x0489}, {0x07EB, 0x07F3},
{0x0C00, 0x0C00}, {0x0C04, 0x0C04}, {0x0D00, 0x0D01},
{0x135D, 0x135F}, {0x1A7F, 0x1A7F}, {0x1AB0, 0x1AC0},
{0x1B6B, 0x1B73}, {0x1DC0, 0x1DF9}, {0x1DFB, 0x1DFF},
{0x20D0, 0x20F0}, {0x2CEF, 0x2CF1}, {0x2DE0, 0x2DFF},
{0x3099, 0x309A}, {0xA66F, 0xA672}, {0xA674, 0xA67D},
{0xA69E, 0xA69F}, {0xA6F0, 0xA6F1}, {0xA8E0, 0xA8F1},
{0xFE20, 0xFE2F}, {0x101FD, 0x101FD}, {0x10376, 0x1037A},
{0x10EAB, 0x10EAC}, {0x10F46, 0x10F50}, {0x11300, 0x11301},
{0x1133B, 0x1133C}, {0x11366, 0x1136C}, {0x11370, 0x11374},
{0x16AF0, 0x16AF4}, {0x1D165, 0x1D169}, {0x1D16D, 0x1D172},
{0x1D17B, 0x1D182}, {0x1D185, 0x1D18B}, {0x1D1AA, 0x1D1AD},
{0x1D242, 0x1D244}, {0x1E000, 0x1E006}, {0x1E008, 0x1E018},
{0x1E01B, 0x1E021}, {0x1E023, 0x1E024}, {0x1E026, 0x1E02A},
{0x1E8D0, 0x1E8D6},
}
var doublewidth = table{
{0x1100, 0x115F}, {0x231A, 0x231B}, {0x2329, 0x232A},
{0x23E9, 0x23EC}, {0x23F0, 0x23F0}, {0x23F3, 0x23F3},
{0x25FD, 0x25FE}, {0x2614, 0x2615}, {0x2648, 0x2653},
{0x267F, 0x267F}, {0x2693, 0x2693}, {0x26A1, 0x26A1},
{0x26AA, 0x26AB}, {0x26BD, 0x26BE}, {0x26C4, 0x26C5},
{0x26CE, 0x26CE}, {0x26D4, 0x26D4}, {0x26EA, 0x26EA},
{0x26F2, 0x26F3}, {0x26F5, 0x26F5}, {0x26FA, 0x26FA},
{0x26FD, 0x26FD}, {0x2705, 0x2705}, {0x270A, 0x270B},
{0x2728, 0x2728}, {0x274C, 0x274C}, {0x274E, 0x274E},
{0x2753, 0x2755}, {0x2757, 0x2757}, {0x2795, 0x2797},
{0x27B0, 0x27B0}, {0x27BF, 0x27BF}, {0x2B1B, 0x2B1C},
{0x2B50, 0x2B50}, {0x2B55, 0x2B55}, {0x2E80, 0x2E99},
{0x2E9B, 0x2EF3}, {0x2F00, 0x2FD5}, {0x2FF0, 0x2FFB},
{0x3000, 0x303E}, {0x3041, 0x3096}, {0x3099, 0x30FF},
{0x3105, 0x312F}, {0x3131, 0x318E}, {0x3190, 0x31E3},
{0x31F0, 0x321E}, {0x3220, 0x3247}, {0x3250, 0x4DBF},
{0x4E00, 0xA48C}, {0xA490, 0xA4C6}, {0xA960, 0xA97C},
{0xAC00, 0xD7A3}, {0xF900, 0xFAFF}, {0xFE10, 0xFE19},
{0xFE30, 0xFE52}, {0xFE54, 0xFE66}, {0xFE68, 0xFE6B},
{0xFF01, 0xFF60}, {0xFFE0, 0xFFE6}, {0x16FE0, 0x16FE4},
{0x16FF0, 0x16FF1}, {0x17000, 0x187F7}, {0x18800, 0x18CD5},
{0x18D00, 0x18D08}, {0x1B000, 0x1B11E}, {0x1B150, 0x1B152},
{0x1B164, 0x1B167}, {0x1B170, 0x1B2FB}, {0x1F004, 0x1F004},
{0x1F0CF, 0x1F0CF}, {0x1F18E, 0x1F18E}, {0x1F191, 0x1F19A},
{0x1F200, 0x1F202}, {0x1F210, 0x1F23B}, {0x1F240, 0x1F248},
{0x1F250, 0x1F251}, {0x1F260, 0x1F265}, {0x1F300, 0x1F320},
{0x1F32D, 0x1F335}, {0x1F337, 0x1F37C}, {0x1F37E, 0x1F393},
{0x1F3A0, 0x1F3CA}, {0x1F3CF, 0x1F3D3}, {0x1F3E0, 0x1F3F0},
{0x1F3F4, 0x1F3F4}, {0x1F3F8, 0x1F43E}, {0x1F440, 0x1F440},
{0x1F442, 0x1F4FC}, {0x1F4FF, 0x1F53D}, {0x1F54B, 0x1F54E},
{0x1F550, 0x1F567}, {0x1F57A, 0x1F57A}, {0x1F595, 0x1F596},
{0x1F5A4, 0x1F5A4}, {0x1F5FB, 0x1F64F}, {0x1F680, 0x1F6C5},
{0x1F6CC, 0x1F6CC}, {0x1F6D0, 0x1F6D2}, {0x1F6D5, 0x1F6D7},
{0x1F6EB, 0x1F6EC}, {0x1F6F4, 0x1F6FC}, {0x1F7E0, 0x1F7EB},
{0x1F90C, 0x1F93A}, {0x1F93C, 0x1F945}, {0x1F947, 0x1F978},
{0x1F97A, 0x1F9CB}, {0x1F9CD, 0x1F9FF}, {0x1FA70, 0x1FA74},
{0x1FA78, 0x1FA7A}, {0x1FA80, 0x1FA86}, {0x1FA90, 0x1FAA8},
{0x1FAB0, 0x1FAB6}, {0x1FAC0, 0x1FAC2}, {0x1FAD0, 0x1FAD6},
{0x20000, 0x2FFFD}, {0x30000, 0x3FFFD},
}
var ambiguous = table{
{0x00A1, 0x00A1}, {0x00A4, 0x00A4}, {0x00A7, 0x00A8},
{0x00AA, 0x00AA}, {0x00AD, 0x00AE}, {0x00B0, 0x00B4},
{0x00B6, 0x00BA}, {0x00BC, 0x00BF}, {0x00C6, 0x00C6},
{0x00D0, 0x00D0}, {0x00D7, 0x00D8}, {0x00DE, 0x00E1},
{0x00E6, 0x00E6}, {0x00E8, 0x00EA}, {0x00EC, 0x00ED},
{0x00F0, 0x00F0}, {0x00F2, 0x00F3}, {0x00F7, 0x00FA},
{0x00FC, 0x00FC}, {0x00FE, 0x00FE}, {0x0101, 0x0101},
{0x0111, 0x0111}, {0x0113, 0x0113}, {0x011B, 0x011B},
{0x0126, 0x0127}, {0x012B, 0x012B}, {0x0131, 0x0133},
{0x0138, 0x0138}, {0x013F, 0x0142}, {0x0144, 0x0144},
{0x0148, 0x014B}, {0x014D, 0x014D}, {0x0152, 0x0153},
{0x0166, 0x0167}, {0x016B, 0x016B}, {0x01CE, 0x01CE},
{0x01D0, 0x01D0}, {0x01D2, 0x01D2}, {0x01D4, 0x01D4},
{0x01D6, 0x01D6}, {0x01D8, 0x01D8}, {0x01DA, 0x01DA},
{0x01DC, 0x01DC}, {0x0251, 0x0251}, {0x0261, 0x0261},
{0x02C4, 0x02C4}, {0x02C7, 0x02C7}, {0x02C9, 0x02CB},
{0x02CD, 0x02CD}, {0x02D0, 0x02D0}, {0x02D8, 0x02DB},
{0x02DD, 0x02DD}, {0x02DF, 0x02DF}, {0x0300, 0x036F},
{0x0391, 0x03A1}, {0x03A3, 0x03A9}, {0x03B1, 0x03C1},
{0x03C3, 0x03C9}, {0x0401, 0x0401}, {0x0410, 0x044F},
{0x0451, 0x0451}, {0x2010, 0x2010}, {0x2013, 0x2016},
{0x2018, 0x2019}, {0x201C, 0x201D}, {0x2020, 0x2022},
{0x2024, 0x2027}, {0x2030, 0x2030}, {0x2032, 0x2033},
{0x2035, 0x2035}, {0x203B, 0x203B}, {0x203E, 0x203E},
{0x2074, 0x2074}, {0x207F, 0x207F}, {0x2081, 0x2084},
{0x20AC, 0x20AC}, {0x2103, 0x2103}, {0x2105, 0x2105},
{0x2109, 0x2109}, {0x2113, 0x2113}, {0x2116, 0x2116},
{0x2121, 0x2122}, {0x2126, 0x2126}, {0x212B, 0x212B},
{0x2153, 0x2154}, {0x215B, 0x215E}, {0x2160, 0x216B},
{0x2170, 0x2179}, {0x2189, 0x2189}, {0x2190, 0x2199},
{0x21B8, 0x21B9}, {0x21D2, 0x21D2}, {0x21D4, 0x21D4},
{0x21E7, 0x21E7}, {0x2200, 0x2200}, {0x2202, 0x2203},
{0x2207, 0x2208}, {0x220B, 0x220B}, {0x220F, 0x220F},
{0x2211, 0x2211}, {0x2215, 0x2215}, {0x221A, 0x221A},
{0x221D, 0x2220}, {0x2223, 0x2223}, {0x2225, 0x2225},
{0x2227, 0x222C}, {0x222E, 0x222E}, {0x2234, 0x2237},
{0x223C, 0x223D}, {0x2248, 0x2248}, {0x224C, 0x224C},
{0x2252, 0x2252}, {0x2260, 0x2261}, {0x2264, 0x2267},
{0x226A, 0x226B}, {0x226E, 0x226F}, {0x2282, 0x2283},
{0x2286, 0x2287}, {0x2295, 0x2295}, {0x2299, 0x2299},
{0x22A5, 0x22A5}, {0x22BF, 0x22BF}, {0x2312, 0x2312},
{0x2460, 0x24E9}, {0x24EB, 0x254B}, {0x2550, 0x2573},
{0x2580, 0x258F}, {0x2592, 0x2595}, {0x25A0, 0x25A1},
{0x25A3, 0x25A9}, {0x25B2, 0x25B3}, {0x25B6, 0x25B7},
{0x25BC, 0x25BD}, {0x25C0, 0x25C1}, {0x25C6, 0x25C8},
{0x25CB, 0x25CB}, {0x25CE, 0x25D1}, {0x25E2, 0x25E5},
{0x25EF, 0x25EF}, {0x2605, 0x2606}, {0x2609, 0x2609},
{0x260E, 0x260F}, {0x261C, 0x261C}, {0x261E, 0x261E},
{0x2640, 0x2640}, {0x2642, 0x2642}, {0x2660, 0x2661},
{0x2663, 0x2665}, {0x2667, 0x266A}, {0x266C, 0x266D},
{0x266F, 0x266F}, {0x269E, 0x269F}, {0x26BF, 0x26BF},
{0x26C6, 0x26CD}, {0x26CF, 0x26D3}, {0x26D5, 0x26E1},
{0x26E3, 0x26E3}, {0x26E8, 0x26E9}, {0x26EB, 0x26F1},
{0x26F4, 0x26F4}, {0x26F6, 0x26F9}, {0x26FB, 0x26FC},
{0x26FE, 0x26FF}, {0x273D, 0x273D}, {0x2776, 0x277F},
{0x2B56, 0x2B59}, {0x3248, 0x324F}, {0xE000, 0xF8FF},
{0xFE00, 0xFE0F}, {0xFFFD, 0xFFFD}, {0x1F100, 0x1F10A},
{0x1F110, 0x1F12D}, {0x1F130, 0x1F169}, {0x1F170, 0x1F18D},
{0x1F18F, 0x1F190}, {0x1F19B, 0x1F1AC}, {0xE0100, 0xE01EF},
{0xF0000, 0xFFFFD}, {0x100000, 0x10FFFD},
}
var notassigned = table{
{0x27E6, 0x27ED}, {0x2985, 0x2986},
}
var neutral = table{
{0x0000, 0x001F}, {0x007F, 0x00A0}, {0x00A9, 0x00A9},
{0x00AB, 0x00AB}, {0x00B5, 0x00B5}, {0x00BB, 0x00BB},
{0x00C0, 0x00C5}, {0x00C7, 0x00CF}, {0x00D1, 0x00D6},
{0x00D9, 0x00DD}, {0x00E2, 0x00E5}, {0x00E7, 0x00E7},
{0x00EB, 0x00EB}, {0x00EE, 0x00EF}, {0x00F1, 0x00F1},
{0x00F4, 0x00F6}, {0x00FB, 0x00FB}, {0x00FD, 0x00FD},
{0x00FF, 0x0100}, {0x0102, 0x0110}, {0x0112, 0x0112},
{0x0114, 0x011A}, {0x011C, 0x0125}, {0x0128, 0x012A},
{0x012C, 0x0130}, {0x0134, 0x0137}, {0x0139, 0x013E},
{0x0143, 0x0143}, {0x0145, 0x0147}, {0x014C, 0x014C},
{0x014E, 0x0151}, {0x0154, 0x0165}, {0x0168, 0x016A},
{0x016C, 0x01CD}, {0x01CF, 0x01CF}, {0x01D1, 0x01D1},
{0x01D3, 0x01D3}, {0x01D5, 0x01D5}, {0x01D7, 0x01D7},
{0x01D9, 0x01D9}, {0x01DB, 0x01DB}, {0x01DD, 0x0250},
{0x0252, 0x0260}, {0x0262, 0x02C3}, {0x02C5, 0x02C6},
{0x02C8, 0x02C8}, {0x02CC, 0x02CC}, {0x02CE, 0x02CF},
{0x02D1, 0x02D7}, {0x02DC, 0x02DC}, {0x02DE, 0x02DE},
{0x02E0, 0x02FF}, {0x0370, 0x0377}, {0x037A, 0x037F},
{0x0384, 0x038A}, {0x038C, 0x038C}, {0x038E, 0x0390},
{0x03AA, 0x03B0}, {0x03C2, 0x03C2}, {0x03CA, 0x0400},
{0x0402, 0x040F}, {0x0450, 0x0450}, {0x0452, 0x052F},
{0x0531, 0x0556}, {0x0559, 0x058A}, {0x058D, 0x058F},
{0x0591, 0x05C7}, {0x05D0, 0x05EA}, {0x05EF, 0x05F4},
{0x0600, 0x061C}, {0x061E, 0x070D}, {0x070F, 0x074A},
{0x074D, 0x07B1}, {0x07C0, 0x07FA}, {0x07FD, 0x082D},
{0x0830, 0x083E}, {0x0840, 0x085B}, {0x085E, 0x085E},
{0x0860, 0x086A}, {0x08A0, 0x08B4}, {0x08B6, 0x08C7},
{0x08D3, 0x0983}, {0x0985, 0x098C}, {0x098F, 0x0990},
{0x0993, 0x09A8}, {0x09AA, 0x09B0}, {0x09B2, 0x09B2},
{0x09B6, 0x09B9}, {0x09BC, 0x09C4}, {0x09C7, 0x09C8},
{0x09CB, 0x09CE}, {0x09D7, 0x09D7}, {0x09DC, 0x09DD},
{0x09DF, 0x09E3}, {0x09E6, 0x09FE}, {0x0A01, 0x0A03},
{0x0A05, 0x0A0A}, {0x0A0F, 0x0A10}, {0x0A13, 0x0A28},
{0x0A2A, 0x0A30}, {0x0A32, 0x0A33}, {0x0A35, 0x0A36},
{0x0A38, 0x0A39}, {0x0A3C, 0x0A3C}, {0x0A3E, 0x0A42},
{0x0A47, 0x0A48}, {0x0A4B, 0x0A4D}, {0x0A51, 0x0A51},
{0x0A59, 0x0A5C}, {0x0A5E, 0x0A5E}, {0x0A66, 0x0A76},
{0x0A81, 0x0A83}, {0x0A85, 0x0A8D}, {0x0A8F, 0x0A91},
{0x0A93, 0x0AA8}, {0x0AAA, 0x0AB0}, {0x0AB2, 0x0AB3},
{0x0AB5, 0x0AB9}, {0x0ABC, 0x0AC5}, {0x0AC7, 0x0AC9},
{0x0ACB, 0x0ACD}, {0x0AD0, 0x0AD0}, {0x0AE0, 0x0AE3},
{0x0AE6, 0x0AF1}, {0x0AF9, 0x0AFF}, {0x0B01, 0x0B03},
{0x0B05, 0x0B0C}, {0x0B0F, 0x0B10}, {0x0B13, 0x0B28},
{0x0B2A, 0x0B30}, {0x0B32, 0x0B33}, {0x0B35, 0x0B39},
{0x0B3C, 0x0B44}, {0x0B47, 0x0B48}, {0x0B4B, 0x0B4D},
{0x0B55, 0x0B57}, {0x0B5C, 0x0B5D}, {0x0B5F, 0x0B63},
{0x0B66, 0x0B77}, {0x0B82, 0x0B83}, {0x0B85, 0x0B8A},
{0x0B8E, 0x0B90}, {0x0B92, 0x0B95}, {0x0B99, 0x0B9A},
{0x0B9C, 0x0B9C}, {0x0B9E, 0x0B9F}, {0x0BA3, 0x0BA4},
{0x0BA8, 0x0BAA}, {0x0BAE, 0x0BB9}, {0x0BBE, 0x0BC2},
{0x0BC6, 0x0BC8}, {0x0BCA, 0x0BCD}, {0x0BD0, 0x0BD0},
{0x0BD7, 0x0BD7}, {0x0BE6, 0x0BFA}, {0x0C00, 0x0C0C},
{0x0C0E, 0x0C10}, {0x0C12, 0x0C28}, {0x0C2A, 0x0C39},
{0x0C3D, 0x0C44}, {0x0C46, 0x0C48}, {0x0C4A, 0x0C4D},
{0x0C55, 0x0C56}, {0x0C58, 0x0C5A}, {0x0C60, 0x0C63},
{0x0C66, 0x0C6F}, {0x0C77, 0x0C8C}, {0x0C8E, 0x0C90},
{0x0C92, 0x0CA8}, {0x0CAA, 0x0CB3}, {0x0CB5, 0x0CB9},
{0x0CBC, 0x0CC4}, {0x0CC6, 0x0CC8}, {0x0CCA, 0x0CCD},
{0x0CD5, 0x0CD6}, {0x0CDE, 0x0CDE}, {0x0CE0, 0x0CE3},
{0x0CE6, 0x0CEF}, {0x0CF1, 0x0CF2}, {0x0D00, 0x0D0C},
{0x0D0E, 0x0D10}, {0x0D12, 0x0D44}, {0x0D46, 0x0D48},
{0x0D4A, 0x0D4F}, {0x0D54, 0x0D63}, {0x0D66, 0x0D7F},
{0x0D81, 0x0D83}, {0x0D85, 0x0D96}, {0x0D9A, 0x0DB1},
{0x0DB3, 0x0DBB}, {0x0DBD, 0x0DBD}, {0x0DC0, 0x0DC6},
{0x0DCA, 0x0DCA}, {0x0DCF, 0x0DD4}, {0x0DD6, 0x0DD6},
{0x0DD8, 0x0DDF}, {0x0DE6, 0x0DEF}, {0x0DF2, 0x0DF4},
{0x0E01, 0x0E3A}, {0x0E3F, 0x0E5B}, {0x0E81, 0x0E82},
{0x0E84, 0x0E84}, {0x0E86, 0x0E8A}, {0x0E8C, 0x0EA3},
{0x0EA5, 0x0EA5}, {0x0EA7, 0x0EBD}, {0x0EC0, 0x0EC4},
{0x0EC6, 0x0EC6}, {0x0EC8, 0x0ECD}, {0x0ED0, 0x0ED9},
{0x0EDC, 0x0EDF}, {0x0F00, 0x0F47}, {0x0F49, 0x0F6C},
{0x0F71, 0x0F97}, {0x0F99, 0x0FBC}, {0x0FBE, 0x0FCC},
{0x0FCE, 0x0FDA}, {0x1000, 0x10C5}, {0x10C7, 0x10C7},
{0x10CD, 0x10CD}, {0x10D0, 0x10FF}, {0x1160, 0x1248},
{0x124A, 0x124D}, {0x1250, 0x1256}, {0x1258, 0x1258},
{0x125A, 0x125D}, {0x1260, 0x1288}, {0x128A, 0x128D},
{0x1290, 0x12B0}, {0x12B2, 0x12B5}, {0x12B8, 0x12BE},
{0x12C0, 0x12C0}, {0x12C2, 0x12C5}, {0x12C8, 0x12D6},
{0x12D8, 0x1310}, {0x1312, 0x1315}, {0x1318, 0x135A},
{0x135D, 0x137C}, {0x1380, 0x1399}, {0x13A0, 0x13F5},
{0x13F8, 0x13FD}, {0x1400, 0x169C}, {0x16A0, 0x16F8},
{0x1700, 0x170C}, {0x170E, 0x1714}, {0x1720, 0x1736},
{0x1740, 0x1753}, {0x1760, 0x176C}, {0x176E, 0x1770},
{0x1772, 0x1773}, {0x1780, 0x17DD}, {0x17E0, 0x17E9},
{0x17F0, 0x17F9}, {0x1800, 0x180E}, {0x1810, 0x1819},
{0x1820, 0x1878}, {0x1880, 0x18AA}, {0x18B0, 0x18F5},
{0x1900, 0x191E}, {0x1920, 0x192B}, {0x1930, 0x193B},
{0x1940, 0x1940}, {0x1944, 0x196D}, {0x1970, 0x1974},
{0x1980, 0x19AB}, {0x19B0, 0x19C9}, {0x19D0, 0x19DA},
{0x19DE, 0x1A1B}, {0x1A1E, 0x1A5E}, {0x1A60, 0x1A7C},
{0x1A7F, 0x1A89}, {0x1A90, 0x1A99}, {0x1AA0, 0x1AAD},
{0x1AB0, 0x1AC0}, {0x1B00, 0x1B4B}, {0x1B50, 0x1B7C},
{0x1B80, 0x1BF3}, {0x1BFC, 0x1C37}, {0x1C3B, 0x1C49},
{0x1C4D, 0x1C88}, {0x1C90, 0x1CBA}, {0x1CBD, 0x1CC7},
{0x1CD0, 0x1CFA}, {0x1D00, 0x1DF9}, {0x1DFB, 0x1F15},
{0x1F18, 0x1F1D}, {0x1F20, 0x1F45}, {0x1F48, 0x1F4D},
{0x1F50, 0x1F57}, {0x1F59, 0x1F59}, {0x1F5B, 0x1F5B},
{0x1F5D, 0x1F5D}, {0x1F5F, 0x1F7D}, {0x1F80, 0x1FB4},
{0x1FB6, 0x1FC4}, {0x1FC6, 0x1FD3}, {0x1FD6, 0x1FDB},
{0x1FDD, 0x1FEF}, {0x1FF2, 0x1FF4}, {0x1FF6, 0x1FFE},
{0x2000, 0x200F}, {0x2011, 0x2012}, {0x2017, 0x2017},
{0x201A, 0x201B}, {0x201E, 0x201F}, {0x2023, 0x2023},
{0x2028, 0x202F}, {0x2031, 0x2031}, {0x2034, 0x2034},
{0x2036, 0x203A}, {0x203C, 0x203D}, {0x203F, 0x2064},
{0x2066, 0x2071}, {0x2075, 0x207E}, {0x2080, 0x2080},
{0x2085, 0x208E}, {0x2090, 0x209C}, {0x20A0, 0x20A8},
{0x20AA, 0x20AB}, {0x20AD, 0x20BF}, {0x20D0, 0x20F0},
{0x2100, 0x2102}, {0x2104, 0x2104}, {0x2106, 0x2108},
{0x210A, 0x2112}, {0x2114, 0x2115}, {0x2117, 0x2120},
{0x2123, 0x2125}, {0x2127, 0x212A}, {0x212C, 0x2152},
{0x2155, 0x215A}, {0x215F, 0x215F}, {0x216C, 0x216F},
{0x217A, 0x2188}, {0x218A, 0x218B}, {0x219A, 0x21B7},
{0x21BA, 0x21D1}, {0x21D3, 0x21D3}, {0x21D5, 0x21E6},
{0x21E8, 0x21FF}, {0x2201, 0x2201}, {0x2204, 0x2206},
{0x2209, 0x220A}, {0x220C, 0x220E}, {0x2210, 0x2210},
{0x2212, 0x2214}, {0x2216, 0x2219}, {0x221B, 0x221C},
{0x2221, 0x2222}, {0x2224, 0x2224}, {0x2226, 0x2226},
{0x222D, 0x222D}, {0x222F, 0x2233}, {0x2238, 0x223B},
{0x223E, 0x2247}, {0x2249, 0x224B}, {0x224D, 0x2251},
{0x2253, 0x225F}, {0x2262, 0x2263}, {0x2268, 0x2269},
{0x226C, 0x226D}, {0x2270, 0x2281}, {0x2284, 0x2285},
{0x2288, 0x2294}, {0x2296, 0x2298}, {0x229A, 0x22A4},
{0x22A6, 0x22BE}, {0x22C0, 0x2311}, {0x2313, 0x2319},
{0x231C, 0x2328}, {0x232B, 0x23E8}, {0x23ED, 0x23EF},
{0x23F1, 0x23F2}, {0x23F4, 0x2426}, {0x2440, 0x244A},
{0x24EA, 0x24EA}, {0x254C, 0x254F}, {0x2574, 0x257F},
{0x2590, 0x2591}, {0x2596, 0x259F}, {0x25A2, 0x25A2},
{0x25AA, 0x25B1}, {0x25B4, 0x25B5}, {0x25B8, 0x25BB},
{0x25BE, 0x25BF}, {0x25C2, 0x25C5}, {0x25C9, 0x25CA},
{0x25CC, 0x25CD}, {0x25D2, 0x25E1}, {0x25E6, 0x25EE},
{0x25F0, 0x25FC}, {0x25FF, 0x2604}, {0x2607, 0x2608},
{0x260A, 0x260D}, {0x2610, 0x2613}, {0x2616, 0x261B},
{0x261D, 0x261D}, {0x261F, 0x263F}, {0x2641, 0x2641},
{0x2643, 0x2647}, {0x2654, 0x265F}, {0x2662, 0x2662},
{0x2666, 0x2666}, {0x266B, 0x266B}, {0x266E, 0x266E},
{0x2670, 0x267E}, {0x2680, 0x2692}, {0x2694, 0x269D},
{0x26A0, 0x26A0}, {0x26A2, 0x26A9}, {0x26AC, 0x26BC},
{0x26C0, 0x26C3}, {0x26E2, 0x26E2}, {0x26E4, 0x26E7},
{0x2700, 0x2704}, {0x2706, 0x2709}, {0x270C, 0x2727},
{0x2729, 0x273C}, {0x273E, 0x274B}, {0x274D, 0x274D},
{0x274F, 0x2752}, {0x2756, 0x2756}, {0x2758, 0x2775},
{0x2780, 0x2794}, {0x2798, 0x27AF}, {0x27B1, 0x27BE},
{0x27C0, 0x27E5}, {0x27EE, 0x2984}, {0x2987, 0x2B1A},
{0x2B1D, 0x2B4F}, {0x2B51, 0x2B54}, {0x2B5A, 0x2B73},
{0x2B76, 0x2B95}, {0x2B97, 0x2C2E}, {0x2C30, 0x2C5E},
{0x2C60, 0x2CF3}, {0x2CF9, 0x2D25}, {0x2D27, 0x2D27},
{0x2D2D, 0x2D2D}, {0x2D30, 0x2D67}, {0x2D6F, 0x2D70},
{0x2D7F, 0x2D96}, {0x2DA0, 0x2DA6}, {0x2DA8, 0x2DAE},
{0x2DB0, 0x2DB6}, {0x2DB8, 0x2DBE}, {0x2DC0, 0x2DC6},
{0x2DC8, 0x2DCE}, {0x2DD0, 0x2DD6}, {0x2DD8, 0x2DDE},
{0x2DE0, 0x2E52}, {0x303F, 0x303F}, {0x4DC0, 0x4DFF},
{0xA4D0, 0xA62B}, {0xA640, 0xA6F7}, {0xA700, 0xA7BF},
{0xA7C2, 0xA7CA}, {0xA7F5, 0xA82C}, {0xA830, 0xA839},
{0xA840, 0xA877}, {0xA880, 0xA8C5}, {0xA8CE, 0xA8D9},
{0xA8E0, 0xA953}, {0xA95F, 0xA95F}, {0xA980, 0xA9CD},
{0xA9CF, 0xA9D9}, {0xA9DE, 0xA9FE}, {0xAA00, 0xAA36},
{0xAA40, 0xAA4D}, {0xAA50, 0xAA59}, {0xAA5C, 0xAAC2},
{0xAADB, 0xAAF6}, {0xAB01, 0xAB06}, {0xAB09, 0xAB0E},
{0xAB11, 0xAB16}, {0xAB20, 0xAB26}, {0xAB28, 0xAB2E},
{0xAB30, 0xAB6B}, {0xAB70, 0xABED}, {0xABF0, 0xABF9},
{0xD7B0, 0xD7C6}, {0xD7CB, 0xD7FB}, {0xD800, 0xDFFF},
{0xFB00, 0xFB06}, {0xFB13, 0xFB17}, {0xFB1D, 0xFB36},
{0xFB38, 0xFB3C}, {0xFB3E, 0xFB3E}, {0xFB40, 0xFB41},
{0xFB43, 0xFB44}, {0xFB46, 0xFBC1}, {0xFBD3, 0xFD3F},
{0xFD50, 0xFD8F}, {0xFD92, 0xFDC7}, {0xFDF0, 0xFDFD},
{0xFE20, 0xFE2F}, {0xFE70, 0xFE74}, {0xFE76, 0xFEFC},
{0xFEFF, 0xFEFF}, {0xFFF9, 0xFFFC}, {0x10000, 0x1000B},
{0x1000D, 0x10026}, {0x10028, 0x1003A}, {0x1003C, 0x1003D},
{0x1003F, 0x1004D}, {0x10050, 0x1005D}, {0x10080, 0x100FA},
{0x10100, 0x10102}, {0x10107, 0x10133}, {0x10137, 0x1018E},
{0x10190, 0x1019C}, {0x101A0, 0x101A0}, {0x101D0, 0x101FD},
{0x10280, 0x1029C}, {0x102A0, 0x102D0}, {0x102E0, 0x102FB},
{0x10300, 0x10323}, {0x1032D, 0x1034A}, {0x10350, 0x1037A},
{0x10380, 0x1039D}, {0x1039F, 0x103C3}, {0x103C8, 0x103D5},
{0x10400, 0x1049D}, {0x104A0, 0x104A9}, {0x104B0, 0x104D3},
{0x104D8, 0x104FB}, {0x10500, 0x10527}, {0x10530, 0x10563},
{0x1056F, 0x1056F}, {0x10600, 0x10736}, {0x10740, 0x10755},
{0x10760, 0x10767}, {0x10800, 0x10805}, {0x10808, 0x10808},
{0x1080A, 0x10835}, {0x10837, 0x10838}, {0x1083C, 0x1083C},
{0x1083F, 0x10855}, {0x10857, 0x1089E}, {0x108A7, 0x108AF},
{0x108E0, 0x108F2}, {0x108F4, 0x108F5}, {0x108FB, 0x1091B},
{0x1091F, 0x10939}, {0x1093F, 0x1093F}, {0x10980, 0x109B7},
{0x109BC, 0x109CF}, {0x109D2, 0x10A03}, {0x10A05, 0x10A06},
{0x10A0C, 0x10A13}, {0x10A15, 0x10A17}, {0x10A19, 0x10A35},
{0x10A38, 0x10A3A}, {0x10A3F, 0x10A48}, {0x10A50, 0x10A58},
{0x10A60, 0x10A9F}, {0x10AC0, 0x10AE6}, {0x10AEB, 0x10AF6},
{0x10B00, 0x10B35}, {0x10B39, 0x10B55}, {0x10B58, 0x10B72},
{0x10B78, 0x10B91}, {0x10B99, 0x10B9C}, {0x10BA9, 0x10BAF},
{0x10C00, 0x10C48}, {0x10C80, 0x10CB2}, {0x10CC0, 0x10CF2},
{0x10CFA, 0x10D27}, {0x10D30, 0x10D39}, {0x10E60, 0x10E7E},
{0x10E80, 0x10EA9}, {0x10EAB, 0x10EAD}, {0x10EB0, 0x10EB1},
{0x10F00, 0x10F27}, {0x10F30, 0x10F59}, {0x10FB0, 0x10FCB},
{0x10FE0, 0x10FF6}, {0x11000, 0x1104D}, {0x11052, 0x1106F},
{0x1107F, 0x110C1}, {0x110CD, 0x110CD}, {0x110D0, 0x110E8},
{0x110F0, 0x110F9}, {0x11100, 0x11134}, {0x11136, 0x11147},
{0x11150, 0x11176}, {0x11180, 0x111DF}, {0x111E1, 0x111F4},
{0x11200, 0x11211}, {0x11213, 0x1123E}, {0x11280, 0x11286},
{0x11288, 0x11288}, {0x1128A, 0x1128D}, {0x1128F, 0x1129D},
{0x1129F, 0x112A9}, {0x112B0, 0x112EA}, {0x112F0, 0x112F9},
{0x11300, 0x11303}, {0x11305, 0x1130C}, {0x1130F, 0x11310},
{0x11313, 0x11328}, {0x1132A, 0x11330}, {0x11332, 0x11333},
{0x11335, 0x11339}, {0x1133B, 0x11344}, {0x11347, 0x11348},
{0x1134B, 0x1134D}, {0x11350, 0x11350}, {0x11357, 0x11357},
{0x1135D, 0x11363}, {0x11366, 0x1136C}, {0x11370, 0x11374},
{0x11400, 0x1145B}, {0x1145D, 0x11461}, {0x11480, 0x114C7},
{0x114D0, 0x114D9}, {0x11580, 0x115B5}, {0x115B8, 0x115DD},
{0x11600, 0x11644}, {0x11650, 0x11659}, {0x11660, 0x1166C},
{0x11680, 0x116B8}, {0x116C0, 0x116C9}, {0x11700, 0x1171A},
{0x1171D, 0x1172B}, {0x11730, 0x1173F}, {0x11800, 0x1183B},
{0x118A0, 0x118F2}, {0x118FF, 0x11906}, {0x11909, 0x11909},
{0x1190C, 0x11913}, {0x11915, 0x11916}, {0x11918, 0x11935},
{0x11937, 0x11938}, {0x1193B, 0x11946}, {0x11950, 0x11959},
{0x119A0, 0x119A7}, {0x119AA, 0x119D7}, {0x119DA, 0x119E4},
{0x11A00, 0x11A47}, {0x11A50, 0x11AA2}, {0x11AC0, 0x11AF8},
{0x11C00, 0x11C08}, {0x11C0A, 0x11C36}, {0x11C38, 0x11C45},
{0x11C50, 0x11C6C}, {0x11C70, 0x11C8F}, {0x11C92, 0x11CA7},
{0x11CA9, 0x11CB6}, {0x11D00, 0x11D06}, {0x11D08, 0x11D09},
{0x11D0B, 0x11D36}, {0x11D3A, 0x11D3A}, {0x11D3C, 0x11D3D},
{0x11D3F, 0x11D47}, {0x11D50, 0x11D59}, {0x11D60, 0x11D65},
{0x11D67, 0x11D68}, {0x11D6A, 0x11D8E}, {0x11D90, 0x11D91},
{0x11D93, 0x11D98}, {0x11DA0, 0x11DA9}, {0x11EE0, 0x11EF8},
{0x11FB0, 0x11FB0}, {0x11FC0, 0x11FF1}, {0x11FFF, 0x12399},
{0x12400, 0x1246E}, {0x12470, 0x12474}, {0x12480, 0x12543},
{0x13000, 0x1342E}, {0x13430, 0x13438}, {0x14400, 0x14646},
{0x16800, 0x16A38}, {0x16A40, 0x16A5E}, {0x16A60, 0x16A69},
{0x16A6E, 0x16A6F}, {0x16AD0, 0x16AED}, {0x16AF0, 0x16AF5},
{0x16B00, 0x16B45}, {0x16B50, 0x16B59}, {0x16B5B, 0x16B61},
{0x16B63, 0x16B77}, {0x16B7D, 0x16B8F}, {0x16E40, 0x16E9A},
{0x16F00, 0x16F4A}, {0x16F4F, 0x16F87}, {0x16F8F, 0x16F9F},
{0x1BC00, 0x1BC6A}, {0x1BC70, 0x1BC7C}, {0x1BC80, 0x1BC88},
{0x1BC90, 0x1BC99}, {0x1BC9C, 0x1BCA3}, {0x1D000, 0x1D0F5},
{0x1D100, 0x1D126}, {0x1D129, 0x1D1E8}, {0x1D200, 0x1D245},
{0x1D2E0, 0x1D2F3}, {0x1D300, 0x1D356}, {0x1D360, 0x1D378},
{0x1D400, 0x1D454}, {0x1D456, 0x1D49C}, {0x1D49E, 0x1D49F},
{0x1D4A2, 0x1D4A2}, {0x1D4A5, 0x1D4A6}, {0x1D4A9, 0x1D4AC},
{0x1D4AE, 0x1D4B9}, {0x1D4BB, 0x1D4BB}, {0x1D4BD, 0x1D4C3},
{0x1D4C5, 0x1D505}, {0x1D507, 0x1D50A}, {0x1D50D, 0x1D514},
{0x1D516, 0x1D51C}, {0x1D51E, 0x1D539}, {0x1D53B, 0x1D53E},
{0x1D540, 0x1D544}, {0x1D546, 0x1D546}, {0x1D54A, 0x1D550},
{0x1D552, 0x1D6A5}, {0x1D6A8, 0x1D7CB}, {0x1D7CE, 0x1DA8B},
{0x1DA9B, 0x1DA9F}, {0x1DAA1, 0x1DAAF}, {0x1E000, 0x1E006},
{0x1E008, 0x1E018}, {0x1E01B, 0x1E021}, {0x1E023, 0x1E024},
{0x1E026, 0x1E02A}, {0x1E100, 0x1E12C}, {0x1E130, 0x1E13D},
{0x1E140, 0x1E149}, {0x1E14E, 0x1E14F}, {0x1E2C0, 0x1E2F9},
{0x1E2FF, 0x1E2FF}, {0x1E800, 0x1E8C4}, {0x1E8C7, 0x1E8D6},
{0x1E900, 0x1E94B}, {0x1E950, 0x1E959}, {0x1E95E, 0x1E95F},
{0x1EC71, 0x1ECB4}, {0x1ED01, 0x1ED3D}, {0x1EE00, 0x1EE03},
{0x1EE05, 0x1EE1F}, {0x1EE21, 0x1EE22}, {0x1EE24, 0x1EE24},
{0x1EE27, 0x1EE27}, {0x1EE29, 0x1EE32}, {0x1EE34, 0x1EE37},
{0x1EE39, 0x1EE39}, {0x1EE3B, 0x1EE3B}, {0x1EE42, 0x1EE42},
{0x1EE47, 0x1EE47}, {0x1EE49, 0x1EE49}, {0x1EE4B, 0x1EE4B},
{0x1EE4D, 0x1EE4F}, {0x1EE51, 0x1EE52}, {0x1EE54, 0x1EE54},
{0x1EE57, 0x1EE57}, {0x1EE59, 0x1EE59}, {0x1EE5B, 0x1EE5B},
{0x1EE5D, 0x1EE5D}, {0x1EE5F, 0x1EE5F}, {0x1EE61, 0x1EE62},
{0x1EE64, 0x1EE64}, {0x1EE67, 0x1EE6A}, {0x1EE6C, 0x1EE72},
{0x1EE74, 0x1EE77}, {0x1EE79, 0x1EE7C}, {0x1EE7E, 0x1EE7E},
{0x1EE80, 0x1EE89}, {0x1EE8B, 0x1EE9B}, {0x1EEA1, 0x1EEA3},
{0x1EEA5, 0x1EEA9}, {0x1EEAB, 0x1EEBB}, {0x1EEF0, 0x1EEF1},
{0x1F000, 0x1F003}, {0x1F005, 0x1F02B}, {0x1F030, 0x1F093},
{0x1F0A0, 0x1F0AE}, {0x1F0B1, 0x1F0BF}, {0x1F0C1, 0x1F0CE},
{0x1F0D1, 0x1F0F5}, {0x1F10B, 0x1F10F}, {0x1F12E, 0x1F12F},
{0x1F16A, 0x1F16F}, {0x1F1AD, 0x1F1AD}, {0x1F1E6, 0x1F1FF},
{0x1F321, 0x1F32C}, {0x1F336, 0x1F336}, {0x1F37D, 0x1F37D},
{0x1F394, 0x1F39F}, {0x1F3CB, 0x1F3CE}, {0x1F3D4, 0x1F3DF},
{0x1F3F1, 0x1F3F3}, {0x1F3F5, 0x1F3F7}, {0x1F43F, 0x1F43F},
{0x1F441, 0x1F441}, {0x1F4FD, 0x1F4FE}, {0x1F53E, 0x1F54A},
{0x1F54F, 0x1F54F}, {0x1F568, 0x1F579}, {0x1F57B, 0x1F594},
{0x1F597, 0x1F5A3}, {0x1F5A5, 0x1F5FA}, {0x1F650, 0x1F67F},
{0x1F6C6, 0x1F6CB}, {0x1F6CD, 0x1F6CF}, {0x1F6D3, 0x1F6D4},
{0x1F6E0, 0x1F6EA}, {0x1F6F0, 0x1F6F3}, {0x1F700, 0x1F773},
{0x1F780, 0x1F7D8}, {0x1F800, 0x1F80B}, {0x1F810, 0x1F847},
{0x1F850, 0x1F859}, {0x1F860, 0x1F887}, {0x1F890, 0x1F8AD},
{0x1F8B0, 0x1F8B1}, {0x1F900, 0x1F90B}, {0x1F93B, 0x1F93B},
{0x1F946, 0x1F946}, {0x1FA00, 0x1FA53}, {0x1FA60, 0x1FA6D},
{0x1FB00, 0x1FB92}, {0x1FB94, 0x1FBCA}, {0x1FBF0, 0x1FBF9},
{0xE0001, 0xE0001}, {0xE0020, 0xE007F},
}
var emoji = table{
{0x203C, 0x203C}, {0x2049, 0x2049}, {0x2122, 0x2122},
{0x2139, 0x2139}, {0x2194, 0x2199}, {0x21A9, 0x21AA},
{0x231A, 0x231B}, {0x2328, 0x2328}, {0x2388, 0x2388},
{0x23CF, 0x23CF}, {0x23E9, 0x23F3}, {0x23F8, 0x23FA},
{0x24C2, 0x24C2}, {0x25AA, 0x25AB}, {0x25B6, 0x25B6},
{0x25C0, 0x25C0}, {0x25FB, 0x25FE}, {0x2600, 0x2605},
{0x2607, 0x2612}, {0x2614, 0x2685}, {0x2690, 0x2705},
{0x2708, 0x2712}, {0x2714, 0x2714}, {0x2716, 0x2716},
{0x271D, 0x271D}, {0x2721, 0x2721}, {0x2728, 0x2728},
{0x2733, 0x2734}, {0x2744, 0x2744}, {0x2747, 0x2747},
{0x274C, 0x274C}, {0x274E, 0x274E}, {0x2753, 0x2755},
{0x2757, 0x2757}, {0x2763, 0x2767}, {0x2795, 0x2797},
{0x27A1, 0x27A1}, {0x27B0, 0x27B0}, {0x27BF, 0x27BF},
{0x2934, 0x2935}, {0x2B05, 0x2B07}, {0x2B1B, 0x2B1C},
{0x2B50, 0x2B50}, {0x2B55, 0x2B55}, {0x3030, 0x3030},
{0x303D, 0x303D}, {0x3297, 0x3297}, {0x3299, 0x3299},
{0x1F000, 0x1F0FF}, {0x1F10D, 0x1F10F}, {0x1F12F, 0x1F12F},
{0x1F16C, 0x1F171}, {0x1F17E, 0x1F17F}, {0x1F18E, 0x1F18E},
{0x1F191, 0x1F19A}, {0x1F1AD, 0x1F1E5}, {0x1F201, 0x1F20F},
{0x1F21A, 0x1F21A}, {0x1F22F, 0x1F22F}, {0x1F232, 0x1F23A},
{0x1F23C, 0x1F23F}, {0x1F249, 0x1F3FA}, {0x1F400, 0x1F53D},
{0x1F546, 0x1F64F}, {0x1F680, 0x1F6FF}, {0x1F774, 0x1F77F},
{0x1F7D5, 0x1F7FF}, {0x1F80C, 0x1F80F}, {0x1F848, 0x1F84F},
{0x1F85A, 0x1F85F}, {0x1F888, 0x1F88F}, {0x1F8AE, 0x1F8FF},
{0x1F90C, 0x1F93A}, {0x1F93C, 0x1F945}, {0x1F947, 0x1FAFF},
{0x1FC00, 0x1FFFD},
}
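The lookup side of these tables is a binary search over sorted, non-overlapping rune intervals. A minimal sketch, assuming a `table` of `{first, last}` intervals like the ones above (`inTable` is an illustrative name, not necessarily the library's):

```go
package main

import "fmt"

type interval struct{ first, last rune }
type table []interval

// inTable reports whether r falls inside one of the sorted,
// non-overlapping intervals using binary search.
func inTable(r rune, t table) bool {
    lo, hi := 0, len(t)-1
    for lo <= hi {
        mid := (lo + hi) / 2
        switch {
        case r < t[mid].first:
            hi = mid - 1
        case r > t[mid].last:
            lo = mid + 1
        default:
            return true
        }
    }
    return false
}

func main() {
    emoji := table{{0x2B50, 0x2B50}, {0x1F000, 0x1F0FF}}
    fmt.Println(inTable(0x2B50, emoji)) // white medium star: in the table
    fmt.Println(inTable('A', emoji))    // plain ASCII: not in the table
}
```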


@@ -1,28 +0,0 @@
// +build windows
// +build !appengine

package runewidth

import (
    "syscall"
)

var (
    kernel32               = syscall.NewLazyDLL("kernel32")
    procGetConsoleOutputCP = kernel32.NewProc("GetConsoleOutputCP")
)

// IsEastAsian returns true if the current locale is CJK.
func IsEastAsian() bool {
    r1, _, _ := procGetConsoleOutputCP.Call()
    if r1 == 0 {
        return false
    }
    switch int(r1) {
    case 932, 51932, 936, 949, 950:
        return true
    }
    return false
}


@@ -1,2 +0,0 @@
# Exclude MacOS attribute files.
.DS_Store


@@ -1,17 +0,0 @@
language: go

go:
  - 1.14.x
  - 1.15.x
  - stable

script:
  - go get -t ./...
  - go get -u golang.org/x/lint/golint
  - go test ./...
  - CGO_ENABLED=1 go test -race ./...
  - go vet ./...
  - diff -u <(echo -n) <(gofmt -d -s .)
  - diff -u <(echo -n) <(./internal/scripts/autogen_licences.sh .)
  - diff -u <(echo -n) <(golint ./...)

env:
  global:
    - CGO_ENABLED=0


@@ -1,361 +0,0 @@
# Changelog
All notable changes to this project are documented here.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
## [0.12.2] - 31-Aug-2020
### Fixed
- advanced the CI Go versions up to Go 1.15.
- fixed the build status badge to correctly point to travis-ci.com instead of
travis-ci.org.
## [0.12.1] - 20-Jun-2020
### Fixed
- the `tcell` unit test can now pass in headless mode (when TERM="") which
happens under bazel.
- switched the coveralls integration to the GitHub application.
## [0.12.0] - 10-Apr-2020
### Added
- Migrating to [Go modules](https://blog.golang.org/using-go-modules).
- Renamed directory `internal` to `private` so that external widget development
is possible. Noted in
[README.md](https://github.com/mum4k/termdash/blob/master/README.md) that packages in the
`private` directory don't have any API stability guarantee.
## [0.11.0] - 7-Mar-2020
#### Breaking API changes
- Termdash now requires at least Go version 1.11.
### Added
- New [`tcell`](https://github.com/gdamore/tcell) based terminal implementation
which implements the `terminalapi.Terminal` interface.
- tcell implementation supports two initialization `Option`s:
- `ColorMode` the terminal color output mode (defaults to 256 color mode)
- `ClearStyle` the foreground and background color style to use when clearing
the screen (defaults to the global ColorDefault for both foreground and
background)
### Fixed
- Improved test coverage of the `Gauge` widget.
## [0.10.0] - 5-Jun-2019
### Added
- Added `time.Duration` based `ValueFormatter` for the `LineChart` Y-axis labels.
- Added round and suffix `ValueFormatter` for the `LineChart` Y-axis labels.
- Added decimal and suffix `ValueFormatter` for the `LineChart` Y-axis labels.
- Added a `container.SplitOption` that allows fixed size container splits.
- Added `grid` functions that allow fixed size rows and columns.
### Changed
- The `LineChart` can format the labels on the Y-axis with a `ValueFormatter`.
- The `SegmentDisplay` can now display dots and colons ('.' and ':').
- The `Donut` widget now guarantees spacing between the donut and its label.
- The continuous build on Travis CI now builds with cgo explicitly disabled to
ensure both Termdash and its dependencies use pure Go.
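The `time.Duration` based `ValueFormatter` mentioned above converts a raw Y-axis value into its label text. A rough sketch of the idea, assuming values represent seconds (the type and function names here are illustrative, not the exact termdash API):

```go
package main

import (
    "fmt"
    "time"
)

// ValueFormatter converts a raw Y-axis value into its label text.
// This mirrors the hook described above; the real termdash
// signature may differ.
type ValueFormatter func(v float64) string

// durationFormatter treats each value as a number of seconds and
// renders it as a rounded time.Duration label.
func durationFormatter(v float64) string {
    return time.Duration(v * float64(time.Second)).Round(time.Second).String()
}

func main() {
    var f ValueFormatter = durationFormatter
    fmt.Println(f(90)) // ninety seconds, rendered as minutes and seconds
}
```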
### Fixed
- Lint issues found on the Go report card.
- An internal library belonging to the `Text` widget was incorrectly passing
`math.MaxUint32` as an int argument.
## [0.9.1] - 15-May-2019
### Fixed
- Termdash could deadlock when a `Button` or a `TextInput` was configured to
call the `Container.Update` method.
## [0.9.0] - 28-Apr-2019
### Added
- The `TextInput` widget, an input field allowing interactive text input.
- The `Donut` widget can now display an optional text label under the donut.
### Changed
- Widgets now get information whether their container is focused when Draw is
executed.
- The SegmentDisplay widget now has a method that returns the observed character
capacity the last time Draw was called.
- The grid.Builder API now allows users to specify options for intermediate
containers, i.e. containers that don't have widgets, but represent rows and
columns.
- Line chart widget now allows `math.NaN` values to represent "no value" (values
that will not be rendered) in the values slice.
#### Breaking API changes
- The widgetapi.Widget.Draw method now accepts a second argument which provides
widgets with additional metadata. This affects all implemented widgets.
- Termdash now requires at least Go version 1.10, which allows us to utilize
`math.Round` instead of our own implementation and `strings.Builder` instead
of `bytes.Buffer`.
- Terminal shortcuts like `Ctrl-A` no longer come as two separate events,
Termdash now mirrors termbox-go and sends these as one event.
## [0.8.0] - 30-Mar-2019
### Added
- New API for building layouts, a grid.Builder. Allows defining the layout
iteratively as repetitive Elements, Rows and Columns.
- Containers now support margin around them and padding of their content.
- Container now supports dynamic layout changes via the new Update method.
### Changed
- The Text widget now supports content wrapping on word boundaries.
- The BarChart and SparkLine widgets now have a method that returns the
observed value capacity the last time Draw was called.
- Moving widgetapi out of the internal directory to allow external users to
develop their own widgets.
- Event delivery to widgets now has a stable defined order and happens when the
container is unlocked so that widgets can trigger dynamic layout changes.
### Fixed
- The termdash_test now correctly waits until all subscribers processed events,
not just received them.
- Container focus tracker now correctly tracks focus changes in enlarged areas,
i.e. when the terminal size increased.
- The BarChart, LineChart and SegmentDisplay widgets now protect against
external mutation of the values passed into them by copying the data they
receive.
## [0.7.2] - 25-Feb-2019
### Added
- Test coverage for data only packages.
### Changed
- Refactoring packages that contained a mix of public and internal identifiers.
#### Breaking API changes
The following packages were refactored; no impact is expected, as the removed
identifiers shouldn't be used externally.
- Functions align.Text and align.Rectangle were moved to a new
internal/alignfor package.
- Types cell.Cell and cell.Buffer were moved into a new internal/canvas/buffer
package.
## [0.7.1] - 24-Feb-2019
### Fixed
- Some of the packages that were moved into internal are required externally.
This release makes them available again.
### Changed
#### Breaking API changes
- The draw.LineStyle enum was refactored into its own package
linestyle.LineStyle. Users will have to replace:
- draw.LineStyleNone -> linestyle.None
- draw.LineStyleLight -> linestyle.Light
- draw.LineStyleDouble -> linestyle.Double
- draw.LineStyleRound -> linestyle.Round
## [0.7.0] - 24-Feb-2019
### Added
#### New widgets
- The Button widget.
#### Improvements to documentation
- Clearly marked the public API surface by moving private packages into
internal directory.
- Started a GitHub wiki for Termdash.
#### Improvements to the LineChart widget
- The LineChart widget can display X axis labels in vertical orientation.
- The LineChart widget allows the user to specify a custom scale for the Y
axis.
- The LineChart widget now has an option that disables scaling of the X axis.
Useful for applications that want to continuously feed data and make them
"roll" through the linechart.
- The LineChart widget now has a method that returns the observed capacity of
the LineChart the last time Draw was called.
- The LineChart widget now supports zoom of the content triggered by mouse
events.
#### Improvements to the Text widget
- The Text widget now has a Write option that atomically replaces the entire
text content.
#### Improvements to the infrastructure
- A function that draws text vertically.
- A non-blocking event distribution system that can throttle repetitive events.
- Generalized mouse button FSM for use in widgets that need to track mouse
button clicks.
### Changed
- Termbox is now initialized in 256 color mode by default.
- The infrastructure now uses the non-blocking event distribution system to
distribute events to subscribers. Each widget is now an individual
subscriber.
- The infrastructure now throttles event driven screen redraw rather than
redrawing for each input event.
- Widgets can now specify the scope at which they want to receive keyboard and
mouse events.
#### Breaking API changes
##### High impact
- The constructors of all the widgets now also return an error so that they
can validate the options. This is a breaking change for the following
widgets: BarChart, Gauge, LineChart, SparkLine, Text. The callers will have
to handle the returned error.
##### Low impact
- The container package no longer exports separate methods to receive Keyboard
and Mouse events which were replaced by a Subscribe method for the event
distribution system. This shouldn't affect users as the removed methods
aren't needed by container users.
- The widgetapi.Options struct now uses an enum instead of a boolean when
widget specifies if it wants keyboard or mouse events. This only impacts
development of new widgets.
### Fixed
- The LineChart widget now correctly determines the Y axis scale when multiple
series are provided.
- Lint issues in the codebase, and updated Travis configuration so that golint
is executed on every run.
- Termdash now correctly starts in locales like zh_CN.UTF-8 where some of the
characters it uses internally can have ambiguous width.
## [0.6.1] - 12-Feb-2019
### Fixed
- The LineChart widget now correctly places custom labels.
## [0.6.0] - 07-Feb-2019
### Added
- The SegmentDisplay widget.
- A CHANGELOG.
- New line styles for borders.
### Changed
- Better recordings of the individual demos.
### Fixed
- The LineChart now has an option to change the behavior of the Y axis from
zero anchored to adaptive.
- Lint errors reported on the Go report card.
- Widgets now correctly handle a race when new user data are supplied between
calls to their Options() and Draw() methods.
## [0.5.0] - 21-Jan-2019
### Added
- Draw primitives for drawing circles.
- The Donut widget.
### Fixed
- Bugfixes in the braille canvas.
- Lint errors reported on the Go report card.
- Flaky behavior in termdash_test.
## [0.4.0] - 15-Jan-2019
### Added
- 256 color support.
- Variable size container splits.
- A more complete demo of the functionality.
### Changed
- Updated documentation and README.
## [0.3.0] - 13-Jan-2019
### Added
- Primitives for drawing lines.
- Implementation of a Braille canvas.
- The LineChart widget.
## [0.2.0] - 02-Jul-2018
### Added
- The SparkLine widget.
- The BarChart widget.
- Manually triggered redraw.
- Travis now checks for presence of licence headers.
### Fixed
- Fixed races in termdash_test.
## 0.1.0 - 13-Jun-2018
### Added
- Documentation of the project and its goals.
- Drawing infrastructure.
- Testing infrastructure.
- The Gauge widget.
- The Text widget.
[unreleased]: https://github.com/mum4k/termdash/compare/v0.12.2...devel
[0.12.2]: https://github.com/mum4k/termdash/compare/v0.12.1...v0.12.2
[0.12.1]: https://github.com/mum4k/termdash/compare/v0.12.0...v0.12.1
[0.12.0]: https://github.com/mum4k/termdash/compare/v0.11.0...v0.12.0
[0.11.0]: https://github.com/mum4k/termdash/compare/v0.10.0...v0.11.0
[0.10.0]: https://github.com/mum4k/termdash/compare/v0.9.1...v0.10.0
[0.9.1]: https://github.com/mum4k/termdash/compare/v0.9.0...v0.9.1
[0.9.0]: https://github.com/mum4k/termdash/compare/v0.8.0...v0.9.0
[0.8.0]: https://github.com/mum4k/termdash/compare/v0.7.2...v0.8.0
[0.7.2]: https://github.com/mum4k/termdash/compare/v0.7.1...v0.7.2
[0.7.1]: https://github.com/mum4k/termdash/compare/v0.7.0...v0.7.1
[0.7.0]: https://github.com/mum4k/termdash/compare/v0.6.1...v0.7.0
[0.6.1]: https://github.com/mum4k/termdash/compare/v0.6.0...v0.6.1
[0.6.0]: https://github.com/mum4k/termdash/compare/v0.5.0...v0.6.0
[0.5.0]: https://github.com/mum4k/termdash/compare/v0.4.0...v0.5.0
[0.4.0]: https://github.com/mum4k/termdash/compare/v0.3.0...v0.4.0
[0.3.0]: https://github.com/mum4k/termdash/compare/v0.2.0...v0.3.0
[0.2.0]: https://github.com/mum4k/termdash/compare/v0.1.0...v0.2.0


@@ -1,38 +0,0 @@
# How to Contribute
We'd love to accept your patches and contributions to this project. There are
just a few small guidelines you need to follow.
## Fork and merge into the "devel" branch
All development in the termdash repository must happen in the [devel
branch](https://github.com/mum4k/termdash/tree/devel). The devel branch is
merged into the master branch during the release of each new version.
When you fork the termdash repository, be sure to checkout the devel branch.
When you are creating a pull request, be sure to pull back into the devel
branch.
## Contributor License Agreement
Contributions to this project must be accompanied by a Contributor License
Agreement. You (or your employer) retain the copyright to your contribution;
this simply gives us permission to use and redistribute your contributions as
part of the project. Head over to <https://cla.developers.google.com/> to see
your current agreements on file or to sign a new one.
You generally only need to submit a CLA once, so if you've already submitted one
(even if it was for a different project), you probably don't need to do it
again.
## Code reviews
All submissions, including submissions by project members, require review. We
use GitHub pull requests for this purpose. Consult
[GitHub Help](https://help.github.com/articles/about-pull-requests/) for more
information on using pull requests.
## Community Guidelines
This project follows [Google's Open Source Community
Guidelines](https://opensource.google.com/conduct/).


@@ -1,201 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -1,215 +0,0 @@
[![Doc Status](https://godoc.org/github.com/mum4k/termdash?status.png)](https://godoc.org/github.com/mum4k/termdash)
[![Build Status](https://travis-ci.com/mum4k/termdash.svg?branch=master)](https://travis-ci.com/mum4k/termdash)
[![Sourcegraph](https://sourcegraph.com/github.com/mum4k/termdash/-/badge.svg)](https://sourcegraph.com/github.com/mum4k/termdash?badge)
[![Coverage Status](https://coveralls.io/repos/github/mum4k/termdash/badge.svg?branch=master)](https://coveralls.io/github/mum4k/termdash?branch=master)
[![Go Report Card](https://goreportcard.com/badge/github.com/mum4k/termdash)](https://goreportcard.com/report/github.com/mum4k/termdash)
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/mum4k/termdash/blob/master/LICENSE)
[![Mentioned in Awesome Go](https://awesome.re/mentioned-badge.svg)](https://github.com/avelino/awesome-go)
# [<img src="./doc/images/termdash.png" alt="termdashlogo" type="image/png" width="30%">](http://github.com/mum4k/termdash/wiki)
Termdash is a cross-platform, customizable, terminal-based dashboard.
[<img src="./doc/images/termdashdemo_0_9_0.gif" alt="termdashdemo" type="image/gif">](termdashdemo/termdashdemo.go)
The feature set is inspired by the
[gizak/termui](http://github.com/gizak/termui) project, which in turn was
inspired by
[yaronn/blessed-contrib](http://github.com/yaronn/blessed-contrib).
This rewrite focuses on code readability, maintainability and testability; see
the [design goals](doc/design_goals.md). It aims to achieve the following
[requirements](doc/requirements.md). See the [high-level design](doc/hld.md)
for more details.
# Public API and status
The public API surface is documented in the
[wiki](http://github.com/mum4k/termdash/wiki).
Private packages can be identified by the presence of the **/private/**
directory in their import path. Stability of the private packages isn't
guaranteed and changes won't be backward compatible.
There might still be breaking changes to the public API, at least until the
project reaches version 1.0.0. Any breaking changes will be published in the
[changelog](CHANGELOG.md).
# Current feature set
- Full support for terminal window resizing throughout the infrastructure.
- Customizable layout, widget placement, borders, margins, padding, colors, etc.
- Dynamic layout changes at runtime.
- Binary tree and Grid forms of setting up the layout.
- Focusable containers and widgets.
- Processing of keyboard and mouse events.
- Periodic and event driven screen redraw.
- A library of widgets, see below.
- UTF-8 for all text elements.
- Drawing primitives (Go functions) for widget development with character and
sub-character resolution.
# Installation
To install this library, run the following:
```sh
go get -u github.com/mum4k/termdash
```
# Usage
The usage of most of these elements is demonstrated in
[termdashdemo.go](termdashdemo/termdashdemo.go). To execute the demo:
```sh
go run github.com/mum4k/termdash/termdashdemo/termdashdemo.go
```
# Documentation
Please refer to the [Termdash wiki](http://github.com/mum4k/termdash/wiki) for
all documentation and resources.
# Implemented Widgets
## The Button
Allows users to interact with the application; each button press runs a callback function.
Run the
[buttondemo](widgets/button/buttondemo/buttondemo.go).
```sh
go run github.com/mum4k/termdash/widgets/button/buttondemo/buttondemo.go
```
[<img src="./doc/images/buttondemo.gif" alt="buttondemo" type="image/gif" width="50%">](widgets/button/buttondemo/buttondemo.go)
## The TextInput
Allows users to interact with the application by entering, editing and
submitting text data. Run the
[textinputdemo](widgets/textinput/textinputdemo/textinputdemo.go).
```sh
go run github.com/mum4k/termdash/widgets/textinput/textinputdemo/textinputdemo.go
```
[<img src="./doc/images/textinputdemo.gif" alt="textinputdemo" type="image/gif" width="80%">](widgets/textinput/textinputdemo/textinputdemo.go)
## The Gauge
Displays the progress of an operation. Run the
[gaugedemo](widgets/gauge/gaugedemo/gaugedemo.go).
```sh
go run github.com/mum4k/termdash/widgets/gauge/gaugedemo/gaugedemo.go
```
[<img src="./doc/images/gaugedemo.gif" alt="gaugedemo" type="image/gif">](widgets/gauge/gaugedemo/gaugedemo.go)
## The Donut
Visualizes progress of an operation as a partial or a complete donut. Run the
[donutdemo](widgets/donut/donutdemo/donutdemo.go).
```shell
go run github.com/mum4k/termdash/widgets/donut/donutdemo/donutdemo.go
```
[<img src="./doc/images/donutdemo.gif" alt="donutdemo" type="image/gif">](widgets/donut/donutdemo/donutdemo.go)
## The Text
Displays text content, supports trimming and scrolling of content. Run the
[textdemo](widgets/text/textdemo/textdemo.go).
```shell
go run github.com/mum4k/termdash/widgets/text/textdemo/textdemo.go
```
[<img src="./doc/images/textdemo.gif" alt="textdemo" type="image/gif">](widgets/text/textdemo/textdemo.go)
## The SparkLine
Draws a graph showing a series of values as vertical bars. The bars can have
sub-cell height. Run the
[sparklinedemo](widgets/sparkline/sparklinedemo/sparklinedemo.go).
```shell
go run github.com/mum4k/termdash/widgets/sparkline/sparklinedemo/sparklinedemo.go
```
[<img src="./doc/images/sparklinedemo.gif" alt="sparklinedemo" type="image/gif" width="50%">](widgets/sparkline/sparklinedemo/sparklinedemo.go)
## The BarChart
Displays multiple bars showing relative ratios of values. Run the
[barchartdemo](widgets/barchart/barchartdemo/barchartdemo.go).
```shell
go run github.com/mum4k/termdash/widgets/barchart/barchartdemo/barchartdemo.go
```
[<img src="./doc/images/barchartdemo.gif" alt="barchartdemo" type="image/gif" width="50%">](widgets/barchart/barchartdemo/barchartdemo.go)
## The LineChart
Displays series of values on a line chart, supports zoom triggered by mouse
events. Run the
[linechartdemo](widgets/linechart/linechartdemo/linechartdemo.go).
```shell
go run github.com/mum4k/termdash/widgets/linechart/linechartdemo/linechartdemo.go
```
[<img src="./doc/images/linechartdemo.gif" alt="linechartdemo" type="image/gif" width="70%">](widgets/linechart/linechartdemo/linechartdemo.go)
## The SegmentDisplay
Displays text by simulating a 16-segment display. Run the
[segmentdisplaydemo](widgets/segmentdisplay/segmentdisplaydemo/segmentdisplaydemo.go).
```shell
go run github.com/mum4k/termdash/widgets/segmentdisplay/segmentdisplaydemo/segmentdisplaydemo.go
```
[<img src="./doc/images/segmentdisplaydemo.gif" alt="segmentdisplaydemo" type="image/gif">](widgets/segmentdisplay/segmentdisplaydemo/segmentdisplaydemo.go)
# Contributing
If you are willing to contribute, improve the infrastructure or develop a
widget, first of all thank you! Your help is appreciated.
Please see the [CONTRIBUTING.md](CONTRIBUTING.md) file for guidelines related
to Google's CLA and the code review requirements.
As stated above, the primary goal of this project is to develop readable, well
designed code; functionality and efficiency come second. This is achieved
through detailed code reviews, design discussions and following of the [design
guidelines](doc/design_guidelines.md). Please familiarize yourself with these
before contributing.
If you're developing a new widget, please see the [widget
development](doc/widget_development.md) section.
Termdash uses [this branching model](https://nvie.com/posts/a-successful-git-branching-model/). When you fork the repository, base your changes on the [devel](https://github.com/mum4k/termdash/tree/devel) branch and open your pull request against it. Commits to the master branch are limited to releases, major bug fixes and documentation updates.
# Similar projects in Go
- [clui](https://github.com/VladimirMarkelov/clui)
- [gocui](https://github.com/jroimartin/gocui)
- [gowid](https://github.com/gcla/gowid)
- [termui](https://github.com/gizak/termui)
- [tui-go](https://github.com/marcusolsson/tui-go)
- [tview](https://github.com/rivo/tview)
# Projects using Termdash
- [datadash](https://github.com/keithknott26/datadash): Visualize streaming or tabular data inside the terminal.
- [grafterm](https://github.com/slok/grafterm): Metrics dashboards visualization on the terminal.
- [perfstat](https://github.com/flaviostutz/perfstat): Analyze and show tips about possible bottlenecks in Linux systems.
# Disclaimer
This is not an official Google product.

// Copyright 2018 Google Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package align defines constants representing types of alignment.
package align
// Horizontal indicates the type of horizontal alignment.
type Horizontal int
// String implements fmt.Stringer.
func (h Horizontal) String() string {
if n, ok := horizontalNames[h]; ok {
return n
}
return "HorizontalUnknown"
}
// horizontalNames maps Horizontal values to human readable names.
var horizontalNames = map[Horizontal]string{
HorizontalLeft: "HorizontalLeft",
HorizontalCenter: "HorizontalCenter",
HorizontalRight: "HorizontalRight",
}
const (
// HorizontalLeft is left alignment along the horizontal axis.
HorizontalLeft Horizontal = iota
// HorizontalCenter is center alignment along the horizontal axis.
HorizontalCenter
// HorizontalRight is right alignment along the horizontal axis.
HorizontalRight
)
// Vertical indicates the type of vertical alignment.
type Vertical int
// String implements fmt.Stringer.
func (v Vertical) String() string {
if n, ok := verticalNames[v]; ok {
return n
}
return "VerticalUnknown"
}
// verticalNames maps Vertical values to human readable names.
var verticalNames = map[Vertical]string{
VerticalTop: "VerticalTop",
VerticalMiddle: "VerticalMiddle",
VerticalBottom: "VerticalBottom",
}
const (
// VerticalTop is top alignment along the vertical axis.
VerticalTop Vertical = iota
// VerticalMiddle is middle alignment along the vertical axis.
VerticalMiddle
// VerticalBottom is bottom alignment along the vertical axis.
VerticalBottom
)
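The typed constant block plus name map above is a common Go idiom for printable enums. A minimal self-contained sketch of the same pattern (the `Level` type and its values are illustrative, not part of termdash):

```go
package main

import "fmt"

// Level is a typed enum following the same pattern as align.Horizontal.
type Level int

const (
	// LevelLow is the lowest level.
	LevelLow Level = iota
	// LevelHigh is the highest level.
	LevelHigh
)

// levelNames maps Level values to human readable names.
var levelNames = map[Level]string{
	LevelLow:  "LevelLow",
	LevelHigh: "LevelHigh",
}

// String implements fmt.Stringer, falling back to a sentinel for
// values outside the known set.
func (l Level) String() string {
	if n, ok := levelNames[l]; ok {
		return n
	}
	return "LevelUnknown"
}

func main() {
	fmt.Println(LevelHigh, Level(42))
}
```

The map-based lookup keeps the names next to the constants and degrades gracefully for unknown values instead of panicking.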

// Copyright 2018 Google Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package cell implements cell options and attributes.
package cell
// Option is used to provide options for cells on a 2-D terminal.
type Option interface {
// Set sets the provided option.
Set(*Options)
}
// Options stores the provided options.
type Options struct {
FgColor Color
BgColor Color
}
// Set allows existing options to be passed as an option.
func (o *Options) Set(other *Options) {
*other = *o
}
// NewOptions returns a new Options instance after applying the provided options.
func NewOptions(opts ...Option) *Options {
o := &Options{}
for _, opt := range opts {
opt.Set(o)
}
return o
}
// option implements Option.
type option func(*Options)
// Set implements Option.set.
func (co option) Set(opts *Options) {
co(opts)
}
// FgColor sets the foreground color of the cell.
func FgColor(color Color) Option {
return option(func(co *Options) {
co.FgColor = color
})
}
// BgColor sets the background color of the cell.
func BgColor(color Color) Option {
return option(func(co *Options) {
co.BgColor = color
})
}
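The `Option`/`Options` pair above is Go's functional options pattern: callers pass a variadic list of opaque setters instead of a struct with exported knobs. A minimal self-contained sketch of the same pattern (the types and fields here are illustrative, simplified from the cell package):

```go
package main

import "fmt"

// Options holds the configurable settings.
type Options struct {
	FgColor int
	BgColor int
}

// Option mutates an Options instance.
type Option interface {
	Set(*Options)
}

// option adapts a plain function to the Option interface.
type option func(*Options)

// Set implements Option.Set.
func (o option) Set(opts *Options) { o(opts) }

// FgColor returns an Option that sets the foreground color.
func FgColor(c int) Option {
	return option(func(o *Options) { o.FgColor = c })
}

// NewOptions applies the provided options over zero-value defaults.
func NewOptions(opts ...Option) *Options {
	o := &Options{}
	for _, op := range opts {
		op.Set(o)
	}
	return o
}

func main() {
	o := NewOptions(FgColor(3))
	fmt.Println(o.FgColor, o.BgColor)
}
```

The pattern keeps the public API stable when new settings are added: new options are new functions, not new parameters.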

// Copyright 2018 Google Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package cell
import (
"fmt"
)
// color.go defines constants for cell colors.
// Color is the color of a cell.
type Color int
// String implements fmt.Stringer.
func (cc Color) String() string {
if n, ok := colorNames[cc]; ok {
return n
}
return fmt.Sprintf("Color:%d", cc)
}
// colorNames maps Color values to human readable names.
var colorNames = map[Color]string{
ColorDefault: "ColorDefault",
ColorBlack: "ColorBlack",
ColorRed: "ColorRed",
ColorGreen: "ColorGreen",
ColorYellow: "ColorYellow",
ColorBlue: "ColorBlue",
ColorMagenta: "ColorMagenta",
ColorCyan: "ColorCyan",
ColorWhite: "ColorWhite",
}
// The supported terminal colors.
const (
ColorDefault Color = iota
// 8 "system" colors.
ColorBlack
ColorRed
ColorGreen
ColorYellow
ColorBlue
ColorMagenta
ColorCyan
ColorWhite
)
// ColorNumber sets a color using its number.
// Make sure your terminal is set to a terminalapi.ColorMode that supports the
// target color. The provided value must be in the range 0-255.
// Larger or smaller values will be reset to the default color.
//
// For reference on these colors see the Xterm number in:
// https://jonasjacek.github.io/colors/
func ColorNumber(n int) Color {
if n < 0 || n > 255 {
return ColorDefault
}
return Color(n + 1) // Colors are off-by-one due to ColorDefault being zero.
}
// ColorRGB6 sets a color using the 6x6x6 terminal color.
// Make sure your terminal is set to the terminalapi.ColorMode256 mode.
// The provided values (r, g, b) must be in the range 0-5.
// Larger or smaller values will be reset to the default color.
//
// For reference on these colors see:
// https://superuser.com/questions/783656/whats-the-deal-with-terminal-colors
func ColorRGB6(r, g, b int) Color {
for _, c := range []int{r, g, b} {
if c < 0 || c > 5 {
return ColorDefault
}
}
return Color(0x10 + 36*r + 6*g + b + 1) // Colors are off-by-one due to ColorDefault being zero.
}
// ColorRGB24 sets a color using the 24 bit web color scheme.
// Make sure your terminal is set to the terminalapi.ColorMode256 mode.
// The provided values (r, g, b) must be in the range 0-255.
// Larger or smaller values will be reset to the default color.
//
// For reference on these colors see the RGB column in:
// https://jonasjacek.github.io/colors/
func ColorRGB24(r, g, b int) Color {
for _, c := range []int{r, g, b} {
if c < 0 || c > 255 {
return ColorDefault
}
}
return ColorRGB6(r/51, g/51, b/51)
}
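ColorRGB24 reduces each 0-255 component to the 6x6x6 cube by integer-dividing by 51 (255/5), and ColorRGB6 then indexes the xterm cube starting at 0x10 with the same off-by-one shift caused by ColorDefault occupying zero. A standalone sketch of that arithmetic (the functions mirror the ones above but are re-implemented here for illustration, returning plain ints):

```go
package main

import "fmt"

// colorRGB6 maps r, g, b in 0..5 onto the xterm 256-color cube.
// The result is shifted by one because index 0 is reserved for the
// default color; out-of-range input yields 0 (the default color).
func colorRGB6(r, g, b int) int {
	for _, c := range []int{r, g, b} {
		if c < 0 || c > 5 {
			return 0
		}
	}
	return 0x10 + 36*r + 6*g + b + 1
}

// colorRGB24 scales 0..255 components down to the 0..5 cube.
// 255/51 == 5, so the brightest component lands on the cube's edge.
func colorRGB24(r, g, b int) int {
	for _, c := range []int{r, g, b} {
		if c < 0 || c > 255 {
			return 0
		}
	}
	return colorRGB6(r/51, g/51, b/51)
}

func main() {
	fmt.Println(colorRGB6(0, 0, 0))       // darkest cube entry
	fmt.Println(colorRGB24(255, 255, 255)) // brightest cube entry
}
```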

// Copyright 2018 Google Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
/*
Package container defines a type that wraps other containers or widgets.
The container supports splitting container into sub containers, defining
container styles and placing widgets. The container also creates and manages
canvases assigned to the placed widgets.
*/
package container
import (
"errors"
"fmt"
"image"
"sync"
"github.com/mum4k/termdash/linestyle"
"github.com/mum4k/termdash/private/alignfor"
"github.com/mum4k/termdash/private/area"
"github.com/mum4k/termdash/private/event"
"github.com/mum4k/termdash/terminal/terminalapi"
"github.com/mum4k/termdash/widgetapi"
)
// Container wraps either sub containers or widgets and positions them on the
// terminal.
// This is thread-safe.
type Container struct {
// parent is the parent container, nil if this is the root container.
parent *Container
// The sub containers, if these aren't nil, the widget must be.
first *Container
second *Container
// term is the terminal this container is placed on.
// All containers in the tree share the same terminal.
term terminalapi.Terminal
// focusTracker tracks the active (focused) container.
// All containers in the tree share the same tracker.
focusTracker *focusTracker
// area is the area of the terminal this container has access to.
// Initialized the first time Draw is called.
area image.Rectangle
// opts are the options provided to the container.
opts *options
// clearNeeded indicates if the terminal needs to be cleared next time we
// are drawing the container.
// This is required if the container was updated and thus the layout might
// have changed.
clearNeeded bool
// mu protects the container tree.
// All containers in the tree share the same lock.
mu *sync.Mutex
}
// String represents the container metadata in a human readable format.
// Implements fmt.Stringer.
func (c *Container) String() string {
return fmt.Sprintf("Container@%p{parent:%p, first:%p, second:%p, area:%+v}", c, c.parent, c.first, c.second, c.area)
}
// New returns a new root container that will use the provided terminal and
// applies the provided options.
func New(t terminalapi.Terminal, opts ...Option) (*Container, error) {
root := &Container{
term: t,
opts: newOptions( /* parent = */ nil),
mu: &sync.Mutex{},
}
// Initially the root is focused.
root.focusTracker = newFocusTracker(root)
if err := applyOptions(root, opts...); err != nil {
return nil, err
}
if err := validateOptions(root); err != nil {
return nil, err
}
return root, nil
}
// newChild creates a new child container of the given parent.
func newChild(parent *Container, opts []Option) (*Container, error) {
child := &Container{
parent: parent,
term: parent.term,
focusTracker: parent.focusTracker,
opts: newOptions(parent.opts),
mu: parent.mu,
}
if err := applyOptions(child, opts...); err != nil {
return nil, err
}
return child, nil
}
// hasBorder determines if this container has a border.
func (c *Container) hasBorder() bool {
return c.opts.border != linestyle.None
}
// hasWidget determines if this container has a widget.
func (c *Container) hasWidget() bool {
return c.opts.widget != nil
}
// usable returns the usable area in this container.
// This depends on whether the container has a border, etc.
func (c *Container) usable() image.Rectangle {
if c.hasBorder() {
return area.ExcludeBorder(c.area)
}
return c.area
}
// widgetArea returns the area in the container that is available for the
// widget's canvas. Takes the container border, widget's requested maximum size
// and ratio and container's alignment into account.
// Returns a zero area if the container has no widget.
func (c *Container) widgetArea() (image.Rectangle, error) {
if !c.hasWidget() {
return image.ZR, nil
}
padded, err := c.opts.padding.apply(c.usable())
if err != nil {
return image.ZR, err
}
wOpts := c.opts.widget.Options()
adjusted := padded
if maxX := wOpts.MaximumSize.X; maxX > 0 && adjusted.Dx() > maxX {
adjusted.Max.X -= adjusted.Dx() - maxX
}
if maxY := wOpts.MaximumSize.Y; maxY > 0 && adjusted.Dy() > maxY {
adjusted.Max.Y -= adjusted.Dy() - maxY
}
if wOpts.Ratio.X > 0 && wOpts.Ratio.Y > 0 {
adjusted = area.WithRatio(adjusted, wOpts.Ratio)
}
aligned, err := alignfor.Rectangle(padded, adjusted, c.opts.hAlign, c.opts.vAlign)
if err != nil {
return image.ZR, err
}
return aligned, nil
}
// split splits the container's usable area into child areas.
// Panics if the container isn't configured for a split.
func (c *Container) split() (image.Rectangle, image.Rectangle, error) {
ar, err := c.opts.padding.apply(c.usable())
if err != nil {
return image.ZR, image.ZR, err
}
if c.opts.splitFixed > DefaultSplitFixed {
if c.opts.split == splitTypeVertical {
return area.VSplitCells(ar, c.opts.splitFixed)
}
return area.HSplitCells(ar, c.opts.splitFixed)
}
if c.opts.split == splitTypeVertical {
return area.VSplit(ar, c.opts.splitPercent)
}
return area.HSplit(ar, c.opts.splitPercent)
}
// createFirst creates the first sub container of this container.
func (c *Container) createFirst(opts []Option) error {
first, err := newChild(c, opts)
if err != nil {
return err
}
c.first = first
return nil
}
// createSecond creates the second sub container of this container.
func (c *Container) createSecond(opts []Option) error {
second, err := newChild(c, opts)
if err != nil {
return err
}
c.second = second
return nil
}
// Draw draws this container and all of its sub containers.
func (c *Container) Draw() error {
c.mu.Lock()
defer c.mu.Unlock()
if c.clearNeeded {
if err := c.term.Clear(); err != nil {
return fmt.Errorf("term.Clear => error: %v", err)
}
c.clearNeeded = false
}
// Update the area we are tracking for focus in case the terminal size
// changed.
ar, err := area.FromSize(c.term.Size())
if err != nil {
return err
}
c.focusTracker.updateArea(ar)
return drawTree(c)
}
// Update updates container with the specified id by setting the provided
// options. This can be used to perform dynamic layout changes, i.e. anything
// between replacing the widget in the container and completely changing the
// layout and splits.
// The argument id must match exactly one container that was created with a
// matching ID() option. The argument id must not be an empty string.
func (c *Container) Update(id string, opts ...Option) error {
c.mu.Lock()
defer c.mu.Unlock()
target, err := findID(c, id)
if err != nil {
return err
}
c.clearNeeded = true
if err := applyOptions(target, opts...); err != nil {
return err
}
if err := validateOptions(c); err != nil {
return err
}
// The currently focused container might not be reachable anymore, because
// it was under the target. If that is so, move the focus up to the target.
if !c.focusTracker.reachableFrom(c) {
c.focusTracker.setActive(target)
}
return nil
}
// updateFocus processes the mouse event and determines if it changes the
// focused container.
// Caller must hold c.mu.
func (c *Container) updateFocus(m *terminalapi.Mouse) {
target := pointCont(c, m.Position)
if target == nil { // Ignore mouse clicks where no containers are.
return
}
c.focusTracker.mouse(target, m)
}
// processEvent processes events delivered to the container.
func (c *Container) processEvent(ev terminalapi.Event) error {
// This is done in two stages.
// 1) under lock we traverse the container and identify all targets
// (widgets) that should receive the event.
// 2) lock is released and events are delivered to the widgets. Widgets
// themselves are thread-safe. The lock must be released when delivering,
// because some widgets might try to mutate the container when they
// receive the event, like dynamically change the layout.
c.mu.Lock()
sendFn, err := c.prepareEvTargets(ev)
c.mu.Unlock()
if err != nil {
return err
}
return sendFn()
}
// prepareEvTargets returns a closure, that when called delivers the event to
// widgets that registered for it.
// Also processes the event on behalf of the container (tracks keyboard focus).
// Caller must hold c.mu.
func (c *Container) prepareEvTargets(ev terminalapi.Event) (func() error, error) {
switch e := ev.(type) {
case *terminalapi.Mouse:
c.updateFocus(ev.(*terminalapi.Mouse))
targets, err := c.mouseEvTargets(e)
if err != nil {
return nil, err
}
return func() error {
for _, mt := range targets {
if err := mt.widget.Mouse(mt.ev); err != nil {
return err
}
}
return nil
}, nil
case *terminalapi.Keyboard:
targets := c.keyEvTargets()
return func() error {
for _, w := range targets {
if err := w.Keyboard(e); err != nil {
return err
}
}
return nil
}, nil
default:
return nil, fmt.Errorf("container received an unsupported event type %T", ev)
}
}
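The two-stage delivery described in processEvent, collecting targets under the lock and delivering only after releasing it, can be sketched in isolation. The registry type below is illustrative, not termdash's event system:

```go
package main

import (
	"fmt"
	"sync"
)

// registry collects handlers under a lock but invokes them after the
// lock is released, so handlers may safely re-enter the registry
// (e.g. to register more handlers) without deadlocking.
type registry struct {
	mu       sync.Mutex
	handlers []func(string)
}

// add registers a handler for future events.
func (r *registry) add(h func(string)) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.handlers = append(r.handlers, h)
}

// dispatch delivers ev to all registered handlers.
func (r *registry) dispatch(ev string) {
	r.mu.Lock()
	targets := append([]func(string){}, r.handlers...) // snapshot under lock
	r.mu.Unlock()
	for _, h := range targets { // deliver without holding the lock
		h(ev)
	}
}

func main() {
	r := &registry{}
	r.add(func(ev string) { fmt.Println("got", ev) })
	r.dispatch("keypress")
}
```

Snapshotting under the lock trades a small copy for freedom from re-entrancy bugs, the same trade-off prepareEvTargets makes by returning a closure.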
// keyEvTargets returns those widgets found in the container that should
// receive this keyboard event.
// Caller must hold c.mu.
func (c *Container) keyEvTargets() []widgetapi.Widget {
var (
errStr string
widgets []widgetapi.Widget
)
// All the widgets that should receive this event.
// For now stable ordering (preOrder).
preOrder(c, &errStr, visitFunc(func(cur *Container) error {
if !cur.hasWidget() {
return nil
}
wOpt := cur.opts.widget.Options()
switch wOpt.WantKeyboard {
case widgetapi.KeyScopeNone:
// Widget doesn't want any keyboard events.
return nil
case widgetapi.KeyScopeFocused:
if cur.focusTracker.isActive(cur) {
widgets = append(widgets, cur.opts.widget)
}
case widgetapi.KeyScopeGlobal:
widgets = append(widgets, cur.opts.widget)
}
return nil
}))
return widgets
}
// mouseEvTarget contains a mouse event adjusted relative to the widget's area
// and the widget that should receive it.
type mouseEvTarget struct {
// widget is the widget that should receive the mouse event.
widget widgetapi.Widget
// ev is the adjusted mouse event.
ev *terminalapi.Mouse
}
// newMouseEvTarget returns a new mouseEvTarget.
func newMouseEvTarget(w widgetapi.Widget, wArea image.Rectangle, ev *terminalapi.Mouse) *mouseEvTarget {
return &mouseEvTarget{
widget: w,
ev: adjustMouseEv(ev, wArea),
}
}
// mouseEvTargets returns those widgets found in the container that should
// receive this mouse event.
// Caller must hold c.mu.
func (c *Container) mouseEvTargets(m *terminalapi.Mouse) ([]*mouseEvTarget, error) {
var (
errStr string
widgets []*mouseEvTarget
)
// All the widgets that should receive this event.
// For now stable ordering (preOrder).
preOrder(c, &errStr, visitFunc(func(cur *Container) error {
if !cur.hasWidget() {
return nil
}
wOpts := cur.opts.widget.Options()
wa, err := cur.widgetArea()
if err != nil {
return err
}
switch wOpts.WantMouse {
case widgetapi.MouseScopeNone:
// Widget doesn't want any mouse events.
return nil
case widgetapi.MouseScopeWidget:
// Only if the event falls inside of the widget's canvas.
if m.Position.In(wa) {
widgets = append(widgets, newMouseEvTarget(cur.opts.widget, wa, m))
}
case widgetapi.MouseScopeContainer:
// Only if the event falls inside the widget's parent container.
if m.Position.In(cur.area) {
widgets = append(widgets, newMouseEvTarget(cur.opts.widget, wa, m))
}
case widgetapi.MouseScopeGlobal:
// Widget wants all mouse events.
widgets = append(widgets, newMouseEvTarget(cur.opts.widget, wa, m))
}
return nil
}))
if errStr != "" {
return nil, errors.New(errStr)
}
return widgets, nil
}
// Subscribe tells the container to subscribe itself and widgets to the
// provided event distribution system.
// This method is private to termdash, stability isn't guaranteed and changes
// won't be backward compatible.
func (c *Container) Subscribe(eds *event.DistributionSystem) {
c.mu.Lock()
defer c.mu.Unlock()
// maxReps is the maximum number of repetitive events towards widgets
// before we throttle them.
const maxReps = 10
// Subscribe the container itself in order to track keyboard focus.
want := []terminalapi.Event{
&terminalapi.Keyboard{},
&terminalapi.Mouse{},
}
eds.Subscribe(want, func(ev terminalapi.Event) {
if err := c.processEvent(ev); err != nil {
eds.Event(terminalapi.NewErrorf("failed to process event %v: %v", ev, err))
}
}, event.MaxRepetitive(maxReps))
}
// adjustMouseEv adjusts the mouse event relative to the widget area.
func adjustMouseEv(m *terminalapi.Mouse, wArea image.Rectangle) *terminalapi.Mouse {
// The sent mouse coordinate is relative to the widget canvas, i.e. zero
// based, even though the widget might not be in the top left corner on the
// terminal.
offset := wArea.Min
if m.Position.In(wArea) {
return &terminalapi.Mouse{
Position: m.Position.Sub(offset),
Button: m.Button,
}
}
return &terminalapi.Mouse{
Position: image.Point{-1, -1},
Button: m.Button,
}
}

// Copyright 2018 Google Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package container
// draw.go contains logic to draw containers and the contained widgets.
import (
"errors"
"fmt"
"image"
"github.com/mum4k/termdash/cell"
"github.com/mum4k/termdash/private/area"
"github.com/mum4k/termdash/private/canvas"
"github.com/mum4k/termdash/private/draw"
"github.com/mum4k/termdash/widgetapi"
)
// drawTree draws this container and all of its sub containers.
func drawTree(c *Container) error {
var errStr string
root := rootCont(c)
size := root.term.Size()
ar, err := root.opts.margin.apply(image.Rect(0, 0, size.X, size.Y))
if err != nil {
return err
}
root.area = ar
preOrder(root, &errStr, visitFunc(func(c *Container) error {
first, second, err := c.split()
if err != nil {
return err
}
if c.first != nil {
ar, err := c.first.opts.margin.apply(first)
if err != nil {
return err
}
c.first.area = ar
}
if c.second != nil {
ar, err := c.second.opts.margin.apply(second)
if err != nil {
return err
}
c.second.area = ar
}
return drawCont(c)
}))
if errStr != "" {
return errors.New(errStr)
}
return nil
}
// drawBorder draws the border around the container if requested.
func drawBorder(c *Container) error {
if !c.hasBorder() {
return nil
}
cvs, err := canvas.New(c.area)
if err != nil {
return err
}
ar, err := area.FromSize(cvs.Size())
if err != nil {
return err
}
var cOpts []cell.Option
if c.focusTracker.isActive(c) {
cOpts = append(cOpts, cell.FgColor(c.opts.inherited.focusedColor))
} else {
cOpts = append(cOpts, cell.FgColor(c.opts.inherited.borderColor))
}
if err := draw.Border(cvs, ar,
draw.BorderLineStyle(c.opts.border),
draw.BorderTitle(c.opts.borderTitle, draw.OverrunModeThreeDot, cOpts...),
draw.BorderTitleAlign(c.opts.borderTitleHAlign),
draw.BorderCellOpts(cOpts...),
); err != nil {
return err
}
return cvs.Apply(c.term)
}
// drawWidget requests the widget to draw on the canvas.
func drawWidget(c *Container) error {
widgetArea, err := c.widgetArea()
if err != nil {
return err
}
if widgetArea == image.ZR {
return nil
}
if !c.hasWidget() {
return nil
}
needSize := image.Point{1, 1}
wOpts := c.opts.widget.Options()
if wOpts.MinimumSize.X > 0 && wOpts.MinimumSize.Y > 0 {
needSize = wOpts.MinimumSize
}
if widgetArea.Dx() < needSize.X || widgetArea.Dy() < needSize.Y {
return drawResize(c, c.usable())
}
cvs, err := canvas.New(widgetArea)
if err != nil {
return err
}
meta := &widgetapi.Meta{
Focused: c.focusTracker.isActive(c),
}
if err := c.opts.widget.Draw(cvs, meta); err != nil {
return err
}
return cvs.Apply(c.term)
}
// drawResize draws a unicode character indicating that the size is too small to draw this container.
// Does nothing if the size is smaller than one cell, leaving no space for the character.
func drawResize(c *Container, area image.Rectangle) error {
if area.Dx() < 1 || area.Dy() < 1 {
return nil
}
cvs, err := canvas.New(area)
if err != nil {
return err
}
if err := draw.ResizeNeeded(cvs); err != nil {
return err
}
return cvs.Apply(c.term)
}
// drawCont draws the container and its widget.
func drawCont(c *Container) error {
if us := c.usable(); us.Dx() <= 0 || us.Dy() <= 0 {
return drawResize(c, c.area)
}
if err := drawBorder(c); err != nil {
return fmt.Errorf("unable to draw container border: %v", err)
}
if err := drawWidget(c); err != nil {
return fmt.Errorf("unable to draw widget %T: %v", c.opts.widget, err)
}
return nil
}

// Copyright 2018 Google Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package container
// focus.go contains code that tracks the focused container.
import (
"image"
"github.com/mum4k/termdash/mouse"
"github.com/mum4k/termdash/private/button"
"github.com/mum4k/termdash/terminal/terminalapi"
)
// pointCont finds the top-most (on the screen) container whose area contains
// the given point. Returns nil if none of the containers in the tree contain
// this point.
func pointCont(c *Container, p image.Point) *Container {
var (
errStr string
cont *Container
)
postOrder(rootCont(c), &errStr, visitFunc(func(c *Container) error {
if p.In(c.area) && cont == nil {
cont = c
}
return nil
}))
return cont
}
// focusTracker tracks the active (focused) container.
// This is not thread-safe, the implementation assumes that the owner of
// focusTracker performs locking.
type focusTracker struct {
// container is the currently focused container.
container *Container
// candidate is the container that might become focused next. I.e. we got
// a mouse click and now waiting for a release or a timeout.
candidate *Container
// buttonFSM is a state machine tracking mouse clicks in containers and
// moving focus from one container to the next.
buttonFSM *button.FSM
}
// newFocusTracker returns a new focus tracker with focus set at the provided
// container.
func newFocusTracker(c *Container) *focusTracker {
return &focusTracker{
container: c,
// Mouse FSM tracking clicks inside the entire area for the root
// container.
buttonFSM: button.NewFSM(mouse.ButtonLeft, c.area),
}
}
// isActive determines if the provided container is the currently active container.
func (ft *focusTracker) isActive(c *Container) bool {
return ft.container == c
}
// setActive sets the currently active container to the one provided.
func (ft *focusTracker) setActive(c *Container) {
ft.container = c
}
// mouse identifies mouse events that change the focused container and tracks
// the focused container in the tree.
// The argument target is the container onto which the mouse event landed.
func (ft *focusTracker) mouse(target *Container, m *terminalapi.Mouse) {
clicked, bs := ft.buttonFSM.Event(m)
switch {
case bs == button.Down:
ft.candidate = target
case bs == button.Up && clicked:
if target == ft.candidate {
ft.container = target
}
}
}
// updateArea updates the area that the focus tracker considers active for
// mouse clicks.
func (ft *focusTracker) updateArea(ar image.Rectangle) {
ft.buttonFSM.UpdateArea(ar)
}
// reachableFrom asserts whether the currently focused container is reachable
// from the provided node in the tree.
func (ft *focusTracker) reachableFrom(node *Container) bool {
var (
errStr string
reachable bool
)
preOrder(node, &errStr, visitFunc(func(c *Container) error {
if c == ft.container {
reachable = true
}
return nil
}))
return reachable
}
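The press/release matching done via the button FSM can be reduced to a tiny state sketch: focus moves only when a press and the following release land on the same container. The tracker type below is illustrative, not termdash's button.FSM:

```go
package main

import "fmt"

// tracker moves focus to a target only when a mouse press and the
// following release both land on the same target.
type tracker struct {
	focused   string // the currently focused target
	candidate string // target of the last press, awaiting a release
}

// press records the target under the mouse press.
func (t *tracker) press(target string) { t.candidate = target }

// release moves focus if the release matches the earlier press.
func (t *tracker) release(target string) {
	if target == t.candidate && target != "" {
		t.focused = target
	}
	t.candidate = ""
}

func main() {
	t := &tracker{focused: "root"}
	t.press("widgetA")
	t.release("widgetB") // press/release mismatch: focus unchanged
	fmt.Println(t.focused)
	t.press("widgetA")
	t.release("widgetA") // matching pair: focus moves
	fmt.Println(t.focused)
}
```

Requiring the matching release is what lets a user drag the mouse off a container to cancel a focus change.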
