Compare commits


544 Commits

Author SHA1 Message Date
Cédric Verstraeten
ccf4034cc8 Merge pull request #252 from kerberos-io/fix/close-mp4-after-started
fix/close-mp4-after-started
2026-03-03 15:21:12 +01:00
Cédric Verstraeten
a34836e8f4 Delay MP4 creation until the first keyframe is received to ensure valid recordings 2026-03-03 14:16:39 +00:00
Cédric Verstraeten
dd1464d1be Fix recording closure condition to ensure it only triggers after recording has started 2026-03-03 14:03:11 +00:00
Cédric Verstraeten
2c02e0aeb1 Merge pull request #250 from kerberos-io/fix/add-avc-description-fallback
fix/add-avc-description-fallback
2026-02-27 11:48:34 +01:00
cedricve
d5464362bb Add AVC descriptor fallback for SPS parse errors
When setting the AVC descriptor fails in MP4.Close(), attempt a fallback that constructs an AvcC/avc1 sample entry from available SPS/PPS NALUs. Adds github.com/Eyevinn/mp4ff/avc import and two helpers: addAVCDescriptorFallback (builds a visual sample entry, sets tkhd width/height if available, and inserts it into stsd) and buildAVCDecConfRecFromSPS (creates an avc.DecConfRec from SPS/PPS bytes by extracting profile/compat/level and filling defaults). Logs errors and warns when the fallback is used. This provides resilience against SPS parsing errors when writing the MP4 track descriptor.
2026-02-27 11:35:22 +01:00
Cédric Verstraeten
5bcefd0015 Merge pull request #249 from kerberos-io/feature/enhance-avc-hevc-ssp-nalus
feature/enhance-avc-hevc-ssp-nalus
2026-02-27 11:12:03 +01:00
cedricve
5bb9def42d Normalize and debug H264/H265 parameter sets
Replace direct sanitizeParameterSets usage with normalizeH264ParameterSets and normalizeH265ParameterSets in mp4.Close. The new functions split Annex-B blobs, strip start codes, detect NALU types (SPS/PPS for AVC; VPS/SPS/PPS for HEVC), aggregate distinct parameter sets and fall back to sanitizeParameterSets if none are found. Added splitParamSetNALUs and formatNaluDebug helpers and debug logging to output concise parameter-set summaries before setting AVC/HEVC descriptors. These changes improve handling of concatenated Annex-B parameter set blobs and make debugging parameter extraction easier.
2026-02-27 11:09:28 +01:00
Cédric Verstraeten
ff38ccbadf Merge pull request #248 from kerberos-io/fix/sanitize-parameter-sets
fix/sanitize-parameter-sets
2026-02-26 20:43:53 +01:00
cedricve
f64e899de9 Populate/sanitize NALUs and avoid empty MP4
Fill missing SPS/PPS/VPS from camera config before closing recordings and warn when parameter sets are incomplete (for both continuous and motion-detection flows). Sanitize parameter sets (remove Annex-B start codes and drop empty NALUs) before writing AVC/HEVC descriptors. Prevent creation of empty MP4 files by flushing/closing and removing files when no audio/video samples were added, and only add an audio track when audio samples exist.
2026-02-26 20:37:10 +01:00
Cédric Verstraeten
b8a81d18af Merge pull request #247 from kerberos-io/fix/ensure-stsd
fix/ensure-stsd
2026-02-26 17:13:45 +01:00
cedricve
8c2e3e4cdd Recover video parameter sets from Annex B NALUs
Add updateVideoParameterSetsFromAnnexB to parse Annex B NALUs and populate missing SPS/PPS/VPS for H.264/H.265 streams. Call this helper when adding video samples so in-band parameter sets can be recovered early. Also add error logging in Close() when setting AVC/HEVC descriptors fails. These changes improve robustness for streams that carry SPS/PPS/VPS inline.
2026-02-26 17:05:09 +01:00
Cédric Verstraeten
11c4ee518d Merge pull request #246 from kerberos-io/fix/handle-sps-pps-unknown-state
fix/handle-sps-pps-unknown-state
2026-02-26 16:24:54 +01:00
cedricve
51b9d76973 Improve SPS/PPS handling: add warnings for missing SPS/PPS during recording start 2026-02-26 15:24:34 +00:00
cedricve
f3c1cb9b82 Enhance SPS/PPS handling for main stream in gortsplib: add fallback for missing SDP 2026-02-26 15:21:54 +00:00
Cédric Verstraeten
a1368361e4 Merge pull request #242 from kerberos-io/fix/update-workflows-for-nightly-build
fix/update-workflows-for-nightly-build
2026-02-16 12:44:40 +01:00
Cédric Verstraeten
abfdea0179 Update issue-userstory-create.yml 2026-02-16 12:37:49 +01:00
Cédric Verstraeten
8aaeb62fa3 Merge pull request #241 from kerberos-io/fix/update-workflows-for-nightly-build
fix/update-workflows-for-nightly-build
2026-02-16 12:21:06 +01:00
Cédric Verstraeten
e30dd7d4a0 Add nightly build workflow for Docker images 2026-02-16 12:16:39 +01:00
Cédric Verstraeten
ac3f9aa4e8 Merge pull request #240 from kerberos-io/feature/add-issue-generator-workflow
feature/add-issue-generator-workflow
2026-02-16 11:58:06 +01:00
Cédric Verstraeten
04c568f488 Add workflow to create user story issues with customizable inputs 2026-02-16 11:54:07 +01:00
Cédric Verstraeten
e270223968 Merge pull request #238 from kerberos-io/fix/docker-build-release-action
fix/docker-build-release-action
2026-02-13 22:17:33 +01:00
cedricve
01ab1a9218 Disable build provenance in Docker builds
Add --provenance=false to docker build invocations in .github/workflows/release-create.yml (both default and arm64 steps) to suppress Docker provenance metadata during CI builds.
2026-02-13 22:16:23 +01:00
Cédric Verstraeten
6f0794b09c Merge pull request #237 from kerberos-io/feature/fix-quicktime-duration
feature/fix-quicktime-duration
2026-02-13 21:55:41 +01:00
cedricve
1ae6a46d88 Embed build version into binaries
Pass VERSION from CI into Docker builds and embed it into the Go binary via ldflags. Updated .github workflow to supply --build-arg VERSION for both architectures. Added ARG VERSION and logic in Dockerfile and Dockerfile.arm64 to derive the version from git (git describe --tags) or fall back to the provided build-arg, then set it with -X during go build. Changed VERSION in machinery/src/utils/main.go from a const to a var defaulting to "0.0.0" and documented that it is overridden at build time. This ensures released images contain the correct agent version while local/dev builds keep a sensible default.
2026-02-13 21:50:09 +01:00
cedricve
9d83cab5cc Set mdhd.Duration to 0 for fragmented MP4
Uncomment and explicitly set mdhd.Duration = 0 in machinery/src/video/mp4.go for relevant tracks (video H264/H265 and audio track). This ensures mdhd.Duration is zero for fragmented MP4 so players derive duration from fragments (avoiding QuickTime adding fragment durations and doubling the reported duration).
2026-02-13 21:46:32 +01:00
cedricve
6f559c2f00 Align MP4 headers to fragment durations
Compute actual video duration from SegmentDurations and ensure container headers reflect fragment durations. Set mvhd.Duration and mvex/mehd.FragmentDuration to the maximum of video (sum of segments) and audio durations so the overall mvhd matches the longest track. Use the summed segment duration for track tkhd.Duration and keep mdhd.Duration at 0 for fragmented MP4s (to avoid double-counting). Add a warning log when accumulated video duration differs from the recorded VideoTotalDuration. Harden fingerprint generation and private key handling with nil checks.

Add mp4_duration_test.go: unit test that creates a simulated H.264 fragmented MP4 (150 frames at 40ms), closes it, parses the output and verifies that mvhd/mehd and trun sample durations are consistent and that mdhd.Duration is zero.
2026-02-13 21:35:57 +01:00
cedricve
c147944f5a Convert MP4 timestamps to Mac HFS epoch
Add MacEpochOffset constant and convert mp4.StartTime to Mac HFS time for QuickTime compatibility. Compute macTime = mp4.StartTime + MacEpochOffset and use it for mvhd CreationTime/ModificationTime, as well as track tkhd and mdhd creation/modification timestamps for video and audio tracks. Also set mvhd Rate, Volume and NextTrackID. These changes ensure generated MP4s use QuickTime-compatible epoch and include proper mvhd metadata.
2026-02-13 21:01:45 +01:00
Cédric Verstraeten
e8ca776e4e Merge pull request #236 from kerberos-io/fix/debugging-lost-keyframes
fix/debugging-lost-keyframes
2026-02-11 16:51:07 +01:00
Cédric Verstraeten
de5c4b6e0a Merge branch 'master' into fix/debugging-lost-keyframes 2026-02-11 16:48:08 +01:00
Cédric Verstraeten
9ba64de090 add additional logging 2026-02-11 16:48:01 +01:00
Cédric Verstraeten
7ceeebe76e Merge pull request #235 from kerberos-io/fix/debugging-lost-keyframes
fix/debugging-lost-keyframes
2026-02-11 16:15:57 +01:00
Cédric Verstraeten
bd7dbcfcf2 Enhance FPS tracking and logging for keyframes in gortsplib and mp4 modules 2026-02-11 15:11:52 +00:00
Cédric Verstraeten
8c7a46e3ae Merge pull request #234 from kerberos-io/fix/fps-gop-size
fix/fps-gop-size
2026-02-11 15:05:31 +01:00
Cédric Verstraeten
57ccfaabf5 Merge branch 'fix/fps-gop-size' of github.com:kerberos-io/agent into fix/fps-gop-size 2026-02-11 14:59:34 +01:00
Cédric Verstraeten
4a9cb51e95 Update machinery/src/capture/gortsplib.go
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-02-11 14:59:15 +01:00
Cédric Verstraeten
ab6f621e76 Merge branch 'fix/fps-gop-size' of github.com:kerberos-io/agent into fix/fps-gop-size 2026-02-11 14:58:44 +01:00
Cédric Verstraeten
c365ae5af2 Ensure thread-safe closure of peer connections in InitializeWebRTCConnection 2026-02-11 13:58:29 +00:00
Cédric Verstraeten
b05c3d1baa Update machinery/src/capture/gortsplib.go
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-02-11 14:52:40 +01:00
Cédric Verstraeten
c7c7203fad Merge branch 'master' into fix/fps-gop-size 2026-02-11 14:48:05 +01:00
Cédric Verstraeten
d93f85b4f3 Refactor FPS calculation to use per-stream trackers for improved accuracy 2026-02-11 13:45:07 +00:00
Cédric Verstraeten
031212b98c Merge pull request #232 from kerberos-io/fix/fps-gop-size
fix/fps-gop-size
2026-02-11 14:27:18 +01:00
Cédric Verstraeten
a4837b3cb3 Implement PTS-based FPS calculation and GOP size adjustments 2026-02-11 13:14:29 +00:00
Cédric Verstraeten
77629ac9b8 Merge pull request #231 from kerberos-io/feature/improve-keyframe-interval
feature/improve-keyframe-interval
2026-02-11 12:28:33 +01:00
cedricve
59608394af Use Warning instead of Warn in mp4.go
Replace call to log.Log.Warn with log.Log.Warning in MP4.flushPendingVideoSample to match the logger API. This is a non-functional change that preserves the original message and behavior while using the correct logging method name.
2026-02-11 12:26:18 +01:00
cedricve
9dfcaa466f Refactor video sample flushing logic into a dedicated function 2026-02-11 11:48:15 +01:00
cedricve
88442e4525 Add pending video sample to segment before flush
Before flushing a segment when mp4.Start is true, add any pending VideoFullSample for the current video track to the current fragment. The change computes and updates LastVideoSampleDTS and VideoTotalDuration, adjusts the sample DecodeTime and Dur, calls AddFullSampleToTrack, logs errors, and clears VideoFullSample so the pending sample is included in the segment before starting a new one. This ensures segments contain all frames up to (but not including) the keyframe that triggered the flush.
2026-02-11 11:38:51 +01:00
Cédric Verstraeten
891ae2e5d5 Merge pull request #230 from kerberos-io/feature/improve-video-format
feature/improve-video-format
2026-02-10 17:25:23 +01:00
Cédric Verstraeten
32b471f570 Update machinery/src/video/mp4.go
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-02-10 17:20:40 +01:00
Cédric Verstraeten
5d745fc989 Update machinery/src/video/mp4.go
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-02-10 17:20:29 +01:00
Cédric Verstraeten
edfa6ec4c6 Update machinery/src/video/mp4.go
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-02-10 17:20:16 +01:00
Cédric Verstraeten
0c460efea6 Refactor PR description workflow to include organization variable and correct pull request URL format 2026-02-10 16:17:10 +00:00
Cédric Verstraeten
96df049e59 Enhance MP4 initialization by adding max recording duration parameter, improving placeholder size calculation for segments. 2026-02-10 15:59:59 +00:00
Cédric Verstraeten
2cb454e618 Merge branch 'master' into feature/improve-video-format 2026-02-10 16:57:47 +01:00
Cédric Verstraeten
7f2ebb655e Fix sidx.FirstOffset calculation and re-encode init segment for accurate MP4 structure 2026-02-10 15:56:10 +00:00
Cédric Verstraeten
63857fb5cc Merge pull request #229 from kerberos-io/feature/improve-video-format
feature/improve-video-format
2026-02-10 16:53:34 +01:00
Cédric Verstraeten
f4c75f9aa9 Add environment variables for PR number and project name in workflow 2026-02-10 15:31:37 +00:00
Cédric Verstraeten
c3936dc884 Enhance MP4 segment handling by adding segment durations and base decode times, improving fragment management and data integrity 2026-02-10 14:47:47 +00:00
Cédric Verstraeten
2868ddc499 Add fragment duration handling and improve MP4 segment management 2026-02-10 13:52:58 +00:00
Cédric Verstraeten
176610a694 Update mp4.go 2026-02-10 13:39:55 +01:00
Cédric Verstraeten
f60aff4fd6 Enhance MP4 closing process by adding final video and audio samples, ensuring data integrity and updating track metadata 2026-02-10 12:45:46 +01:00
Cédric Verstraeten
847f62303a Merge pull request #228 from kerberos-io/feature/improve-webrtc-tracing
feature/improve-webrtc-tracing
2026-01-23 15:22:45 +01:00
Cédric Verstraeten
f174e2697e Enhance WebRTC handling with connection management and error logging improvements 2026-01-23 14:16:55 +00:00
Cédric Verstraeten
acac2d5d42 Refactor main function to improve code structure and readability 2026-01-23 13:48:24 +00:00
Cédric Verstraeten
f304c2ed3e Merge pull request #219 from kerberos-io/fix/release-process
fix/release-process
2025-09-17 16:32:58 +02:00
cedricve
2003a38cdc Add release creation workflow with multi-arch Docker builds and artifact handling 2025-09-17 14:32:06 +00:00
Cédric Verstraeten
a67c5a1f39 Merge pull request #216 from kerberos-io/feature/upgrade-build-process-avoid-base
feature/upgrade-build-process-avoid-base
2025-09-11 16:22:53 +02:00
Cédric Verstraeten
b7a87f95e5 Update Docker workflow to use Ubuntu 24.04 and simplify build steps for multi-arch images 2025-09-11 15:00:37 +02:00
Cédric Verstraeten
0aa0b8ad8f Refactor build steps in PR workflow to streamline Docker operations and improve artifact handling 2025-09-11 14:09:22 +02:00
Cédric Verstraeten
2bff868de6 Update upload artifact action to v4 in PR build workflow 2025-09-11 13:45:34 +02:00
Cédric Verstraeten
8b59828126 Add steps to strip binary and upload artifact in PR build workflow 2025-09-11 13:39:27 +02:00
Cédric Verstraeten
f55e25db07 Remove Golang build steps from Dockerfiles for amd64 and arm64 2025-09-11 10:29:05 +02:00
Cédric Verstraeten
243c969666 Add missing go version check in Dockerfile build step 2025-09-11 10:26:54 +02:00
Cédric Verstraeten
ec7f2e0303 Update ARM64 build step to specify Dockerfile for architecture 2025-09-11 10:18:19 +02:00
Cédric Verstraeten
a4a032d994 Update GitHub Actions workflow and Dockerfiles for architecture support and dependency management 2025-09-11 10:17:51 +02:00
Cédric Verstraeten
0a84744e49 Remove arm-v6 architecture from build matrix in PR workflow 2025-09-09 14:38:51 +00:00
Cédric Verstraeten
1425430376 Update .gitignore to include __debug* and change Dockerfile base image to golang:1.24.5-bullseye 2025-09-09 14:36:32 +00:00
Cédric Verstraeten
ca8d88ffce Update GitHub Actions workflow to support multiple architectures in build matrix 2025-09-09 14:34:39 +00:00
Cédric Verstraeten
af3f8bb639 Add GitHub Actions workflow for pull request builds and update Dockerfile dependencies 2025-09-09 16:28:19 +02:00
Cédric Verstraeten
1f9772d472 Merge pull request #212 from kerberos-io/fix/ovrride-base-width
fix/ovrride-base-width
2025-08-12 07:05:43 +02:00
cedricve
94cf361b55 Reset baseWidth and baseHeight in StoreConfig function 2025-08-12 04:47:50 +00:00
cedricve
6acdf258e7 Fix typo in environment variable override function name 2025-08-11 21:10:33 +00:00
cedricve
cc0a810ab3 Handle both baseWidth and baseHeight in IPCamera config
Adds logic to set IPCamera BaseWidth and BaseHeight when both values are provided, instead of only calculating aspect ratio. Also fixes a typo in the function call to override configuration with environment variables.
2025-08-11 23:06:24 +02:00
Cédric Verstraeten
c19bfbe552 Merge pull request #211 from kerberos-io/feature/minimize-sd-view-image
feature/minimize-sd-view-image
2025-08-11 12:30:01 +02:00
Cédric Verstraeten
39aaf5ad6c Merge branch 'feature/minimize-sd-view-image' of github.com:kerberos-io/agent into feature/minimize-sd-view-image 2025-08-11 10:25:31 +00:00
Cédric Verstraeten
6fba2ff05d Refactor logging in gortsplib and mp4 modules to use Debug and Error levels; update free box size in MP4 initialization 2025-08-11 10:20:37 +00:00
Cédric Verstraeten
d78e682759 Update config.json 2025-08-11 11:39:45 +02:00
Cédric Verstraeten
ed582a9d57 Resize polygon coordinates based on IPCamera BaseWidth and BaseHeight configuration 2025-08-11 09:38:24 +00:00
Cédric Verstraeten
aa925d5c9b Add BaseWidth and BaseHeight configuration options for IPCamera; update resizing logic in RunAgent and websocket handlers 2025-08-11 09:23:11 +00:00
Cédric Verstraeten
08d191e542 Update image resizing to support dynamic height; modify related functions and configurations 2025-08-11 08:08:39 +00:00
Cédric Verstraeten
cc075d7237 Refactor IPCamera configuration to include BaseWidth and BaseHeight; update image resizing logic to use dynamic width based on configuration 2025-08-06 14:42:23 +00:00
Cédric Verstraeten
1974bddfbe Merge pull request #210 from kerberos-io/feature/minimize-sd-view-image
feature/minimize-sd-view-image
2025-07-30 15:42:06 +02:00
Cédric Verstraeten
12cb88e1c1 Replace fmt.Println with log.Log.Debug for buffer size in ImageToBytes function 2025-07-30 13:34:14 +00:00
Cédric Verstraeten
c054526998 Add image resizing functionality and update dependencies
- Introduced ResizeImage function to resize images before encoding.
- Updated ImageToBytes function to accept pointer to image.
- Added nfnt/resize library for image resizing.
- Updated go.mod and go.sum to include new dependencies.
- Updated image processing in HandleLiveStreamSD, GetSnapshotRaw, and other functions to use resized images.
- Updated yarn.lock for ui package version change.
2025-07-30 12:06:12 +00:00
Cédric Verstraeten
ffa97598b8 Merge pull request #208 from kerberos-io/feature/increase-chunk-size
feature/increase-chunk-size
2025-07-14 10:07:43 +02:00
cedricve
f5afbf3a63 Add sleep intervals in HandleLiveStreamSD to prevent MQTT flooding 2025-07-14 08:01:35 +00:00
cedricve
e666695c96 Disable live view chunking in configuration and adjust HandleLiveStreamSD function accordingly 2025-07-14 07:59:04 +00:00
Cédric Verstraeten
55816e4b7b Merge pull request #207 from kerberos-io/feature/increase-chunk-size
feature/increase-chunk-size
2025-07-13 22:34:20 +02:00
cedricve
016fb51951 Increase chunk size for live stream handling from 2KB to 25KB 2025-07-13 20:28:32 +00:00
Cédric Verstraeten
550a444650 Merge pull request #206 from kerberos-io/feature/configurable-chunking
feature/configurable-chunking
2025-07-13 22:15:55 +02:00
Cédric Verstraeten
4332e43f27 Update machinery/src/cloud/Cloud.go
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-07-13 22:11:49 +02:00
cedricve
fdc3bfb4a4 Add live view chunking configuration to capture settings 2025-07-13 19:47:07 +00:00
cedricve
c17d6b7117 Implement live view chunking configuration for HandleLiveStreamSD function 2025-07-13 19:34:00 +00:00
cedricve
5d7a8103c0 Add Liveview chunking configuration and update WebRTC SDP handling 2025-07-13 19:33:13 +00:00
Cédric Verstraeten
5d7cb98b8f Merge pull request #205 from kerberos-io/feature/upgrade-version
Update main.go
2025-07-13 20:48:58 +02:00
Cédric Verstraeten
f6046c6a6c Update main.go 2025-07-13 20:48:45 +02:00
Cédric Verstraeten
f59f9d71a9 Merge pull request #204 from kerberos-io/feature/jpeg-resolution-chunking
feature/jpeg-resolution-chunking
2025-07-13 20:46:03 +02:00
cedricve
ff72f9647d Update chunk size definition in HandleLiveStreamSD for clarity 2025-07-13 18:21:22 +00:00
cedricve
fa604b16cf Enhance MQTT message structure and logging: add version field to Payload and improve chunked image handling in HandleLiveStreamSD 2025-07-13 16:35:06 +00:00
Cédric Verstraeten
0342869733 Merge pull request #200 from kerberos-io/fix/continue-on-wrong-start-time
fix/continue-on-wrong-start-time
2025-07-05 20:34:31 +02:00
cedricve
8685ce31a2 Add logging for zero startRecording state in HandleRecordStream 2025-07-05 18:31:35 +00:00
Cédric Verstraeten
0e259f0e7a Merge pull request #199 from kerberos-io/feature/new-method-to-calc-pre-recording-start-time
Feature/new method to calc pre recording start time
2025-07-05 17:08:38 +02:00
cedricve
5823abed95 Remove unused DTS extraction code and video stream handling in HandleRecordStream 2025-07-05 15:05:22 +00:00
cedricve
86acff58f0 Refactor HandleRecordStream to improve recording timestamp management and ensure accurate handling of startRecording and motion detection logic 2025-07-05 14:56:24 +00:00
cedricve
d3fc5d4c29 Enhance max recording period calculation in HandleRecordStream to ensure it accommodates preRecording and postRecording values correctly 2025-07-05 14:39:48 +00:00
cedricve
50bb40938c Adjust max recording period checks in HandleRecordStream for improved timing accuracy 2025-07-05 14:32:05 +00:00
cedricve
1977d98ad9 Add CurrentTime field to Packet struct and update HandleRecordStream to use it 2025-07-05 14:24:52 +00:00
Cédric Verstraeten
448d4a946d Merge pull request #198 from kerberos-io/feature/fix-prerecording-duraiton
feature/fix-prerecording-duration
2025-07-04 16:57:01 +02:00
Cédric Verstraeten
61ac314bb7 Fix pre-recording time calculation logic in HandleRecordStream to handle initial recording case correctly 2025-07-04 14:44:13 +00:00
Cédric Verstraeten
c1b144ca28 Fix pre-recording time calculation by adjusting queued packets handling in HandleRecordStream 2025-07-04 14:37:22 +00:00
Cédric Verstraeten
e16987bf9d Refactor HandleRecordStream to improve pre-recording time calculation and adjust display time logic based on available queued packets. 2025-07-04 11:18:46 +00:00
Cédric Verstraeten
9991597984 Merge pull request #197 from kerberos-io/feature/add-duration-to-recordings
feature/add-duration-to-recordings
2025-07-04 09:18:07 +02:00
cedricve
2c0314cea4 Refactor HandleRecordStream to improve file renaming logic and enhance motion detection handling 2025-07-04 06:23:09 +00:00
cedricve
0584e52b98 Refactor HandleRecordStream to optimize pre-recording time calculation and streamline video stream handling 2025-07-03 20:34:18 +00:00
cedricve
1fc90eaee2 Refactor pre-recording time calculation and improve display time logic for better recording accuracy 2025-07-03 20:04:00 +00:00
cedricve
aef3eacbc9 Enhance pre-recording time calculation by incorporating GOP size and FPS; adjust display time and recording conditions based on pre-recording delta. 2025-07-03 17:51:46 +00:00
cedricve
2843568473 Refactor GOP size handling and enhance queue management for improved recording performance 2025-07-03 17:31:37 +00:00
Cédric Verstraeten
53ffc8cae0 Add GOP size configuration and enhance pre-recording handling for improved stream management 2025-07-02 13:28:02 +00:00
Cédric Verstraeten
86e654fe19 Add GOP size tracking and keyframe interval management for improved video processing 2025-07-02 10:51:23 +00:00
Cédric Verstraeten
46d57f7664 Enhance FPS calculation by adding timestamp-based averaging and improved SPS handling; implement debug logging for SPS information. 2025-07-02 09:53:47 +00:00
Cédric Verstraeten
963d8672eb Enhance recording process by adding display time calculation and logging for better tracking; add error handling for MP4 file creation when no samples are present. 2025-07-02 08:54:34 +00:00
Cédric Verstraeten
9b7a62816a Update mp4.go 2025-07-02 09:54:12 +02:00
Cédric Verstraeten
237134fe0e Update recording filename generation to include duration and motion rectangle for improved clarity 2025-07-01 15:03:01 +00:00
Cédric Verstraeten
c8730e8f26 Enhance recording filename generation to include motion rectangle and duration for improved clarity and uniqueness 2025-07-01 12:54:52 +00:00
Cédric Verstraeten
acbbe8b444 Enhance recording filename generation to include milliseconds and its length for improved uniqueness 2025-07-01 12:48:34 +00:00
Cédric Verstraeten
f690016aa5 Refactor motion detection to include motion rectangle and update logging levels for sample addition in MP4 track 2025-07-01 12:37:44 +00:00
Cédric Verstraeten
396cfe5d8b Merge pull request #191 from kerberos-io/feature/migrate-to--mp4ff
feature/Add MP4 video handling and update IPCamera configuration
2025-06-24 13:39:56 +02:00
Cédric Verstraeten
39fe640ccf Refactor logging in AddSampleToTrack method to use structured logging 2025-06-23 10:21:02 +00:00
Cédric Verstraeten
d389c9b0b6 Add logging for sample addition in MP4 track 2025-06-23 10:07:30 +00:00
Cédric Verstraeten
b149686db8 Remove Bento4 build steps and clean up Dockerfile structure 2025-06-23 09:57:04 +00:00
Cédric Verstraeten
c4358cbfad Fix typo in IPCamera struct: update VPSNALUs field JSON tag from "pps_nalus" to "vps_nalus" 2025-06-23 09:03:00 +00:00
Cédric Verstraeten
cfc5bd3dfe Remove unused audio stream retrieval in HandleRecordStream function 2025-06-23 07:58:39 +00:00
Cédric Verstraeten
c29c1b6a92 Merge branch 'master' into feature/migrate-to--mp4ff 2025-06-23 09:55:31 +02:00
Cédric Verstraeten
0f45a2a4b4 Merge branch 'feature/migrate-to--mp4ff' of github.com:kerberos-io/agent into feature/migrate-to--mp4ff 2025-06-23 09:54:41 +02:00
Cédric Verstraeten
92edcc13c0 Refactor OpenTelemetry tracing integration in RTSP client and components for improved context handling 2025-06-23 07:54:34 +00:00
cedricve
5392e2ba90 Update Dockerfile to remove incorrect source path and add Bento4 build process 2025-06-22 19:46:03 +00:00
cedricve
79e1f659c7 Update mongo-driver dependency from v1.17.4 to v1.17.3 to maintain compatibility 2025-06-21 20:13:38 +00:00
cedricve
bf35e5efb6 Implement OpenTelemetry tracing in the agent
- Added OpenTelemetry tracing support in main.go, including a new function startTracing to initialize the tracer with a configurable endpoint.
- Updated the environment attribute from "testing" to "develop" for better clarity in tracing.
- Integrated tracing into the RTSP connection process in gortsplib.go by creating a span for the Connect method.
- Enhanced the Bootstrap function in Kerberos.go to include tracing, marking the start and end of the bootstrap process.
- Introduced a new span in RunAgent to trace the execution flow and ensure proper span management.
2025-06-20 09:35:13 +00:00
Cédric Verstraeten
c50137f255 Comment out OpenTelemetry tracing initialization in main.go to simplify the codebase and remove unused functionality. 2025-06-16 10:30:02 +00:00
Cédric Verstraeten
f12da749b2 Remove OpenTelemetry tracing code from main.go and Kerberos.go files to simplify the codebase and eliminate unused dependencies. 2025-06-16 10:08:55 +00:00
Cédric Verstraeten
a166083423 Update machinery/src/packets/stream.go
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-06-16 10:20:43 +02:00
Cédric Verstraeten
b400d4e773 Refactor Dockerfile build commands to streamline Go build process and improve clarity 2025-06-16 06:42:08 +00:00
Cédric Verstraeten
120054d3e5 Add SampleRate and Channels fields to IPCamera configuration and update audio stream handling 2025-06-16 06:37:19 +00:00
cedricve
620117c31b Refactor WriteToTrack to use updated PacketTimestamp for video and audio samples, improving synchronization accuracy. 2025-06-07 22:12:15 +00:00
cedricve
4e371488c1 Remove unnecessary copy of mp4fragment in Dockerfile, streamlining the agent setup process. 2025-06-07 21:22:49 +00:00
cedricve
b154b56308 Refactor Dockerfile to remove CGO_ENABLED=0 from build command, simplifying the build process for the agent. 2025-06-07 21:17:25 +00:00
cedricve
6d92817237 Refactor HandleRecordStream to adjust maxRecordingPeriod calculation for improved timing accuracy. Simplify mp4 segment encoding logic to ensure it always attempts to encode the last segment, enhancing error handling. 2025-06-07 12:30:42 +00:00
cedricve
b8c1855830 Refactor HandleRecordStream to use milliseconds for timing calculations, improving accuracy in recording periods and motion detection logic. Update mp4 encoding to ensure segment encoding only occurs if a segment exists, preventing potential panics. 2025-06-07 11:53:03 +00:00
cedricve
a9f7ff4b72 Refactor HandleRecordStream to remove unused mp4.Movmuxer and streamline video sample handling with mp4Video, enhancing recording process and error logging. 2025-06-07 06:26:23 +00:00
Cédric Verstraeten
b3cd080e14 Refactor Dockerfile and main.go to enhance build process and streamline video handling 2025-06-06 15:14:45 +00:00
Cédric Verstraeten
bfde87f888 Refactor WriteToTrack to improve sample handling by using last processed audio and video samples, enhancing buffer duration calculation and streamlining packet processing. 2025-06-06 14:36:19 +00:00
Cédric Verstraeten
c4453bb8b3 Fix packet handling in WriteToTrack to ensure proper processing of next packets on timeout and empty data 2025-06-06 13:36:30 +00:00
Cédric Verstraeten
40f65a30b3 Clarify audio transcoding process in WriteToTrack with detailed comments on AAC to PCM_MULAW conversion 2025-06-06 13:33:28 +00:00
Cédric Verstraeten
5361de63e0 Refactor packet handling in WriteToTrack to improve buffer duration calculation and streamline packet reading 2025-06-06 13:23:09 +00:00
Cédric Verstraeten
3a8552d362 Enhance MP4 handling by updating track IDs in fragment creation, improving H264 and H265 NAL unit conversion, and adding support for HVC1 compatible brands in the ftyp box 2025-06-05 14:48:19 +00:00
Cédric Verstraeten
d3840103fc Add VPS NALUs support in IPCamera configuration and MP4 handling for improved video processing 2025-06-05 13:28:10 +00:00
Cédric Verstraeten
d12a9f0612 Refactor MP4 handling by simplifying Close method and adding last sample DTS tracking for better audio and video sample management 2025-06-05 10:59:44 +00:00
cedricve
c0d74f7e09 Remove placeholder comments from AddSampleToTrack and Close methods for cleaner code 2025-06-04 19:23:48 +00:00
cedricve
8ebea9e4c5 Refactor MP4 struct by removing unused video and audio fragment fields, and enhance track handling in Close method for better audio and subtitle track management 2025-06-04 19:03:58 +00:00
cedricve
89269caf92 Refactor AddSampleToTrack and SplitAACFrame methods to enhance audio sample handling and improve error logging 2025-06-04 18:36:00 +00:00
Cédric Verstraeten
0c83170f51 Fix AAC descriptor index in Close method to ensure correct audio track setup 2025-06-04 13:15:08 +00:00
Cédric Verstraeten
6081cb4be9 Update mp4.go 2025-06-04 14:39:44 +02:00
Cédric Verstraeten
ea1dbb3087 Refactor AddSampleToTrack method to improve AAC frame handling by splitting frames and updating duration calculations for audio samples 2025-06-04 09:49:29 +00:00
Cédric Verstraeten
0523208d36 Update mp4.go 2025-06-04 11:28:16 +02:00
Cédric Verstraeten
919f21b48b Refactor AddSampleToTrack method to create separate video and audio fragments, enhancing sample handling and improving error logging for AAC frames 2025-06-04 08:45:54 +00:00
cedricve
2c1c10a2ac Refactor AddSampleToTrack and Close methods to improve sample handling and track management for video and audio 2025-06-03 20:33:00 +00:00
cedricve
7e3320b252 Refactor AddSampleToTrack method to remove duration parameter and enhance fragment handling for video and audio tracks 2025-06-03 19:18:16 +00:00
Cédric Verstraeten
35ccac8b65 Refactor MP4 fragment handling in AddSampleToTrack method to separate video and audio fragments for improved track management 2025-06-03 13:29:36 +00:00
Cédric Verstraeten
dad8165d11 Enhance sample handling in AddSampleToTrack method to support multiple packets and improve error logging 2025-06-03 12:30:03 +00:00
Cédric Verstraeten
ba54188de2 Refactor video and audio track handling in MP4 structure to store track names and return track IDs for better management 2025-06-03 10:23:14 +00:00
cedricve
3b440c9905 Add audio and video codec detection in HandleRecordStream function 2025-06-03 06:27:25 +00:00
cedricve
42b98b7f20 Update mp4.go 2025-06-03 08:25:51 +02:00
cedricve
ba3312b57c Refactor AddSampleToTrack method to return error instead of panicking for better error handling 2025-06-03 05:55:23 +00:00
cedricve
223ba255e9 Fix signature handling in MP4 closing logic to ensure valid signatures are used for fingerprint 2025-06-02 17:45:05 +00:00
Cédric Verstraeten
a1df2be207 Implement signing feature with default private key configuration and update MP4 closing logic to include fingerprint signing 2025-06-02 16:02:06 +00:00
Cédric Verstraeten
d7f225ca73 Add signing configuration placeholder to the agent's config 2025-06-02 14:08:47 +00:00
Cédric Verstraeten
b3cfabb5df Update signing configuration to use private key for recording validation 2025-06-02 14:06:16 +00:00
Cédric Verstraeten
5310dd4550 Add signing configuration options to the agent 2025-06-02 13:50:48 +00:00
Cédric Verstraeten
cde7dbb58a Add configuration options for signing recordings and public key usage 2025-06-02 13:41:15 +00:00
Cédric Verstraeten
65e68231c7 Refactor MP4 handling in capture and video modules
- Updated the HandleRecordStream function to use TimeLegacy for packet timestamps instead of the previous Time conversion method.
- Modified the MP4 struct to replace InitSegment with a list of MediaSegments, allowing for better management of segments.
- Introduced StartTime to the MP4 struct to track the creation time of the MP4 file.
- Enhanced the Close method in the MP4 struct to properly handle segment indexing (SIDX) and ensure accurate duration calculations.
- Implemented helper functions to fill SIDX boxes and find segment data, improving the overall structure and readability of the code.
2025-06-02 12:27:22 +00:00
Cédric Verstraeten
5502555869 Integrate OpenTelemetry tracing in main and components, enhancing observability 2025-06-02 07:30:49 +00:00
cedricve
ad6e7e752f Refactor MP4 handling to remove commented-out track additions and enhance moov box management 2025-06-02 07:15:24 +00:00
cedricve
63af4660ef Refactor MP4 initialization and closing logic to improve segment handling and add custom UUID support 2025-06-01 20:07:36 +00:00
cedricve
24fc340001 Refactor MP4 initialization and sample addition logic to enhance duration handling and segment management 2025-05-30 19:06:56 +00:00
cedricve
78d786b69d Add custom UUID box and enhance MP4 file closing logic 2025-05-29 10:14:43 +00:00
cedricve
756aeaa0eb Refactor MP4 handling to improve sample addition and duration calculation 2025-05-28 18:36:34 +00:00
cedricve
055fb67d7a Update mp4.go 2025-05-26 21:59:23 +02:00
cedricve
bee522a6bf Refactor MP4 handling to improve sample addition and segment management 2025-05-26 06:00:17 +00:00
Cédric Verstraeten
3fbf59c622 Merge pull request #192 from kerberos-io/fix/do-not-add-aac-track
fix/add audio codec handling in HandleRecordStream function
2025-05-22 21:07:28 +02:00
cedricve
abd8b8b605 Add audio codec handling in HandleRecordStream function 2025-05-22 18:33:13 +00:00
cedricve
abdad47bf3 Add MP4 video handling and update IPCamera configuration
- Introduced a new video package with MP4 struct for video file handling.
- Updated IPCamera struct to include SPS and PPS NALUs.
- Enhanced stream handling in the capture process to utilize the new MP4 library.
- Added stream index management for better tracking of video and audio streams.
2025-05-22 05:53:33 +00:00
Cédric Verstraeten
d2c24edf5d Merge pull request #190 from kerberos-io/feature/update-workflow-do-not-push-to-latest
Update Docker build workflow to use input tag for image naming
2025-05-20 16:05:04 +02:00
Cédric Verstraeten
22f4a7f119 Update Docker build workflow to use input tag for image naming 2025-05-20 14:03:44 +00:00
Cédric Verstraeten
a25d3d32e4 Merge pull request #189 from kerberos-io/feature/allow-release-workflow-to-triggered-manually
feature/Enhance release workflow to include tag input for Docker image
2025-05-20 14:46:26 +02:00
Cédric Verstraeten
ed68c32e04 Enhance release workflow to include tag input for Docker image 2025-05-20 12:45:52 +00:00
Cédric Verstraeten
4114b3839a Merge pull request #187 from kerberos-io/upgrade/base-image
Update base image version in Dockerfile
2025-05-19 15:22:36 +02:00
Cédric Verstraeten
3f73c009fd Update Dockerfile
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-05-19 15:15:33 +02:00
Cédric Verstraeten
02fb70c76e Update base image version in Dockerfile 2025-05-19 14:52:28 +02:00
Cédric Verstraeten
aaddcb854d Merge pull request #185 from kerberos-io/feature/retry-windows-secondary-vault
Feature/retry windows secondary vault
2025-05-17 21:40:58 +02:00
cedricve
e73c7a6ecc Remove kstorageRetryPolicy from configuration 2025-05-17 19:37:07 +00:00
cedricve
1dc2202f37 Enhance logging for secondary Kerberos Vault upload process 2025-05-17 19:29:35 +00:00
cedricve
ac710ae1f5 Fix typo in Kerberos Vault max retries translation key 2025-05-17 19:16:27 +00:00
cedricve
f5ea82ff03 Add Kerberos Vault settings for max retries and timeout configuration 2025-05-17 19:14:02 +00:00
cedricve
ef52325240 Update Kerberos Vault configuration for max retries and timeout; adjust upload delay 2025-05-17 08:37:40 +00:00
cedricve
354855feb1 Refactor Kerberos Vault configuration for retry policy consistency 2025-05-17 08:23:32 +00:00
cedricve
c4cd25b588 Add Kerberos Vault configuration options and retry policy support 2025-05-17 08:21:28 +00:00
cedricve
dbb870229e Update config.json 2025-05-16 19:00:33 +02:00
cedricve
a66fe8c054 Merge branch 'master' into feature/retry-windows-secondary-vault 2025-05-16 19:00:13 +02:00
Cédric Verstraeten
2352431c79 Merge pull request #184 from kerberos-io/upgrade/gortsplib
upgrade/dependencies
2025-05-16 18:54:45 +02:00
cedricve
49bc168812 Refactor code structure for improved readability and maintainability 2025-05-16 15:53:40 +00:00
cedricve
98f1ebf20a Add retry policy for Kerberos Vault uploads and update configuration model 2025-05-16 15:50:59 +00:00
cedricve
65feb6d182 Add initial configuration file for agent settings 2025-05-15 12:20:04 +00:00
cedricve
58555d352f Update .gitignore and launch.json to reference .env.local instead of .env 2025-05-15 10:42:01 +00:00
Cédric Verstraeten
839a177cf0 Merge branch 'master' into feature/retry-windows-secondary-vault 2025-05-14 14:57:53 +02:00
Cédric Verstraeten
404517ec40 Merge pull request #183 from kerberos-io/cedricve-patch-1
Create .env
2025-05-14 14:56:46 +02:00
Cédric Verstraeten
035bd18bc2 Create .env 2025-05-14 14:56:31 +02:00
Cédric Verstraeten
8bf7a0d244 Update devcontainer.json 2025-05-14 14:53:41 +02:00
Cédric Verstraeten
607d8fd0d1 Merge pull request #182 from kerberos-io/feature/retry-windows-secondary-vault
Remove .env + config file, we will manually add as these are part of the .gitignore
2025-05-14 14:52:15 +02:00
Cédric Verstraeten
12807e289c remove .env + config file, we will manually add as these are part of the .gitignore 2025-05-14 14:36:16 +02:00
Cédric Verstraeten
3a984f1c73 Merge pull request #180 from kerberos-io/fix/merge-secondary-kerberos-vault-settings
Add support for secondary Kerberos Vault settings in configuration
2025-04-27 21:23:55 +02:00
cedricve
b84e34da06 Add support for secondary Kerberos Vault settings in configuration 2025-04-27 21:21:00 +02:00
Cédric Verstraeten
541d151570 Merge pull request #179 from kerberos-io/fix/omit-blank-kstorage
Make KStorage fields optional in JSON and BSON serialization
2025-04-27 20:53:18 +02:00
cedricve
4ad97e1286 Make KStorage fields optional in JSON and BSON serialization 2025-04-27 18:46:14 +00:00
Kilian
a80b375e89 Update README.md 2025-04-25 13:04:28 +02:00
Cédric Verstraeten
91cb390f6e Merge pull request #178 from kerberos-io/fix/secondary-vault-initialization
Fix/Add kstorage_secondary configuration field and initialize in environment vars
2025-04-24 11:59:39 +02:00
Cédric Verstraeten
90780dae28 Add kstorage_secondary configuration field and initialize in environment variable overrides 2025-04-24 09:56:53 +00:00
Cédric Verstraeten
ddb08e90e1 Merge pull request #176 from kerberos-io/feature/add-secondary-kerberos-vault
Add secondary KStorage for fallback (hybrid scenario)
2025-04-24 11:23:14 +02:00
Cédric Verstraeten
0d95026819 Add loading state for secondary persistence verification in Settings UI 2025-04-24 08:58:33 +00:00
Cédric Verstraeten
79db3a9dfe Add support for secondary Kerberos Vault configuration in environment variable overrides 2025-04-24 07:20:12 +00:00
cedricve
9f63ffd540 Add secondary persistence verification and UI integration 2025-04-23 20:30:11 +00:00
cedricve
9c7116a462 Add secondary persistence verification and UI integration 2025-04-23 15:24:24 +00:00
cedricve
dd9b4d43ac Update development API URLs to use port 8080 2025-04-23 16:19:56 +02:00
Cédric Verstraeten
aa63eca24c Add persistence configuration inputs for Kerberos Vault in Settings 2025-04-23 12:46:43 +00:00
Cédric Verstraeten
6df97171d9 Merge pull request #177 from kerberos-io/fix/increase-channel-size-for-audio-motion-hdhandshake
Fix/ Increase channel buffer sizes for communication handling
2025-04-23 13:24:52 +02:00
Cédric Verstraeten
56f7d69b3d Increase channel buffer sizes for communication handling 2025-04-23 11:05:16 +00:00
Cédric Verstraeten
3e2b29284e Add secondary KStorage configuration to the Config struct 2025-04-18 12:11:37 +00:00
Cédric Verstraeten
18ceca7510 Merge pull request #175 from kerberos-io/fix/remote-region-not-properly-calculated
Fix/Remote config / region not properly calculated
2025-04-18 14:06:37 +02:00
Cédric Verstraeten
5a08d1f3de Update main.go 2025-04-18 14:03:23 +02:00
Cédric Verstraeten
18af6db00c Update base image in Dockerfile to version af04230 2025-04-15 13:17:07 +00:00
Cédric Verstraeten
6d170c8dc0 change to version 1.24 + change workflow name 2025-04-15 15:00:51 +02:00
Cédric Verstraeten
9c4c3c654d Update main.go 2025-04-15 14:23:56 +02:00
Cédric Verstraeten
6952e387f4 Merge pull request #172 from kerberos-io/improvement/cleanup-and-refactors
Enhancement / Cleanup and refactoring of documentation
2025-04-15 14:15:57 +02:00
Cédric Verstraeten
66c9ae5c27 Merge pull request #174 from kerberos-io/feature/webrtc-handle-nacks
Feature / enable interceptors for NACK and retransmission of packets
2025-04-15 14:15:19 +02:00
Cédric Verstraeten
0fb7601dcb Update main.go 2025-04-15 13:49:18 +02:00
Cédric Verstraeten
07c6e680d1 Create default .env 2025-04-14 13:15:18 +02:00
cedricve
b972bc3040 Fix: Convert audio type to mpeg4audio.ObjectType in WriteMPEG4Audio function
Updated the WriteMPEG4Audio function to convert the audio type from forma.Config.Type to mpeg4audio.ObjectType. This change ensures that the correct object type is used when creating ADTSPacket instances for MPEG-4 audio.
2025-04-13 17:28:41 +00:00
cedricve
969d42dbca Remove Travis CI configuration and build script 2025-04-13 07:54:20 +00:00
Cédric Verstraeten
6680df9382 Merge pull request #171 from kerberos-io/feature/improve-dev-container
Improvement / Refactor Dockerfile and devcontainer configuration;
2025-04-13 09:52:20 +02:00
cedricve
8877157db5 Refactor Dockerfile and devcontainer configuration; add FFmpeg and Node.js installation 2025-04-13 07:46:06 +00:00
Cédric Verstraeten
ac814dc357 Merge pull request #168 from thanhtantran/master
add Vietnamese locales
2025-04-11 21:24:14 +02:00
Orange Pi Vietnam
4fcb12c3a3 Update translation.json
remove duplicate
2025-04-08 22:31:25 +07:00
Tony Tran
7bcc30f4b7 add Vietnamese locales 2025-02-27 15:06:41 +00:00
Cédric Verstraeten
481f917fcf Merge pull request #166 from kerberos-io/fix/candidategather-extended-buffer
Fix ice candidate gather and extended candidate buffer
2025-02-11 21:56:49 +01:00
Cedric Verstraeten
700a32e4c8 Update main.go 2025-02-11 21:51:14 +01:00
Cedric Verstraeten
b5a72d904e do allow more candidates even after connection state, remove sdpmid + remove sleep and unused code. 2025-02-11 19:58:36 +01:00
Cedric Verstraeten
cf3e491462 upgrade bento4 2025-02-11 12:12:04 +01:00
Cedric Verstraeten
6068705c07 Merge branch 'master' of https://github.com/kerberos-io/agent 2025-02-09 19:28:03 +01:00
Cedric Verstraeten
37beaa64d7 delay sending response answer 2025-02-09 19:28:01 +01:00
Cédric Verstraeten
8c5b03487b Merge pull request #165 from kerberos-io/fix/iceconnection-state-event
Correct peerconnection states + proper cleanup peerconnection
2025-02-09 11:50:35 +01:00
Cedric Verstraeten
360ae0c0db correct peerconnection states 2025-02-09 11:01:54 +01:00
Cédric Verstraeten
6aad8b7b35 Merge pull request #164 from kerberos-io/fix/webrtc-packettimestamp
Fix / WebRTC skip AAC audio + introduce packet timestamps
2025-02-06 10:38:40 +01:00
Cedric Verstraeten
9ce037fdc0 Update main.go 2025-02-06 10:35:36 +01:00
Cedric Verstraeten
0eb77ccd16 Update main.go 2025-02-06 10:15:47 +01:00
Cedric Verstraeten
fb876bd216 Update main.go 2025-02-06 08:32:30 +01:00
Cédric Verstraeten
865aec88fc Merge pull request #163 from kerberos-io/fix/webrtc-sample-timing
Fix / WebRTC sample timing
2025-02-06 08:32:04 +01:00
Cedric Verstraeten
9792bdf494 Update main.go 2025-02-06 08:28:17 +01:00
Cedric Verstraeten
d836e89e7f upgrade to v3.3.3 2025-02-05 20:55:08 +01:00
Cédric Verstraeten
53a52b3594 Merge pull request #162 from kerberos-io/fix/webrtc-sample-duration
fix / revert to old webrtc sample format (duration instead of packettimestamp)
2025-02-05 20:54:18 +01:00
Cedric Verstraeten
ba6ce25b21 revert to old webrtc sample format (duration instead of packettimestamp) 2025-02-05 20:50:57 +01:00
Cedric Verstraeten
8c9e18475f Update README.md 2025-01-31 16:35:11 +01:00
Cédric Verstraeten
4548d5328b Merge pull request #158 from kerberos-io/fix/force-mqtt-tos-level-2
Enabled TOS 2 for MQTT to ensure higher quality
2025-01-26 20:27:32 +01:00
Cedric Verstraeten
da870fe890 undo file 2025-01-26 20:23:14 +01:00
Cedric Verstraeten
66b660e688 enabled TOS 2 2025-01-26 20:17:04 +01:00
Cédric Verstraeten
08f8ca78d6 Merge pull request #157 from kerberos-io/upgrade/3.3.1
Upgrade to 3.3.1
2025-01-24 13:43:53 +01:00
Cedric Verstraeten
1e61e99005 Update main.go 2025-01-23 16:56:00 +01:00
Cédric Verstraeten
c272e1ab5c Merge pull request #155 from kerberos-io/upgrade/onvif-stable
Stable release onvif v1.0.0
2025-01-19 11:03:30 +01:00
Cedric Verstraeten
5cff11c0af upgrade onvif v1.0.0 2025-01-19 10:50:37 +01:00
Cédric Verstraeten
28b213779f Merge pull request #154 from kerberos-io/fix/memory-leak-onvif
Memory leak on SendSoap ONVIF library
2025-01-16 21:55:16 +01:00
Cedric Verstraeten
666ff202ad update go.sum 2025-01-16 21:49:27 +01:00
Cédric Verstraeten
9cb3c9753a Merge pull request #153 from kerberos-io/feature/global-decoder
Initiate decoders globally
2025-01-16 21:47:26 +01:00
Cedric Verstraeten
c4577e94b1 improve closing of responses 2025-01-16 21:46:19 +01:00
Cedric Verstraeten
9756183d3b upgrade onvif dependency 2025-01-16 21:46:02 +01:00
Cedric Verstraeten
83c65fe3d8 Merge branch 'master' into feature/global-decoder 2025-01-16 08:05:55 +01:00
Cedric Verstraeten
e6717c87cd update secrets 2025-01-16 08:05:46 +01:00
Cedric Verstraeten
5a3c1d6c9d Create pr-description.yaml 2025-01-16 08:05:05 +01:00
Cedric Verstraeten
81045ea955 Update gortsplib.go 2025-01-15 21:51:23 +01:00
Cedric Verstraeten
9f9fe3bd37 Update gortsplib.go 2025-01-15 21:48:36 +01:00
Cedric Verstraeten
84f7f844c9 Update Server.go 2025-01-15 16:53:35 +01:00
Cédric Verstraeten
4fde419db9 Merge pull request #151 from kerberos-io/fix/pullpoint-crash
Verify if device is nil, if so do not proceed (avoid panic)
2025-01-06 08:50:00 +01:00
Cédric Verstraeten
78cad6cf06 Merge pull request #152 from kerberos-io/feature/upgrade-dependencies
Upgrade dependencies
2025-01-06 08:49:51 +01:00
Cedric Verstraeten
4763e5a92e Update Dockerfile 2025-01-04 19:53:18 +01:00
Cedric Verstraeten
50939ee4ce upgrade dependencies 2025-01-04 19:49:02 +01:00
Cédric Verstraeten
884bc2acc1 Update gortsplib.go 2025-01-04 13:21:26 +01:00
Cédric Verstraeten
11fd041fa9 Update gortsplib.go 2025-01-04 13:04:02 +01:00
Cedric Verstraeten
a6d5c2b614 Verify if device is nil, if so do not proceed (avoid panic) 2025-01-03 23:08:39 +01:00
Cédric Verstraeten
9e3d705c6f Merge pull request #150 from kerberos-io/fix/align-pts2-webrtc
Fix/align pts2 webrtc
2025-01-02 17:20:59 +01:00
Cedric Verstraeten
1004731903 increase gop size 2025-01-02 17:18:06 +01:00
Cedric Verstraeten
9f2ec91688 Update main.go 2025-01-02 16:56:58 +01:00
Cedric Verstraeten
185135ed94 add legacy timing for MP4 2025-01-02 16:55:58 +01:00
Cedric Verstraeten
27e7d98c68 align with PTS2 2025-01-02 16:40:24 +01:00
Cedric Verstraeten
79f56771e3 align with pts2 2025-01-02 16:34:22 +01:00
Cedric Verstraeten
a7839147d6 Update main.go 2024-10-23 22:28:42 +02:00
Cedric Verstraeten
834d82d532 upgrade to webrtc v4, keep writing to track 2024-10-23 22:26:34 +02:00
Cedric Verstraeten
989f2f5943 commit new dependencies 2024-10-23 20:38:18 +02:00
Cedric Verstraeten
3af1df5b19 set realtime processing to false 2024-10-23 16:08:29 +02:00
Cedric Verstraeten
acf06e6e63 fix database client #2 2024-10-21 20:44:31 +02:00
Cedric Verstraeten
3f43e15cc2 fix database client 2024-10-21 20:41:31 +02:00
Cedric Verstraeten
c14683ec0d update database client 2024-10-18 15:48:04 +02:00
Cédric Verstraeten
213aaa5c15 update agent to 3.2.3 2024-09-14 19:34:19 +02:00
Cedric Verstraeten
9fb00c32d5 hotfix: revert webrtc version (stream broken) 2024-08-27 23:47:58 +02:00
Cedric Verstraeten
57ec08066c upgrade dependencies 2024-08-27 12:43:30 +02:00
Cedric Verstraeten
e0c6375261 IO fix: workaround for ONVIF event system 2024-08-25 20:27:46 +02:00
Cedric Verstraeten
79205abe29 keep the release notes 2024-08-21 10:42:47 +02:00
Cedric Verstraeten
24326558d0 only run release build on creation 2024-08-21 10:30:45 +02:00
Cedric Verstraeten
3f981c0f2f Update docker.yml 2024-08-21 10:26:21 +02:00
Cedric Verstraeten
b6eb7b8317 do not create a tag as github will do it 2024-08-21 10:20:20 +02:00
Cedric Verstraeten
4267ae6305 Update docker.yml 2024-08-21 10:18:33 +02:00
Cedric Verstraeten
0cb40bd93a Update docker.yml 2024-08-21 10:18:14 +02:00
Cedric Verstraeten
d2a8890a43 refactor github actions 2024-08-21 10:13:18 +02:00
Cedric Verstraeten
e5a5a5326b Update docker.yml 2024-08-21 10:10:35 +02:00
Cedric Verstraeten
61febd55c8 Update docker.yml 2024-08-21 10:09:15 +02:00
Cedric Verstraeten
3eac752654 Update docker.yml 2024-08-21 10:08:12 +02:00
Cedric Verstraeten
df4f1863fc use different release approach 2024-08-21 10:06:03 +02:00
Cedric Verstraeten
acee2784d3 improvement for IO's: detection for avigilon and axis cameras 2024-08-21 10:03:01 +02:00
Cedric Verstraeten
8ecb2f94a9 reference deployment guide on top of readme 2024-08-17 07:46:06 +02:00
Cedric Verstraeten
8657baf641 add architecture and reference deployments repo 2024-08-17 07:41:01 +02:00
Cedric Verstraeten
13d1948c9f Revert "test: add pkttimestamp and timestamp to samples to improve WebRTC streaming"
This reverts commit b067758915.
2024-08-14 14:05:22 +02:00
Cédric Verstraeten
8e8d51b719 Update README.md - add slack link 2024-08-12 22:53:45 +02:00
Cedric Verstraeten
ca2413363e [release] v3.1.9 2024-08-04 10:19:43 +02:00
Cedric Verstraeten
b067758915 test: add pkttimestamp and timestamp to samples to improve WebRTC streaming 2024-08-04 10:15:01 +02:00
Cédric Verstraeten
b2b8485b28 add networks 2024-07-24 12:35:09 +02:00
Kilian
c69d635431 Update docker-compose.yaml
Update example docker compose file
2024-07-05 15:50:30 +02:00
Cedric Verstraeten
a305ca36ce move vslaunch to top level and add react launcher 2024-07-05 13:35:33 +02:00
Cedric Verstraeten
a6a97b09f0 update devcontainer 2024-07-05 12:58:53 +02:00
Cédric Verstraeten
4d17a15633 Merge pull request #143 from KilianBoute/patch-1
Update README.md
2024-07-04 12:33:06 +02:00
Cédric Verstraeten
5fdb4b712e Merge pull request #142 from ghosty2004/master
Add Romanian language
2024-07-04 12:21:41 +02:00
Kilian
3d39251ac6 Update README.md 2024-07-04 11:42:05 +02:00
Cedric Verstraeten
9e59cd1596 add support for opus + update dependencies go mod 2024-06-30 19:44:13 +02:00
ghosty2004
0ada943699 Add Romanian language 2024-06-26 21:04:02 +03:00
Cedric Verstraeten
ecadf7a4db add realtime processing endpoint 2024-06-11 22:47:01 +02:00
Cedric Verstraeten
413ed12fe2 deprecate older version 2024-05-05 22:33:38 +02:00
Cedric Verstraeten
6195fa5b9c upgrade go version to 1.22.2 2024-05-05 22:31:35 +02:00
Cedric Verstraeten
d31524ae52 upgrade go1.22.2 + webrtc/sdp libraries 2024-05-05 22:24:09 +02:00
Cédric Verstraeten
472a40a5f6 add extra space to run command 2024-04-06 11:50:11 +02:00
Cedric Verstraeten
fb9de04002 update dependencies + retry for ONVIF authentication 2024-03-17 11:07:15 +01:00
Cédric Verstraeten
3f29d1c46f fix wrong docker command 2024-01-30 20:47:13 +01:00
Cedric Verstraeten
b67a72ba9a [release] v3.1.8 2024-01-30 13:26:44 +01:00
Cedric Verstraeten
8fc9bc264d feature: add camera friendly name to UI 2024-01-30 11:21:58 +01:00
Cedric Verstraeten
b2589f498d hot-fix: embed friendly name in recording when set 2024-01-30 10:56:31 +01:00
Cedric Verstraeten
b1ff5134f2 feature: add double encryption
we now encrypt to Kerberos Hub by default; secondary encryption can be added by bringing your own encryption keys.

all encryption can be turned on/off if required
2024-01-17 20:53:42 +01:00
Cedric Verstraeten
3551d02d50 feature: add ability to force TURN server 2024-01-17 09:44:24 +01:00
Cedric Verstraeten
4c413012a4 [release] v3.1.7 2024-01-16 13:02:41 +01:00
Cedric Verstraeten
74ea2f6cdd hot-fix: make sure webrtc candidates are assigned to the correct session 2024-01-16 12:55:31 +01:00
Cedric Verstraeten
2a7d9b62d4 warning: printing the work sub url 2024-01-16 10:49:40 +01:00
Cedric Verstraeten
21d81b94dd [release] v3.1.6 2024-01-16 09:47:07 +01:00
Cedric Verstraeten
091662ff26 hot-fix: support avigilon backchannel 2024-01-16 09:39:34 +01:00
Cedric Verstraeten
803e8f55ef correct webrtc audio buffer duration 2024-01-14 21:37:58 +01:00
Cedric Verstraeten
14d38ecf08 [release] v3.1.5 2024-01-12 15:28:48 +01:00
Cedric Verstraeten
34d945055b Update main.go 2024-01-12 11:49:44 +01:00
Cedric Verstraeten
8c44da8233 hide passwords in ui + skip empty decode frames 2024-01-12 11:47:08 +01:00
Cédric Verstraeten
a8b79947ef Update README.md 2024-01-12 09:53:38 +01:00
Cedric Verstraeten
7c653f809d upgrade dependencies + move file (decap) 2024-01-11 23:05:46 +01:00
Cedric Verstraeten
49f1603f40 align more blocking methods 2024-01-11 22:43:16 +01:00
Cedric Verstraeten
b4369ea932 improve non-blocking approach for agents that tend to restart for some strange reason 2024-01-11 22:35:56 +01:00
Cedric Verstraeten
83ba7baa4b [release] v3.1.4
- hot-fix: preserve width and height of both main and sub stream
2024-01-10 17:06:49 +01:00
Cedric Verstraeten
9339ae30fd [release] v3.1.3 2024-01-10 16:30:20 +01:00
Cedric Verstraeten
c18f2bd445 remove file logger 2024-01-10 16:29:37 +01:00
Cedric Verstraeten
319876bbb0 hot-fix: onvif pull message might be empty 2024-01-10 16:28:40 +01:00
Cedric Verstraeten
442ba97c61 [release] v3.1.2
hot-fix: for missing SPS and PPS from opening codec.
2024-01-09 13:13:42 +01:00
Cedric Verstraeten
00e0b0b547 hot fix: capture SPS and PPS in a later decode, it might not be provided at the initialization, and keep it up to date. 2024-01-09 12:07:32 +01:00
Cedric Verstraeten
145f478249 go mod - upgrade dependencies 2024-01-08 13:08:33 +01:00
Cedric Verstraeten
aac2150a3a [release] v3.1.1 2024-01-07 22:14:44 +01:00
Cedric Verstraeten
9b713637b9 change version number of ui 2024-01-07 21:44:32 +01:00
Cedric Verstraeten
699660d472 only make release when putting [release] 2024-01-07 21:41:32 +01:00
Cedric Verstraeten
751aa17534 feature: make hub encryption configurable + only send heartbeat to vault when credentials are set 2024-01-07 21:30:57 +01:00
Cedric Verstraeten
2681bd2fe3 hot fix: keep track of main and sub stream separately (one of them might block) 2024-01-07 20:20:51 +01:00
Cedric Verstraeten
93adb3dabc different order in action 2024-01-07 08:29:53 +01:00
Cedric Verstraeten
0e15e58a88 try once more different format 2024-01-07 08:26:34 +01:00
Cedric Verstraeten
ef2ea999df only run release to docker when containing [release] 2024-01-07 08:22:24 +01:00
Cedric Verstraeten
ca367611d7 Update docker-nightly.yml 2024-01-07 08:15:24 +01:00
Cedric Verstraeten
eb8f073856 Merge branch 'master' into develop 2024-01-03 22:03:00 +01:00
Cedric Verstraeten
3ae43eba16 hot fix: close client on verifying connection (will keep client open) 2024-01-03 22:02:44 +01:00
Cedric Verstraeten
9719a08eaa Merge branch 'master' into develop 2024-01-03 21:54:30 +01:00
Cedric Verstraeten
1e165cbeb8 hotfix: try to create pullpoint subscription if first time failed 2024-01-03 18:44:53 +01:00
Cedric Verstraeten
8be8cafd00 force release mode in GIN 2024-01-03 18:26:10 +01:00
Cedric Verstraeten
e74d2aadb5 Merge branch 'develop' 2024-01-03 18:16:23 +01:00
Cedric Verstraeten
9c97422f43 properly handle cameras without PTZ function 2024-01-03 18:12:02 +01:00
Cedric Verstraeten
deb0a3ff1f hotfix: position or zoom can be nil 2024-01-03 13:37:38 +01:00
Cedric Verstraeten
95ed1f0e97 move error to debug 2024-01-03 12:36:08 +01:00
Cedric Verstraeten
6a111dadd6 typo in readme (wrong formatting link) 2024-01-03 12:24:35 +01:00
Cedric Verstraeten
95b3623c04 change startup command (new flag method) 2024-01-03 12:19:18 +01:00
Cedric Verstraeten
326d62a640 snap was moved to dedicated repository to better control release: https://github.com/kerberos-io/snap-agent
the repository https://github.com/kerberos-io/snap-agent is linked to the snap build system and will generate new releases
2024-01-03 12:17:47 +01:00
Cedric Verstraeten
9d990650f3 hotfix: onvif endpoint changed 2024-01-03 10:19:04 +01:00
Cedric Verstraeten
4bc891b640 hotfix: move from warning to debug 2024-01-03 10:12:18 +01:00
Cedric Verstraeten
1f133afb89 Merge branch 'develop' 2024-01-03 09:57:51 +01:00
Cedric Verstraeten
8da34a6a1a hotfix: restart agent when no rtsp url was defined 2024-01-03 09:56:56 +01:00
Cédric Verstraeten
57c49a8325 Update snapcraft.yaml 2024-01-02 22:16:41 +01:00
Cedric Verstraeten
f739d52505 Update docker-nightly.yml 2024-01-01 23:46:12 +01:00
Cedric Verstraeten
793022eb0f no longer support go '1.17', '1.18', '1.19', 2024-01-01 23:41:45 +01:00
Cedric Verstraeten
6b1fd739f4 add as safe directory 2024-01-01 23:38:50 +01:00
Cedric Verstraeten
4efa7048dc add runner user - setup as a workaround 2024-01-01 23:33:08 +01:00
Cedric Verstraeten
4931700d06 try checkout v4, you never know.. 2024-01-01 23:29:50 +01:00
Cedric Verstraeten
4bd49dbee1 run go build as specific user 2024-01-01 23:25:32 +01:00
Cedric Verstraeten
c278a66f0e make go versions as string, removes the 0 (weird issue though) 2024-01-01 23:18:55 +01:00
Cedric Verstraeten
d64e6b631c extending versions + base image 2024-01-01 23:16:50 +01:00
Cedric Verstraeten
fa91e84977 Merge branch 'port-to-gortsplib' into develop 2024-01-01 23:11:24 +01:00
Cedric Verstraeten
8c231d3b63 Merge branch 'master' into develop 2024-01-01 23:10:36 +01:00
Cedric Verstraeten
775c1b7051 show correct error message for failing onvif 2024-01-01 19:36:14 +01:00
Cedric Verstraeten
fb23815210 add support for H265 in UI 2024-01-01 19:31:58 +01:00
Cedric Verstraeten
5261c1cbfc debug condition 2023-12-31 15:46:25 +01:00
Cedric Verstraeten
f2aa3d9176 onvif is enabled, currently expects ptz, which is not the case 2023-12-30 22:07:45 +01:00
Cedric Verstraeten
113b02d665 Update Cloud.go 2023-12-30 09:18:46 +01:00
Cedric Verstraeten
957d2fd095 Update Cloud.go 2023-12-29 14:59:34 +01:00
Cedric Verstraeten
78e7fb595a make sure to set onvifEventsList = []byte("[]") 2023-12-29 11:37:32 +01:00
Cedric Verstraeten
b5415284e2 rename + add conceptual hidden function (not yet added) 2023-12-29 08:10:01 +01:00
Cedric Verstraeten
e94a9a1000 update base image 2023-12-28 16:33:39 +01:00
Cedric Verstraeten
60bb9a521c Update README.md 2023-12-28 11:32:46 +01:00
Cedric Verstraeten
3ac34a366f Update README.md 2023-12-28 11:29:33 +01:00
Cedric Verstraeten
77449a29e7 add h264 and h265 discussion 2023-12-28 11:24:36 +01:00
Cedric Verstraeten
242ff48ab6 add more descriptive error for onvif invalid credentials + send capabilities as part of onvif/login or verify 2023-12-28 10:55:11 +01:00
Cedric Verstraeten
b71dbddc1a add support for snapshots (raw + base64) #130
also tweaked the logging a bit more
2023-12-28 10:24:15 +01:00
Cedric Verstraeten
6407f3da3d recover from failed pullpoint subscription 2023-12-28 08:22:37 +01:00
Cedric Verstraeten
776571c7b3 improve logging 2023-12-27 14:30:12 +01:00
Cedric Verstraeten
2df35a1999 add remote trigger relay output (mqtt endpoint) + rename a few methods 2023-12-27 10:39:12 +01:00
Cedric Verstraeten
b1ab6bf522 improve logging + updated readme 2023-12-27 10:25:03 +01:00
Cedric Verstraeten
e7fd0bd8a3 add logging output variable (json or text) + improve logging 2023-12-27 10:06:55 +01:00
Cedric Verstraeten
4f5597c441 remove unnecessary prints 2023-12-25 23:10:04 +01:00
Cedric Verstraeten
400457af9f upgrade onvif to 14 2023-12-25 21:37:35 +01:00
Cedric Verstraeten
c48e3a5683 Update go.mod 2023-12-25 21:01:52 +01:00
Cedric Verstraeten
67064879e4 input/output methods 2023-12-25 20:55:51 +01:00
Cedric Verstraeten
698b9c6b54 cleanup comments + add outputs 2023-12-15 15:07:25 +01:00
Cedric Verstraeten
0e8a89c4c3 add onvif inputs function 2023-12-12 23:34:04 +01:00
Cedric Verstraeten
b0bcf73b52 add condition uri implementation, wrapped condition class so it's easier to extend 2023-12-12 17:30:41 +01:00
Cedric Verstraeten
15a51e7987 align logging 2023-12-12 09:52:35 +01:00
Cedric Verstraeten
b5f5567bcf cleanup names of files (still need more cleanup)+ rework discover method + separated conditions in separate package 2023-12-12 09:15:54 +01:00
Cedric Verstraeten
9151b38e7f document more swagger endpoints + cleanup source 2023-12-11 21:02:01 +01:00
Cedric Verstraeten
898b3a52c2 update logging + add new swagger endpoints 2023-12-11 20:32:03 +01:00
Cedric Verstraeten
be6eb6165c get keyframe and decode on requesting config (required for factory) 2023-12-10 23:13:42 +01:00
Cedric Verstraeten
e95f545bf4 upgrade deps + fix nil error 2023-12-09 23:02:18 +01:00
Cedric Verstraeten
fd01fc640e get rid of snapshots + it was blocking the stream and corrupting recordings 2023-12-07 21:33:32 +01:00
Cedric Verstraeten
8cfcfe4643 upgrade onvif 2023-12-07 19:33:18 +01:00
Cedric Verstraeten
60d7b4b356 if we have no backchannel we'll skip the setup 2023-12-06 19:03:36 +01:00
Cedric Verstraeten
9b796c049d mem leak for http close (still one) + not closing some channels properly 2023-12-06 18:53:55 +01:00
Cedric Verstraeten
c8c9f6dff1 implement better logging, making logging levels configurable (WIP) 2023-12-05 23:05:59 +01:00
Cedric Verstraeten
8293d29ee8 make recording write directly to file + fix memory leaks with http on ONVIF API 2023-12-05 22:07:29 +01:00
Cedric Verstraeten
34a0d8f5c4 force TCP + ignore motion detection if no region is set 2023-12-05 08:30:00 +01:00
Cedric Verstraeten
0a195a0dfb Update Dockerfile 2023-12-04 14:47:53 +01:00
Cedric Verstraeten
c82ead31f2 decode using H265 2023-12-04 14:02:41 +01:00
Cedric Verstraeten
3ab4b5b54b OOPS: missing encryption at some points 2023-12-03 20:12:23 +01:00
Cedric Verstraeten
5765f7c4f6 additional checks for closed decoder + properly close recording when closed 2023-12-03 20:10:05 +01:00
Cedric Verstraeten
d1dd30577b get rid of VPS, it fails to write in h265 (also upgrade dependencies) 2023-12-03 19:18:01 +01:00
Cedric Verstraeten
1145008c62 reference implementation for transcoding from MULAW to AAC 2023-12-03 09:53:20 +01:00
Cedric Verstraeten
3f1e01e665 don't panic on failed backchannel 2023-12-03 08:14:56 +01:00
Cedric Verstraeten
ced9355b78 Run Backchannel on a separate Gortsplib instance 2023-12-02 22:28:26 +01:00
Cedric Verstraeten
6e7ade036e add logging + fix private key pass through + fixed crash on websocket livestreaming 2023-12-02 21:30:07 +01:00
Cedric Verstraeten
976fbb65aa Update Kerberos.go 2023-12-02 15:41:36 +01:00
Cedric Verstraeten
ba7f870d4b wait a bit to close the motion channel, also close audio channel 2023-12-02 15:18:49 +01:00
Cedric Verstraeten
cb3dce5ffd closing 2023-12-02 13:07:52 +01:00
Cedric Verstraeten
b317a6a9db fix closing of rtspclient + integrate h265 support
now we can record in H265 and stream in H264 using webrtc or websocket
2023-12-02 12:34:28 +01:00
Cedric Verstraeten
e42f430bb8 add MPEG4 (AAC support), put ready for H265 2023-12-02 00:43:31 +01:00
Cedric Verstraeten
bd984ea1c7 works now, but needed to change size of payload 2023-12-01 23:17:32 +01:00
Cedric Verstraeten
6798569b7f first try for the backchannel using gortsplib
getting error short buffer
2023-12-01 22:57:33 +01:00
Cedric Verstraeten
df3183ec1c add backchannel support 2023-12-01 22:18:06 +01:00
Cedric Verstraeten
25c35ba91b fix hull 2023-12-01 21:27:58 +01:00
Cedric Verstraeten
68b9c5f679 fix videostream for subclient 2023-12-01 20:24:35 +01:00
Cedric Verstraeten
9757bc9b18 Calculate width and height + add FPS 2023-12-01 19:47:31 +01:00
Cedric Verstraeten
1e4affbf5c don't write trailer, do +1 prerecording reader 2023-12-01 15:05:39 +01:00
Cedric Verstraeten
22f4a7e08a fix closing of stream 2023-12-01 11:05:58 +01:00
Cedric Verstraeten
044e167dd2 add lock + motion detection 2023-12-01 08:34:09 +01:00
Cedric Verstraeten
bffd377461 add substream 2023-11-30 21:33:14 +01:00
Cedric Verstraeten
677c9e334b add decoder, fix livestream 2023-11-30 21:01:57 +01:00
Cedric Verstraeten
df38784a8d fixes 2023-11-30 17:34:03 +01:00
Cedric Verstraeten
dae2c1b5c4 fix keyframing 2023-11-30 17:17:10 +01:00
Cedric Verstraeten
fd6449b377 remove dtsextractor as it blocks the stream 2023-11-30 14:50:09 +01:00
Cedric Verstraeten
cd09ed3321 fix 2023-11-30 14:33:12 +01:00
Cedric Verstraeten
e7dc9aa64d swap to joy4 2023-11-30 14:10:07 +01:00
Cedric Verstraeten
fec2587b6d Update Gortsplib.go 2023-11-30 13:49:46 +01:00
Cedric Verstraeten
7c285d36a1 isolate rtsp clients to be able to pass them through 2023-11-30 13:45:34 +01:00
Cedric Verstraeten
ed46cbe35a cleanup enable more features 2023-11-30 00:47:30 +01:00
Cedric Verstraeten
0a8f097c76 cleanup and fix for recording (wrong DTS value) + fix for recording using "old" joy library 2023-11-29 19:33:03 +01:00
Cedric Verstraeten
bce5d443d5 try new muxer 2023-11-29 17:18:51 +01:00
Cedric Verstraeten
19bf456bda adding fragmented mp4 (not working) trying to fix black screen on quicktime player mp4 2023-11-29 16:28:09 +01:00
Cedric Verstraeten
1359858e42 updates and cleanup 2023-11-29 15:01:36 +01:00
Cedric Verstraeten
55b1abe243 Add mp4 muxer, still some work to do 2023-11-29 10:21:58 +01:00
Cedric Verstraeten
c6428d8c5a Fix for WebRTC using new library had to encode nalu 2023-11-27 17:05:55 +01:00
Cedric Verstraeten
e241a03fc4 comment out unused code! 2023-11-26 17:30:05 +01:00
Cedric Verstraeten
ac2b99a3dd inherit from golibrtsp rtp.packet + fix the decoding for livestream + motion 2023-11-26 16:58:55 +01:00
Cedric Verstraeten
341a6a7fae refactoring the rtspclient to be able to swap out easily 2023-11-26 00:07:53 +01:00
Cedric Verstraeten
e74facfb7f fix: blocking state candidates 2023-11-23 22:21:56 +01:00
Cedric Verstraeten
54bc1989f9 fix: update locking webrtc 2023-11-23 21:17:39 +01:00
Cedric Verstraeten
94b71a0868 fix: enabling backchannel on the mainstream 2023-11-20 09:57:55 +01:00
Cedric Verstraeten
c071057eec hotfix: do fallback without backchannel if camera didn't support it; some cameras such as Dahua will fail on the header. 2023-11-20 09:35:41 +01:00
Cedric Verstraeten
e8a355d992 upgrade joy4: add setreaddeadline for RTSP connection 2023-11-19 21:40:08 +01:00
Cedric Verstraeten
ca84664071 hotfix: add locks to make sure candidates are not sent to a closed candidate channel 2023-11-18 20:38:29 +01:00
Cedric Verstraeten
dd7fcb31b1 Add ONVIF backchannel functionality with G711 encoding 2023-11-17 16:28:03 +01:00
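The backchannel commit above adds G711 audio. As a hedged sketch of what G711 (µ-law) encoding involves — the standard bias/segment algorithm mapping 16-bit linear PCM to 8-bit bytes, not necessarily the agent's own encoder:

```go
package main

import "fmt"

const (
	muLawBias = 0x84  // standard G.711 bias (132)
	muLawClip = 32635 // largest magnitude before clipping
)

// encodeMuLaw converts one 16-bit linear PCM sample to an 8-bit G.711 mu-law byte.
func encodeMuLaw(sample int16) byte {
	sign := byte(0)
	s := int32(sample)
	if s < 0 {
		s = -s
		sign = 0x80
	}
	if s > muLawClip {
		s = muLawClip
	}
	s += muLawBias
	// Find the segment (exponent) of the biased magnitude.
	exponent := byte(7)
	for mask := int32(0x4000); s&mask == 0 && exponent > 0; mask >>= 1 {
		exponent--
	}
	mantissa := byte((s >> (uint(exponent) + 3)) & 0x0F)
	// G.711 transmits the complement of sign|exponent|mantissa.
	return ^(sign | exponent<<4 | mantissa)
}

func main() {
	for _, s := range []int16{0, 1000, -1000, 32767} {
		fmt.Printf("%#02x ", encodeMuLaw(s))
	}
	fmt.Println()
}
```

Each RTP payload for the backchannel then carries these bytes directly (PCMU, payload type 0).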
Cédric Verstraeten
324fffde6b Merge pull request #125 from Izzotop/feat/add-russian-language-support
Add Russian language
2023-11-14 21:22:33 +01:00
Izzotop
cd8347d20f Add Russian language 2023-11-09 16:03:40 +03:00
Cedric Verstraeten
efcbf52b06 Merge branch 'master' of https://github.com/kerberos-io/agent 2023-11-06 17:07:50 +01:00
Cedric Verstraeten
c33469a7b3 add --fix-missing to fix random broken builds (armv6 image) 2023-11-06 17:07:35 +01:00
Cédric Verstraeten
3717535f0b Merge pull request #121 from Chaitanya110703/patch-1
doc(README): remove typo
2023-11-06 16:54:28 +01:00
Cedric Verstraeten
8eb2de5e28 When Kerberos Vault is configured without Kerberos Hub, cameras do not show up in Kerberos Vault #123 2023-11-06 14:37:54 +01:00
Cedric Verstraeten
96f6bcb1dd test file in wrong directory 2023-11-03 13:18:00 +01:00
Cedric Verstraeten
860077a3eb turn off linting for: jsx-a11y/control-has-associated-label 2023-11-02 08:28:26 +01:00
Cedric Verstraeten
8be9343314 upgrade joy: issues with audio codec (wrong FFMPEG version) 2023-11-02 08:15:32 +01:00
Cedric Verstraeten
dac04fbb57 upgrade joy for https://github.com/kerberos-io/agent/issues/105 2023-11-01 22:03:48 +01:00
Cedric Verstraeten
b9acf4c150 hot fix: factory needs to override encryption settings 2023-10-24 22:50:55 +02:00
Chaitanya110703
6608018f86 doc(README): remove typo 2023-10-24 21:25:45 +05:30
Cedric Verstraeten
552f5dbea6 hotfix: check if encryption is set for old agents 2023-10-24 17:52:09 +02:00
Cedric Verstraeten
2844a5a419 webrtc: disable relay allow other 2023-10-24 16:44:42 +02:00
Cedric Verstraeten
c4b9610f58 hotfix: mqtt webrtc - wrong session key 2023-10-24 16:22:15 +02:00
Cedric Verstraeten
6a44498730 hot fix: re-add locks 2023-10-24 13:39:17 +02:00
Cedric Verstraeten
a2cebaf90b hot fix: wait for token in webrtc 2023-10-24 13:14:14 +02:00
Cedric Verstraeten
3f58f26dfd decrypt recordings through the UI automatically using the existing AES key, you can still use the decrypt action or openssl afterwards 2023-10-23 14:38:29 +02:00
Cedric Verstraeten
a8d5f56f1e hotfix - build error encryption key value 2023-10-23 11:07:54 +02:00
Cédric Verstraeten
1eb62d80c7 add encryption + end-to-end encryption to feature list 2023-10-23 10:59:13 +02:00
Cedric Verstraeten
e474a62dbc Add hindi #119 + allow recordings encryption + decryption tooling. 2023-10-23 10:56:36 +02:00
Cédric Verstraeten
f29b952001 Merge pull request #119 from fadkeabhi/feat#47-add-hindi-language-support
Added translations for Hindi language
2023-10-22 22:15:07 +02:00
Cedric Verstraeten
38247ac9f6 Add italian to language selector #115 2023-10-22 20:00:10 +02:00
Cédric Verstraeten
580f17028a Merge pull request #115 from LeoSpyke/master
i18n: adds Italian locale
2023-10-22 19:56:55 +02:00
Cedric Verstraeten
48d933a561 backwards compatible when no encryption key was added in previous config 2023-10-20 14:35:09 +02:00
Cedric Verstraeten
0c70ab6158 Refactor MQTT endpoints + Introduce End-to-End encryption using RSA and AES keys + finetune PTZ 2023-10-20 13:31:02 +02:00
ABHISHEK FADAKE
839185dac8 Added translations for Hindi language 2023-10-03 19:24:47 +05:30
LeoSpyke
ba6cdef9d5 i18n(it): translate persistence and bugfix 2023-09-15 08:17:12 +00:00
LeoSpyke
bedb3c0d7f Merge branch 'kerberos-io:master' into master 2023-09-14 12:47:46 +02:00
Leonardo Papini
2539255940 i18n: Italian translations 2023-09-14 12:47:28 +02:00
Cedric Verstraeten
24136f8b15 we didn't reset the main configuration, causing some config vars still to be set 2023-09-14 10:47:18 +02:00
Cedric Verstraeten
910bb3c079 merging timetable was giving issues 2023-09-14 10:13:50 +02:00
Cedric Verstraeten
47f4c19617 Update Config.go 2023-09-13 08:14:25 +02:00
Cedric Verstraeten
280a81809a Update Config.go 2023-09-12 22:38:26 +02:00
Cedric Verstraeten
59358acb30 add logging + empty friendly name 2023-09-12 15:17:56 +02:00
Cedric Verstraeten
ebd655ac73 Allow remote configuration through MQTT + restructure config method 2023-09-12 10:50:36 +02:00
Cedric Verstraeten
6325e37aae empty presets caused hub connection failing 2023-09-07 08:16:46 +02:00
Cedric Verstraeten
ecabc47847 integrate on-device configured presets 2023-08-30 14:12:07 +02:00
Cedric Verstraeten
31cc3d8939 Rely on continuous move will fix the PTZFunctions later 2023-08-29 14:53:48 +02:00
Cedric Verstraeten
d2dd3dfa62 add outputconfiguration + change endpoint 2023-06-21 15:55:51 +02:00
119 changed files with 18583 additions and 4413 deletions

View File

@@ -5,7 +5,7 @@ version: 2
jobs:
machinery:
docker:
- image: kerberos/base:91ab4d4
- image: kerberos/base:0a50dc9
working_directory: /go/src/github.com/{{ORG_NAME}}/{{REPO_NAME}}
steps:
- checkout

View File

@@ -1,2 +1,26 @@
FROM kerberos/devcontainer:b2bc659
LABEL AUTHOR=Kerberos.io
FROM mcr.microsoft.com/devcontainers/go:1.24-bookworm
# Install node environment
RUN apt-get update && \
apt-get install -y --no-install-recommends \
nodejs \
npm \
&& rm -rf /var/lib/apt/lists/*
# Install ffmpeg
RUN apt-get update && \
apt-get install -y --no-install-recommends \
ffmpeg \
libavcodec-extra \
libavutil-dev \
libavformat-dev \
libavfilter-dev \
libavdevice-dev \
libswscale-dev \
libswresample-dev \
&& rm -rf /var/lib/apt/lists/*
USER vscode
# Install go swagger
RUN go install github.com/swaggo/swag/cmd/swag@latest

View File

@@ -1,33 +1,24 @@
// For format details, see https://aka.ms/devcontainer.json. For config options, see the README at:
// https://github.com/microsoft/vscode-dev-containers/tree/v0.245.2/containers/docker-existing-dockerfile
// For format details, see https://aka.ms/devcontainer.json. For config options, see the
// README at: https://github.com/devcontainers/templates/tree/main/src/python
{
"name": "A Dockerfile containing FFmpeg, OpenCV, Go and Yarn",
// Sets the run context to one level up instead of the .devcontainer folder.
"context": "..",
// Update the 'dockerFile' property if you aren't using the standard 'Dockerfile' filename.
"dockerFile": "./Dockerfile",
// Use 'forwardPorts' to make a list of ports inside the container available locally.
"forwardPorts": [
3000,
80
"name": "go:1.24-bookworm",
"runArgs": [
"--name=agent",
"--network=host"
],
// Uncomment the next line to run commands after the container is created - for example installing curl.
"postCreateCommand": "cd ui && yarn install && yarn build && cd ../machinery && go mod download",
"features": {
"ghcr.io/devcontainers-contrib/features/ansible:1": {}
},
"dockerFile": "Dockerfile",
"customizations": {
"vscode": {
"extensions": [
"ms-kubernetes-tools.vscode-kubernetes-tools",
"GitHub.copilot"
"GitHub.copilot",
"ms-azuretools.vscode-docker",
"mongodb.mongodb-vscode"
]
}
},
// Uncomment when using a ptrace-based debugger like C++, Go, and Rust
// "runArgs": [ "--cap-add=SYS_PTRACE", "--security-opt", "seccomp=unconfined" ],
// Uncomment to use the Docker CLI from inside the container. See https://aka.ms/vscode-remote/samples/docker-from-docker.
// "mounts": [ "source=/var/run/docker.sock,target=/var/run/docker.sock,type=bind" ],
// Uncomment to connect as a non-root user if you've added one. See https://aka.ms/vscode-remote/containers/non-root.
// "remoteUser": "vscode"
"forwardPorts": [
3000,
8080
],
"postCreateCommand": "cd ui && yarn install && yarn build && cd ../machinery && go mod download"
}

View File

@@ -1,58 +0,0 @@
name: Docker development build
on:
push:
branches: [ develop ]
jobs:
build-amd64:
runs-on: ubuntu-latest
strategy:
matrix:
architecture: [amd64]
steps:
- name: Login to DockerHub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Checkout
uses: actions/checkout@v3
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Available platforms
run: echo ${{ steps.buildx.outputs.platforms }}
- name: Run Buildx
run: docker buildx build --platform linux/${{matrix.architecture}} -t kerberos/agent-dev:arch-$(echo ${{matrix.architecture}} | tr / -)-$(echo $GITHUB_SHA | cut -c1-7) --push .
- name: Create new and append to manifest
run: docker buildx imagetools create -t kerberos/agent-dev:$(echo $GITHUB_SHA | cut -c1-7) kerberos/agent-dev:arch-$(echo ${{matrix.architecture}} | tr / -)-$(echo $GITHUB_SHA | cut -c1-7)
- name: Create new and append to latest manifest
run: docker buildx imagetools create -t kerberos/agent-dev:latest kerberos/agent-dev:arch-$(echo ${{matrix.architecture}} | tr / -)-$(echo $GITHUB_SHA | cut -c1-7)
build-other:
runs-on: ubuntu-latest
strategy:
matrix:
#architecture: [arm64, arm/v7, arm/v6]
architecture: [arm64, arm/v7]
steps:
- name: Login to DockerHub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Checkout
uses: actions/checkout@v3
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Available platforms
run: echo ${{ steps.buildx.outputs.platforms }}
- name: Run Buildx
run: docker buildx build --platform linux/${{matrix.architecture}} -t kerberos/agent-dev:arch-$(echo ${{matrix.architecture}} | tr / -)-$(echo $GITHUB_SHA | cut -c1-7) --push .
- name: Create new and append to manifest
run: docker buildx imagetools create --append -t kerberos/agent-dev:$(echo $GITHUB_SHA | cut -c1-7) kerberos/agent-dev:arch-$(echo ${{matrix.architecture}} | tr / -)-$(echo $GITHUB_SHA | cut -c1-7)
- name: Create new and append to manifest latest
run: docker buildx imagetools create --append -t kerberos/agent-dev:latest kerberos/agent-dev:arch-$(echo ${{matrix.architecture}} | tr / -)-$(echo $GITHUB_SHA | cut -c1-7)

View File

@@ -1,54 +0,0 @@
name: Docker nightly build
on:
# Triggers the workflow every day at 9PM (CET).
schedule:
- cron: "0 22 * * *"
jobs:
build-amd64:
runs-on: ubuntu-latest
strategy:
matrix:
architecture: [amd64]
steps:
- name: Login to DockerHub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Checkout
uses: actions/checkout@v3
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Available platforms
run: echo ${{ steps.buildx.outputs.platforms }}
- name: Run Buildx
run: docker buildx build --platform linux/${{matrix.architecture}} -t kerberos/agent-nightly:arch-$(echo ${{matrix.architecture}} | tr / -)-$(echo $GITHUB_SHA | cut -c1-7) --push .
- name: Create new and append to manifest
run: docker buildx imagetools create -t kerberos/agent-nightly:$(echo $GITHUB_SHA | cut -c1-7) kerberos/agent-nightly:arch-$(echo ${{matrix.architecture}} | tr / -)-$(echo $GITHUB_SHA | cut -c1-7)
build-other:
runs-on: ubuntu-latest
strategy:
matrix:
architecture: [arm64, arm/v7, arm/v6]
steps:
- name: Login to DockerHub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Checkout
uses: actions/checkout@v3
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Available platforms
run: echo ${{ steps.buildx.outputs.platforms }}
- name: Run Buildx
run: docker buildx build --platform linux/${{matrix.architecture}} -t kerberos/agent-nightly:arch-$(echo ${{matrix.architecture}} | tr / -)-$(echo $GITHUB_SHA | cut -c1-7) --push .
- name: Create new and append to manifest
run: docker buildx imagetools create --append -t kerberos/agent-nightly:$(echo $GITHUB_SHA | cut -c1-7) kerberos/agent-nightly:arch-$(echo ${{matrix.architecture}} | tr / -)-$(echo $GITHUB_SHA | cut -c1-7)

View File

@@ -1,114 +0,0 @@
name: Docker master build
on:
push:
branches: [ master ]
env:
REPO: kerberos/agent
jobs:
build-amd64:
runs-on: ubuntu-latest
permissions:
contents: write
strategy:
matrix:
architecture: [amd64]
steps:
- name: Login to DockerHub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Checkout
uses: actions/checkout@v3
- uses: benjlevesque/short-sha@v2.1
id: short-sha
with:
length: 7
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Available platforms
run: echo ${{ steps.buildx.outputs.platforms }}
- name: Run Buildx
run: docker buildx build --platform linux/$(echo ${{matrix.architecture}} | tr - /) -t $REPO-arch:arch-${{matrix.architecture}}-${{steps.short-sha.outputs.sha}} --push .
- name: Create new and append to manifest
run: docker buildx imagetools create -t $REPO:${{ steps.short-sha.outputs.sha }} $REPO-arch:arch-${{matrix.architecture}}-${{steps.short-sha.outputs.sha}}
- name: Create new and append to manifest latest
run: docker buildx imagetools create -t $REPO:latest $REPO-arch:arch-${{matrix.architecture}}-${{steps.short-sha.outputs.sha}}
- name: Run Buildx with output
run: docker buildx build --platform linux/$(echo ${{matrix.architecture}} | tr - /) -t $REPO-arch:arch-$(echo ${{matrix.architecture}} | tr / -)-${{steps.short-sha.outputs.sha}} --output type=tar,dest=output-${{matrix.architecture}}.tar .
- name: Strip binary
run: mkdir -p output/ && tar -xf output-${{matrix.architecture}}.tar -C output && rm output-${{matrix.architecture}}.tar && cd output/ && tar -cf ../agent-${{matrix.architecture}}.tar -C home/agent . && rm -rf output
# We'll make a GitHub release and push the build (tar) as an artifact
- uses: rickstaa/action-create-tag@v1
with:
tag: ${{ steps.short-sha.outputs.sha }}
message: "Release ${{ steps.short-sha.outputs.sha }}"
- name: Create a release
uses: ncipollo/release-action@v1
with:
latest: true
name: ${{ steps.short-sha.outputs.sha }}
tag: ${{ steps.short-sha.outputs.sha }}
artifacts: "agent-${{matrix.architecture}}.tar"
# Taken from GoReleaser's own release workflow.
# The available Snapcraft Action has some bugs described in the issue below.
# The mkdirs are a hack for https://github.com/goreleaser/goreleaser/issues/1715.
#- name: Setup Snapcraft
# run: |
# sudo apt-get update
# sudo apt-get -yq --no-install-suggests --no-install-recommends install snapcraft
# mkdir -p $HOME/.cache/snapcraft/download
# mkdir -p $HOME/.cache/snapcraft/stage-packages
#- name: Use Snapcraft
# run: tar -xf agent-${{matrix.architecture}}.tar && snapcraft
build-other:
runs-on: ubuntu-latest
permissions:
contents: write
needs: build-amd64
strategy:
matrix:
architecture: [arm64, arm-v7, arm-v6]
#architecture: [arm64, arm-v7]
steps:
- name: Login to DockerHub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Checkout
uses: actions/checkout@v3
- uses: benjlevesque/short-sha@v2.1
id: short-sha
with:
length: 7
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Available platforms
run: echo ${{ steps.buildx.outputs.platforms }}
- name: Run Buildx
run: docker buildx build --platform linux/$(echo ${{matrix.architecture}} | tr - /) -t $REPO-arch:arch-${{matrix.architecture}}-${{steps.short-sha.outputs.sha}} --push .
- name: Create new and append to manifest
run: docker buildx imagetools create --append -t $REPO:${{ steps.short-sha.outputs.sha }} $REPO-arch:arch-${{matrix.architecture}}-${{steps.short-sha.outputs.sha}}
- name: Create new and append to manifest latest
run: docker buildx imagetools create --append -t $REPO:latest $REPO-arch:arch-${{matrix.architecture}}-${{steps.short-sha.outputs.sha}}
- name: Run Buildx with output
run: docker buildx build --platform linux/$(echo ${{matrix.architecture}} | tr - /) -t $REPO-arch:arch-$(echo ${{matrix.architecture}} | tr / -)-${{steps.short-sha.outputs.sha}} --output type=tar,dest=output-${{matrix.architecture}}.tar .
- name: Strip binary
run: mkdir -p output/ && tar -xf output-${{matrix.architecture}}.tar -C output && rm output-${{matrix.architecture}}.tar && cd output/ && tar -cf ../agent-${{matrix.architecture}}.tar -C home/agent . && rm -rf output
- name: Create a release
uses: ncipollo/release-action@v1
with:
latest: true
allowUpdates: true
name: ${{ steps.short-sha.outputs.sha }}
tag: ${{ steps.short-sha.outputs.sha }}
artifacts: "agent-${{matrix.architecture}}.tar"

View File

@@ -2,35 +2,37 @@ name: Go
on:
push:
branches: [ develop, master ]
branches: [develop, master]
pull_request:
branches: [ develop, master ]
branches: [develop, master]
jobs:
build:
name: Build
runs-on: ubuntu-latest
container:
image: kerberos/base:70d69dc
image: kerberos/base:eb6b088
strategy:
matrix:
go-version: [1.17, 1.18, 1.19]
#No longer supported Go versions.
#go-version: ['1.17', '1.18', '1.19', '1.20', '1.21']
go-version: ["1.24"]
steps:
- name: Set up Go ${{ matrix.go-version }}
uses: actions/setup-go@v2
with:
go-version: ${{ matrix.go-version }}
- name: Check out code into the Go module directory
uses: actions/checkout@v3
- name: Get dependencies
run: cd machinery && go mod download
- name: Build
run: cd machinery && go build -v ./...
- name: Vet
run: cd machinery && go vet -v ./...
- name: Test
run: cd machinery && go test -v ./...
- name: Set up Go ${{ matrix.go-version }}
uses: actions/setup-go@v2
with:
go-version: ${{ matrix.go-version }}
- name: Check out code into the Go module directory
uses: actions/checkout@v4
- name: Set up git ownership
run: git config --system --add safe.directory /__w/agent/agent
- name: Get dependencies
run: cd machinery && go mod download
- name: Build
run: cd machinery && go build -v ./...
- name: Vet
run: cd machinery && go vet -v ./...
- name: Test
run: cd machinery && go test -v ./...

View File

@@ -0,0 +1,51 @@
name: Create User Story Issue
on:
workflow_dispatch:
inputs:
issue_title:
description: 'Title for the issue'
required: true
issue_description:
description: 'Brief description of the feature'
required: true
complexity:
description: 'Complexity of the feature'
required: true
type: choice
options:
- 'Low'
- 'Medium'
- 'High'
default: 'Medium'
duration:
description: 'Estimated duration'
required: true
type: choice
options:
- '1 day'
- '3 days'
- '1 week'
- '2 weeks'
- '1 month'
default: '1 week'
jobs:
create-issue:
runs-on: ubuntu-latest
permissions:
issues: write
steps:
- name: Create Issue with User Story
uses: cedricve/llm-create-issue-user-story@main
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
azure_openai_api_key: ${{ secrets.AZURE_OPENAI_API_KEY }}
azure_openai_endpoint: ${{ secrets.AZURE_OPENAI_ENDPOINT }}
azure_openai_version: ${{ secrets.AZURE_OPENAI_VERSION }}
openai_model: ${{ secrets.OPENAI_MODEL }}
issue_title: ${{ github.event.inputs.issue_title }}
issue_description: ${{ github.event.inputs.issue_description }}
complexity: ${{ github.event.inputs.complexity }}
duration: ${{ github.event.inputs.duration }}
labels: 'user-story,feature'
assignees: ${{ github.actor }}

60
.github/workflows/nightly-build.yml vendored Normal file
View File

@@ -0,0 +1,60 @@
name: Nightly build
on:
# Triggers the workflow every day at 9PM (CET).
schedule:
- cron: "0 22 * * *"
# Allows manual triggering from the Actions tab.
workflow_dispatch:
jobs:
nightly-build-amd64:
runs-on: ubuntu-latest
strategy:
matrix:
architecture: [amd64]
steps:
- name: Login to DockerHub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Checkout
uses: actions/checkout@v4
with:
ref: master
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Available platforms
run: echo ${{ steps.buildx.outputs.platforms }}
- name: Run Buildx
run: docker buildx build --platform linux/${{matrix.architecture}} -t kerberos/agent-nightly:arch-$(echo ${{matrix.architecture}} | tr / -)-$(echo $GITHUB_SHA | cut -c1-7) --push .
- name: Create new and append to manifest
run: docker buildx imagetools create -t kerberos/agent-nightly:$(echo $GITHUB_SHA | cut -c1-7) kerberos/agent-nightly:arch-$(echo ${{matrix.architecture}} | tr / -)-$(echo $GITHUB_SHA | cut -c1-7)
nightly-build-other:
runs-on: ubuntu-latest
strategy:
matrix:
architecture: [arm64, arm/v7, arm/v6]
steps:
- name: Login to DockerHub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Checkout
uses: actions/checkout@v4
with:
ref: master
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Available platforms
run: echo ${{ steps.buildx.outputs.platforms }}
- name: Run Buildx
run: docker buildx build --platform linux/${{matrix.architecture}} -t kerberos/agent-nightly:arch-$(echo ${{matrix.architecture}} | tr / -)-$(echo $GITHUB_SHA | cut -c1-7) --push .
- name: Create new and append to manifest
run: docker buildx imagetools create --append -t kerberos/agent-nightly:$(echo $GITHUB_SHA | cut -c1-7) kerberos/agent-nightly:arch-$(echo ${{matrix.architecture}} | tr / -)-$(echo $GITHUB_SHA | cut -c1-7)

75
.github/workflows/pr-build.yml vendored Normal file
View File

@@ -0,0 +1,75 @@
name: Build pull request
on:
pull_request:
types: [opened, synchronize]
env:
REPO: kerberos/agent
jobs:
build-amd64:
runs-on: ubuntu-24.04
permissions:
contents: write
strategy:
matrix:
architecture: [amd64]
steps:
- name: Login to DockerHub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Checkout
uses: actions/checkout@v3
- uses: benjlevesque/short-sha@v2.1
id: short-sha
with:
length: 7
- name: Run Build
run: |
docker build -t ${{matrix.architecture}} .
CID=$(docker create ${{matrix.architecture}})
docker cp ${CID}:/home/agent ./output-${{matrix.architecture}}
docker rm ${CID}
- name: Strip binary
run: tar -cf agent-${{matrix.architecture}}.tar -C output-${{matrix.architecture}} . && rm -rf output-${{matrix.architecture}}
- name: Upload artifact
uses: actions/upload-artifact@v4
with:
name: agent-${{matrix.architecture}}.tar
path: agent-${{matrix.architecture}}.tar
build-arm64:
runs-on: ubuntu-24.04-arm
permissions:
contents: write
strategy:
matrix:
architecture: [arm64]
steps:
- name: Login to DockerHub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Checkout
uses: actions/checkout@v3
- uses: benjlevesque/short-sha@v2.1
id: short-sha
with:
length: 7
- name: Run Build
run: |
docker build -t ${{matrix.architecture}} -f Dockerfile.arm64 .
CID=$(docker create ${{matrix.architecture}})
docker cp ${CID}:/home/agent ./output-${{matrix.architecture}}
docker rm ${CID}
- name: Strip binary
run: tar -cf agent-${{matrix.architecture}}.tar -C output-${{matrix.architecture}} . && rm -rf output-${{matrix.architecture}}
- name: Upload artifact
uses: actions/upload-artifact@v4
with:
name: agent-${{matrix.architecture}}.tar
path: agent-${{matrix.architecture}}.tar

26
.github/workflows/pr-description.yaml vendored Normal file
View File

@@ -0,0 +1,26 @@
name: Autofill PR description
on: pull_request
env:
ORGANIZATION: uugai
PROJECT: ${{ github.event.repository.name }}
PR_NUMBER: ${{ github.event.number }}
jobs:
openai-pr-description:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v4
- name: Autofill PR description if empty using OpenAI
uses: cedricve/azureopenai-pr-description@master
with:
github_token: ${{ secrets.TOKEN }}
openai_api_key: ${{ secrets.OPENAI_API_KEY }}
azure_openai_api_key: ${{ secrets.AZURE_OPENAI_API_KEY }}
azure_openai_endpoint: ${{ secrets.AZURE_OPENAI_ENDPOINT }}
azure_openai_version: ${{ secrets.AZURE_OPENAI_VERSION }}
openai_model: ${{ secrets.OPENAI_MODEL }}
pull_request_url: https://pr${{ env.PR_NUMBER }}.api.kerberos.lol
overwrite_description: true

130
.github/workflows/release-create.yml vendored Normal file
View File

@@ -0,0 +1,130 @@
name: Create a new release
on:
release:
types: [created]
workflow_dispatch:
inputs:
tag:
description: "Tag for the Docker image"
required: true
default: "test"
env:
REPO: kerberos/agent
jobs:
build-amd64:
runs-on: ubuntu-24.04
permissions:
contents: write
strategy:
matrix:
architecture: [amd64]
steps:
- name: Login to DockerHub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Checkout
uses: actions/checkout@v3
- uses: benjlevesque/short-sha@v2.1
id: short-sha
with:
length: 7
- name: Run Build
run: |
docker build --provenance=false --build-arg VERSION=${{github.event.inputs.tag || github.ref_name}} -t ${{matrix.architecture}} .
CID=$(docker create ${{matrix.architecture}})
docker cp ${CID}:/home/agent ./output-${{matrix.architecture}}
docker rm ${CID}
- name: Strip binary
run: tar -cf agent-${{matrix.architecture}}.tar -C output-${{matrix.architecture}} . && rm -rf output-${{matrix.architecture}}
- name: Build and push Docker image
run: |
docker tag ${{matrix.architecture}} $REPO-arch:arch-${{matrix.architecture}}-${{github.event.inputs.tag || github.ref_name}}
docker push $REPO-arch:arch-${{matrix.architecture}}-${{github.event.inputs.tag || github.ref_name}}
- name: Upload artifact
uses: actions/upload-artifact@v4
with:
name: agent-${{matrix.architecture}}.tar
path: agent-${{matrix.architecture}}.tar
build-arm64:
runs-on: ubuntu-24.04-arm
permissions:
contents: write
strategy:
matrix:
architecture: [arm64]
steps:
- name: Login to DockerHub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Checkout
uses: actions/checkout@v3
- uses: benjlevesque/short-sha@v2.1
id: short-sha
with:
length: 7
- name: Run Build
run: |
docker build --provenance=false --build-arg VERSION=${{github.event.inputs.tag || github.ref_name}} -t ${{matrix.architecture}} -f Dockerfile.arm64 .
CID=$(docker create ${{matrix.architecture}})
docker cp ${CID}:/home/agent ./output-${{matrix.architecture}}
docker rm ${CID}
- name: Strip binary
run: tar -cf agent-${{matrix.architecture}}.tar -C output-${{matrix.architecture}} . && rm -rf output-${{matrix.architecture}}
- name: Build and push Docker image
run: |
docker tag ${{matrix.architecture}} $REPO-arch:arch-${{matrix.architecture}}-${{github.event.inputs.tag || github.ref_name}}
docker push $REPO-arch:arch-${{matrix.architecture}}-${{github.event.inputs.tag || github.ref_name}}
- name: Upload artifact
uses: actions/upload-artifact@v4
with:
name: agent-${{matrix.architecture}}.tar
path: agent-${{matrix.architecture}}.tar
create-manifest:
runs-on: ubuntu-24.04
needs: [build-amd64, build-arm64]
steps:
- name: Login to DockerHub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Create and push multi-arch manifest
run: |
docker manifest create $REPO:${{ github.event.inputs.tag || github.ref_name }} \
$REPO-arch:arch-amd64-${{github.event.inputs.tag || github.ref_name}} \
$REPO-arch:arch-arm64-${{github.event.inputs.tag || github.ref_name}}
docker manifest push $REPO:${{ github.event.inputs.tag || github.ref_name }}
- name: Create and push latest manifest
run: |
docker manifest create $REPO:latest \
$REPO-arch:arch-amd64-${{github.event.inputs.tag || github.ref_name}} \
$REPO-arch:arch-arm64-${{github.event.inputs.tag || github.ref_name}}
docker manifest push $REPO:latest
if: github.event.inputs.tag == 'test'
create-release:
runs-on: ubuntu-24.04
needs: [build-amd64, build-arm64]
permissions:
contents: write
steps:
- name: Download all artifacts
uses: actions/download-artifact@v4
- name: Create a release
uses: ncipollo/release-action@v1
with:
latest: true
allowUpdates: true
name: ${{ github.event.inputs.tag || github.ref_name }}
tag: ${{ github.event.inputs.tag || github.ref_name }}
generateReleaseNotes: false
omitBodyDuringUpdate: true
artifacts: "agent-*.tar/agent-*.tar"

8
.gitignore vendored
View File

@@ -1,6 +1,8 @@
ui/node_modules
ui/build
ui/public/assets/env.js
.DS_Store
__debug*
.idea
machinery/www
yarn.lock
@@ -10,5 +12,7 @@ machinery/data/recordings
machinery/data/snapshots
machinery/test*
machinery/init-dev.sh
machinery/.env
deployments/docker/private-docker-compose.yaml
machinery/.env.local
machinery/vendor
deployments/docker/private-docker-compose.yaml
video.mp4


@@ -1,19 +0,0 @@
language: go
go:
- 1.12.x
- 1.13.x
- 1.14.x
- 1.15.x
- tip
before_install:
- cd machinery
- go mod download
script:
- go vet
- go test -race -coverprofile=coverage.txt -covermode=atomic
after_success:
- bash <(curl -s https://codecov.io/bash)

.vscode/launch.json

@@ -0,0 +1,33 @@
{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"name": "Launch Golang",
"type": "go",
"request": "launch",
"mode": "auto",
"program": "${workspaceFolder}/machinery/main.go",
"args": [
"-action",
"run",
"-port",
"8080"
],
"envFile": "${workspaceFolder}/machinery/.env.local",
"buildFlags": "--tags dynamic",
},
{
"name": "Launch React",
"type": "node",
"request": "launch",
"cwd": "${workspaceFolder}/ui",
"runtimeExecutable": "yarn",
"runtimeArgs": [
"start"
],
}
]
}


@@ -1,6 +1,8 @@
FROM kerberos/base:dc12d68 AS build-machinery
LABEL AUTHOR=Kerberos.io
ARG BASE_IMAGE_VERSION=amd64-ddbe40e
ARG VERSION=0.0.0
FROM kerberos/base:${BASE_IMAGE_VERSION} AS build-machinery
LABEL AUTHOR=uug.ai
ENV GOROOT=/usr/local/go
ENV GOPATH=/go
@@ -10,7 +12,7 @@ ENV GOSUMDB=off
##########################################
# Installing some additional dependencies.
RUN apt-get upgrade -y && apt-get update && apt-get install -y --no-install-recommends \
RUN apt-get upgrade -y && apt-get update && apt-get install -y --fix-missing --no-install-recommends \
git build-essential cmake pkg-config unzip libgtk2.0-dev \
curl ca-certificates libcurl4-openssl-dev libssl-dev libjpeg62-turbo-dev && \
rm -rf /var/lib/apt/lists/*
@@ -20,6 +22,7 @@ RUN apt-get upgrade -y && apt-get update && apt-get install -y --no-install-reco
RUN mkdir -p /go/src/github.com/kerberos-io/agent
COPY machinery /go/src/github.com/kerberos-io/agent/machinery
RUN rm -rf /go/src/github.com/kerberos-io/agent/machinery/.env
##################################################################
# Get the latest commit hash, so we know which version we're running
@@ -32,7 +35,8 @@ RUN cat /go/src/github.com/kerberos-io/agent/machinery/version
RUN cd /go/src/github.com/kerberos-io/agent/machinery && \
go mod download && \
go build -tags timetzdata,netgo,osusergo --ldflags '-s -w -extldflags "-static -latomic"' main.go && \
VERSION=$(cd /go/src/github.com/kerberos-io/agent && git describe --tags --always 2>/dev/null || echo "${VERSION}") && \
go build -tags timetzdata,netgo,osusergo --ldflags "-s -w -X github.com/kerberos-io/agent/machinery/src/utils.VERSION=${VERSION} -extldflags '-static -latomic'" main.go && \
mkdir -p /agent && \
mv main /agent && \
mv version /agent && \
@@ -42,8 +46,7 @@ RUN cd /go/src/github.com/kerberos-io/agent/machinery && \
mkdir -p /agent/data/log && \
mkdir -p /agent/data/recordings && \
mkdir -p /agent/data/capture-test && \
mkdir -p /agent/data/config && \
rm -rf /go/src/gitlab.com/
mkdir -p /agent/data/config
####################################
# Let's create a /dist folder containing just the files necessary for runtime.
@@ -57,18 +60,6 @@ RUN cp -r /agent ./
RUN /dist/agent/main version
###############################################
# Build Bento4 -> we want fragmented mp4 files
ENV BENTO4_VERSION 1.6.0-639
RUN cd /tmp && git clone https://github.com/axiomatic-systems/Bento4 && cd Bento4 && \
git checkout tags/v${BENTO4_VERSION} && \
cd Build && \
cmake -DCMAKE_BUILD_TYPE=Release .. && \
make && \
mv /tmp/Bento4/Build/mp4fragment /dist/agent/ && \
rm -rf /tmp/Bento4
FROM node:18.14.0-alpine3.16 AS build-ui
RUN apk update && apk upgrade --available && sync
@@ -110,7 +101,6 @@ RUN apk update && apk add ca-certificates curl libstdc++ libc6-compat --no-cache
# Try running agent
RUN mv /agent/* /home/agent/
RUN cp /home/agent/mp4fragment /usr/local/bin/
RUN /home/agent/main version
#######################
@@ -147,4 +137,4 @@ HEALTHCHECK CMD curl --fail http://localhost:80 || exit 1
# Leeeeettttt'ssss goooooo!!!
# Run the shizzle from the right working directory.
WORKDIR /home/agent
CMD ["./main", "-action", "run", "-port", "80"]
CMD ["./main", "-action", "run", "-port", "80"]

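The build stage derives its version string with `git describe --tags --always`, falling back to the `VERSION` build argument (`ARG VERSION=0.0.0`) when the command fails. A self-contained sketch of the same `|| echo` fallback and the resulting linker flags, with a deliberately failing command standing in for `git describe`:

```shell
# If the version command fails, the `|| echo` branch supplies the default.
# `false` stands in for a failing `git describe` so the sketch is self-contained.
VERSION=$(false 2>/dev/null || echo "0.0.0")

# The -X flag overwrites the package-level string variable
# github.com/kerberos-io/agent/machinery/src/utils.VERSION at link time.
LDFLAGS="-s -w -X github.com/kerberos-io/agent/machinery/src/utils.VERSION=${VERSION}"
echo "$LDFLAGS"
```

For `-X` to take effect, `VERSION` must exist as a package-level `string` variable in the `utils` package; otherwise the flag has no effect.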
Dockerfile.arm64

@@ -0,0 +1,140 @@
ARG BASE_IMAGE_VERSION=arm64-ddbe40e
ARG VERSION=0.0.0
FROM kerberos/base:${BASE_IMAGE_VERSION} AS build-machinery
LABEL AUTHOR=uug.ai
ENV GOROOT=/usr/local/go
ENV GOPATH=/go
ENV PATH=$GOPATH/bin:$GOROOT/bin:/usr/local/lib:$PATH
ENV GOSUMDB=off
##########################################
# Installing some additional dependencies.
RUN apt-get upgrade -y && apt-get update && apt-get install -y --fix-missing --no-install-recommends \
git build-essential cmake pkg-config unzip libgtk2.0-dev \
curl ca-certificates libcurl4-openssl-dev libssl-dev libjpeg62-turbo-dev && \
rm -rf /var/lib/apt/lists/*
##############################################################################
# Copy all the relevant source code in the Docker image, so we can build this.
RUN mkdir -p /go/src/github.com/kerberos-io/agent
COPY machinery /go/src/github.com/kerberos-io/agent/machinery
RUN rm -rf /go/src/github.com/kerberos-io/agent/machinery/.env
##################################################################
# Get the latest commit hash, so we know which version we're running
COPY .git /go/src/github.com/kerberos-io/agent/.git
RUN cd /go/src/github.com/kerberos-io/agent/.git && git log --format="%H" -n 1 | head -c7 > /go/src/github.com/kerberos-io/agent/machinery/version
RUN cat /go/src/github.com/kerberos-io/agent/machinery/version
##################
# Build Machinery
RUN cd /go/src/github.com/kerberos-io/agent/machinery && \
go mod download && \
VERSION=$(cd /go/src/github.com/kerberos-io/agent && git describe --tags --always 2>/dev/null || echo "${VERSION}") && \
go build -tags timetzdata,netgo,osusergo --ldflags "-s -w -X github.com/kerberos-io/agent/machinery/src/utils.VERSION=${VERSION} -extldflags '-static -latomic'" main.go && \
mkdir -p /agent && \
mv main /agent && \
mv version /agent && \
mv data /agent && \
mkdir -p /agent/data/cloud && \
mkdir -p /agent/data/snapshots && \
mkdir -p /agent/data/log && \
mkdir -p /agent/data/recordings && \
mkdir -p /agent/data/capture-test && \
mkdir -p /agent/data/config
####################################
# Let's create a /dist folder containing just the files necessary for runtime.
# Later, it will be copied as the / (root) of the output image.
WORKDIR /dist
RUN cp -r /agent ./
####################################################################################
# This will collect dependent libraries so they're later copied to the final image.
RUN /dist/agent/main version
FROM node:18.14.0-alpine3.16 AS build-ui
RUN apk update && apk upgrade --available && sync
########################
# Build Web (React app)
RUN mkdir -p /go/src/github.com/kerberos-io/agent/machinery/www
COPY ui /go/src/github.com/kerberos-io/agent/ui
RUN cd /go/src/github.com/kerberos-io/agent/ui && rm -rf yarn.lock && yarn config set network-timeout 300000 && \
yarn && yarn build
####################################
# Let's create a /dist folder containing just the files necessary for runtime.
# Later, it will be copied as the / (root) of the output image.
WORKDIR /dist
RUN mkdir -p ./agent && cp -r /go/src/github.com/kerberos-io/agent/machinery/www ./agent/
############################################
# Publish main binary to GitHub release
FROM alpine:latest
############################
# Protect by non-root user.
RUN addgroup -S kerberosio && adduser -S agent -G kerberosio && addgroup agent video
#################################
# Copy files from previous images
COPY --chown=0:0 --from=build-machinery /dist /
COPY --chown=0:0 --from=build-ui /dist /
RUN apk update && apk add ca-certificates curl libstdc++ libc6-compat --no-cache && rm -rf /var/cache/apk/*
##################
# Try running agent
RUN mv /agent/* /home/agent/
RUN /home/agent/main version
#######################
# Make template config
RUN cp /home/agent/data/config/config.json /home/agent/data/config.template.json
###########################
# Set permissions correctly
RUN chown -R agent:kerberosio /home/agent/data
RUN chown -R agent:kerberosio /home/agent/www
###########################
# Grant the necessary root capabilities to the process trying to bind to the privileged port
RUN apk add libcap && setcap 'cap_net_bind_service=+ep' /home/agent/main
###################
# Run non-root user
USER agent
######################################
# By default the app runs on port 80
EXPOSE 80
######################################
# Check if agent is still running
HEALTHCHECK CMD curl --fail http://localhost:80 || exit 1
###################################################
# Leeeeettttt'ssss goooooo!!!
# Run the shizzle from the right working directory.
WORKDIR /home/agent
CMD ["./main", "-action", "run", "-port", "80"]

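The build stage above writes a version file from the latest commit hash, shortened to seven characters with `git log --format="%H" -n 1 | head -c7`. The truncation step in isolation, with a hypothetical full hash:

```shell
# Shorten a 40-character commit hash to the 7-character form written to
# the version file; printf avoids the trailing newline echo would add.
FULL_HASH="a34836e8f4c0ffee1234567890abcdef12345678"
SHORT_HASH=$(printf '%s' "$FULL_HASH" | head -c7)
echo "$SHORT_HASH"   # a34836e
```

Inside a checkout, `git rev-parse --short=7 HEAD` is the more idiomatic equivalent.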
README.md

@@ -17,20 +17,23 @@
<a href="LICENSE"><img src="https://img.shields.io/badge/License-MIT-yellow.svg" alt="License: MIT"></a>
[![donate](https://brianmacdonald.github.io/Ethonate/svg/eth-donate-blue.svg)](https://brianmacdonald.github.io/Ethonate/address#0xf4a759C9436E2280Ea9cdd23d3144D95538fF4bE)
<a target="_blank" href="https://twitter.com/kerberosio?ref_src=twsrc%5Etfw"><img src="https://img.shields.io/twitter/url.svg?label=Follow%20%40kerberosio&style=social&url=https%3A%2F%2Ftwitter.com%2Fkerberosio" alt="Twitter Widget"></a>
[![Discord Shield](https://discordapp.com/api/guilds/1039619181731135499/widget.png?style=shield)](https://discord.gg/Bj77Vqfp2G)
[![kerberosio](https://snapcraft.io/kerberosio/badge.svg)](https://snapcraft.io/kerberosio)
[![Slack invite](https://img.shields.io/badge/join%20kerberos.io%20on%20slack-grey?style=for-the-badge&logo=slack)](https://joinslack.kerberos.io/)
[**Docker Hub**](https://hub.docker.com/r/kerberos/agent) | [**Documentation**](https://doc.kerberos.io) | [**Website**](https://kerberos.io) | [**View Demo**](https://demo.kerberos.io)
> Before you continue, this repository discusses one of the components of the Kerberos.io stack, the Kerberos Agent, in depth. If you are [looking for an end-to-end deployment guide have a look here](https://github.com/kerberos-io/deployment).
Kerberos Agent is an isolated and scalable video (surveillance) management agent made available as Open Source under the MIT License. This means that all the source code is available for you or your company, and you can use, transform and distribute the source code; as long as you keep a reference to the original license. Kerberos Agent can be used for commercial usage (which was not the case for v2). Read more [about the license here](LICENSE).
![Kerberos Agent go through UI](./assets/img/kerberos-agent-overview.gif)
## :thinking: Prerequisites
- An IP camera which supports a RTSP H264 encoded stream,
- (or) a USB camera, Raspberry Pi camera or other camera, that [you can tranform to a valid RTSP H264 stream](https://github.com/kerberos-io/camera-to-rtsp).
- Any hardware (ARMv6, ARMv7, ARM64, AMD) that can run a binary or container, for example: a Raspberry Pi, NVidia Jetson, Intel NUC, a VM, Bare metal machine or a full blown Kubernetes cluster.
- An IP camera which supports a RTSP H264 or H265 encoded stream,
- (or) a USB camera, Raspberry Pi camera or other camera, that [you can transform to a valid RTSP H264 or H265 stream](https://github.com/kerberos-io/camera-to-rtsp).
- Any hardware (ARMv6, ARMv7, ARM64, AMD64) that can run a binary or container, for example: a Raspberry Pi, NVidia Jetson, Intel NUC, a VM, Bare metal machine or a full blown Kubernetes cluster.
## :video_camera: Is my camera working?
@@ -46,41 +49,46 @@ There are a myriad of cameras out there (USB, IP and other cameras), and it migh
### Introduction
3. [A world of Kerberos Agents](#a-world-of-kerberos-agents)
1. [A world of Kerberos Agents](#a-world-of-kerberos-agents)
### Running and automation
4. [How to run and deploy a Kerberos Agent](#how-to-run-and-deploy-a-kerberos-agent)
5. [Access the Kerberos Agent](#access-the-kerberos-agent)
6. [Configure and persist with volume mounts](#configure-and-persist-with-volume-mounts)
7. [Configure with environment variables](#configure-with-environment-variables)
1. [How to run and deploy a Kerberos Agent](#how-to-run-and-deploy-a-kerberos-agent)
2. [Access the Kerberos Agent](#access-the-kerberos-agent)
3. [Configure and persist with volume mounts](#configure-and-persist-with-volume-mounts)
4. [Configure with environment variables](#configure-with-environment-variables)
### Insights
1. [Encryption](#encryption)
2. [H264 vs H265](#h264-vs-h265)
### Contributing
8. [Contribute with Codespaces](#contribute-with-codespaces)
9. [Develop and build](#develop-and-build)
10. [Building from source](#building-from-source)
11. [Building for Docker](#building-for-docker)
1. [Contribute with Codespaces](#contribute-with-codespaces)
2. [Develop and build](#develop-and-build)
3. [Building from source](#building-from-source)
4. [Building for Docker](#building-for-docker)
### Varia
12. [Support our project](#support-our-project)
13. [What is new?](#what-is-new)
14. [Contributors](#contributors)
1. [Support our project](#support-our-project)
1. [What is new?](#what-is-new)
1. [Contributors](#contributors)
## Quickstart - Docker
The easiest to get your Kerberos Agent up and running is to use our public image on [Docker hub](https://hub.docker.com/r/kerberos/agent). Once you have selected a specific tag, run below `docker` command, which will open the web interface of your Kerberos agent on port `80`, and off you go. For a more configurable and persistent deployment have a look at [Running and automating a Kerberos Agent](#running-and-automating-a-kerberos-agent).
The easiest way to get your Kerberos Agent up and running is to use our public image on [Docker hub](https://hub.docker.com/r/kerberos/agent). Once you have selected a specific tag, run the `docker` command below, which will open the web interface of your Kerberos agent on port `80`, and off you go. For a more configurable and persistent deployment have a look at [Running and automating a Kerberos Agent](#running-and-automating-a-kerberos-agent).
docker run -p 80:80 --name mycamera -d --restart=always kerberos/agent:latest
If you want to connect to an USB or Raspberry Pi camera, [you'll need to run our side car container](https://github.com/kerberos-io/camera-to-rtsp) which proxy the camera to an RTSP stream. In that case you'll want to configure the Kerberos Agent container to run in the host network, so it can connect directly to the RTSP sidecar.
If you want to connect to a USB or Raspberry Pi camera, [you'll need to run our side car container](https://github.com/kerberos-io/camera-to-rtsp) which proxies the camera to an RTSP stream. In that case you'll want to configure the Kerberos Agent container to run in the host network, so it can connect directly to the RTSP sidecar.
docker run --network=host --name mycamera -d --restart=always kerberos/agent:latest
## Quickstart - Balena
Run Kerberos Agent with [Balena Cloud](https://www.balena.io/) super powers. Monitor your Kerberos Agent with seamless remote access, over the air updates, an encrypted public `https` endpoint and many more. Checkout our application `video-surveillance` on [Balena Hub](https://hub.balena.io/apps/2064752/video-surveillance), and create your first or fleet of Kerberos Agent(s).
Run Kerberos Agent with [Balena Cloud](https://www.balena.io/) superpowers. Monitor your Kerberos Agent with seamless remote access, over the air updates, an encrypted public `https` endpoint and much more. Check out our application `video-surveillance` on [Balena Hub](https://hub.balena.io/apps/2064752/video-surveillance), and create your first Kerberos Agent or a whole fleet.
[![deploy with balena](https://balena.io/deploy.svg)](https://dashboard.balena-cloud.com/deploy?repoUrl=https://github.com/kerberos-io/balena-agent)
@@ -96,31 +104,37 @@ Once installed you can find your Kerberos Agent configuration at `/var/snap/kerbe
## A world of Kerberos Agents
The Kerberos Agent is an isolated and scalable video (surveillance) management agent with a strong focus on user experience, scalability, resilience, extension and integration. Next to the Kerberos Agent, Kerberos.io provides many other tools such as [Kerberos Factory](https://github.com/kerberos-io/factory), [Kerberos Vault](https://github.com/kerberos-io/vault) and [Kerberos Hub](https://github.com/kerberos-io/hub) to provide additional capabilities: bring your own cloud, bring your own storage, central overview, live streaming, machine learning etc.
The Kerberos Agent is an isolated and scalable video (surveillance) management agent with a strong focus on user experience, scalability, resilience, extension and integration. Next to the Kerberos Agent, Kerberos.io provides many other tools such as [Kerberos Factory](https://github.com/kerberos-io/factory), [Kerberos Vault](https://github.com/kerberos-io/vault), and [Kerberos Hub](https://github.com/kerberos-io/hub) to provide additional capabilities: bring your own cloud, bring your own storage, central overview, live streaming, machine learning, etc.
As mentioned above Kerberos.io applies the concept of agents. An agent is running next to (or on) your camera, and is processing a single camera feed. It applies motion based or continuous recording and make those recordings available through a user friendly web interface. A Kerberos Agent allows you to connect to other cloud services or integrates with custom applications. Kerberos Agent is used for personal usage and scales to enterprise production level deployments.
[![Deployment Agent](./assets/img/edge-deployment-agent.svg)](https://github.com/kerberos-io/deployment)
As mentioned above, Kerberos.io applies the concept of agents. An agent runs next to (or on) your camera and processes a single camera feed. It applies motion-based or continuous recording and makes those recordings available through a user-friendly web interface. A Kerberos Agent allows you to connect to other cloud services or integrate with custom applications. Kerberos Agent is used for personal applications and scales to enterprise production level deployments. Learn more about the [deployment strategies here](https://github.com/kerberos-io/deployment).
This repository contains everything you'll need to know about our core product, Kerberos Agent. Below you'll find a brief list of features and functions.
- Low memory and CPU usage.
- Simplified and modern user interface.
- Multi architecture (ARMv7, ARMv8, amd64, etc).
- Multi camera support: IP Cameras (H264), USB cameras and Raspberry Pi Cameras [through a RTSP proxy](https://github.com/kerberos-io/camera-to-rtsp).
- Multi architecture (ARMv6, ARMv7, ARM64, AMD64)
- Multi stream, for example recording in H265, live streaming and motion detection in H264.
- Multi camera support: IP Cameras (H264 and H265), USB cameras and Raspberry Pi Cameras [through a RTSP proxy](https://github.com/kerberos-io/camera-to-rtsp).
- Single camera per instance (e.g. one container per camera).
- Primary and secondary stream setup (record full-res, stream low-res).
- Low resolution streaming through MQTT and full resolution streaming through WebRTC.
- Ability to specifiy conditions: offline mode, motion region, time table, continuous recording, etc.
- Post- and pre-recording on motion detection.
- Ability to create fragmented recordings, and streaming though HLS fMP4.
- Low resolution streaming through MQTT and high resolution streaming through WebRTC (only supports H264/PCM).
- Backchannel audio from Kerberos Hub to IP camera (requires PCM ULAW codec)
- Audio (AAC) and video (H264/H265) recording in MP4 container.
- End-to-end encryption through MQTT using RSA and AES (livestreaming, ONVIF, remote configuration, etc)
- Conditional recording: offline mode, motion region, time table, continuous recording, webhook condition etc.
- Post- and pre-recording for motion detection.
- Encryption at rest using AES-256-CBC.
- Ability to create fragmented recordings, and streaming through HLS fMP4.
- [Deploy where you want](#how-to-run-and-deploy-a-kerberos-agent) with the tools you use: `docker`, `docker compose`, `ansible`, `terraform`, `kubernetes`, etc.
- Cloud storage/persistance: Kerberos Hub, Kerberos Vault and Dropbox. [(WIP: Minio, Storj, Google Drive, FTP etc.)](https://github.com/kerberos-io/agent/issues/95)
- WIP: Integrations (Webhooks, MQTT, Script, etc).
- Outputs: trigger an integration (Webhooks, MQTT, Script, etc.) when a specific event (motion detection or start of recording) occurs.
- REST API access and documentation through Swagger (trigger recording, update configuration, etc).
- MIT License
## How to run and deploy a Kerberos Agent
As described before a Kerberos Agent is a container, which can be deployed through various ways and automation tools such as `docker`, `docker compose`, `kubernetes` and the list goes on. To simplify your life we have come with concrete and working examples of deployments to help you speed up your Kerberos.io journey.
A Kerberos Agent, as previously mentioned, is a container. You can deploy it using various methods and automation tools, including `docker`, `docker compose`, `kubernetes`, and more. To streamline your experience, we provide concrete deployment examples to speed up your Kerberos.io journey.
We have documented the different deployment models [in the `deployments` directory](https://github.com/kerberos-io/agent/tree/master/deployments) of this repository. There you'll learn and find how to deploy using:
@@ -134,7 +148,7 @@ We have documented the different deployment models [in the `deployments` directo
- [Balena](https://github.com/kerberos-io/agent/tree/master/deployments#8-balena)
- [Snap](https://github.com/kerberos-io/agent/tree/master/deployments#9-snap)
By default your Kerberos Agents will store all its configuration and recordings inside the container. To help you automate and have a more consistent data governance, you can attach volumes to configure and persist data of your Kerberos Agents, and/or configure each Kerberos Agent through environment variables.
By default, your Kerberos Agents store all configuration and recordings within the container. To help you automate and have a more consistent data governance, you can attach volumes to configure and persist data of your Kerberos Agents and/or configure each Kerberos Agent through environment variables.
## Access the Kerberos Agent
@@ -149,23 +163,23 @@ The default username and password for the Kerberos Agent is:
## Configure and persist with volume mounts
An example of how to mount a host directory is shown below using `docker`, but is applicable for [all the deployment models and tools described above](#running-and-automating-a-kerberos-agent).
An example of how to mount a host directory is shown below using `docker`, but is applicable for [all of the deployment models and tools described above](#running-and-automating-a-kerberos-agent).
You attach a volume to your container by leveraging the `-v` option. To mount your own configuration file and recordings folder, execute as following:
You attach a volume to your container by leveraging the `-v` option. To mount your own configuration file and recordings folder, run the following commands:
docker run -p 80:80 --name mycamera \
-v $(pwd)/agent/config:/home/agent/data/config \
-v $(pwd)/agent/recordings:/home/agent/data/recordings \
-d --restart=always kerberos/agent:latest
More example [can be found in the deployment section](https://github.com/kerberos-io/agent/tree/master/deployments) for each deployment and automation tool. Please note to verify the permissions of the directory/volume you are attaching. More information in [this issue](https://github.com/kerberos-io/agent/issues/80).
More examples for each deployment and automation tool [can be found in the deployment section](https://github.com/kerberos-io/agent/tree/master/deployments). Be sure to verify the permissions of the directory/volume you are attaching. More information in [this issue](https://github.com/kerberos-io/agent/issues/80).
chmod -R 755 kerberos-agent/
chown 100:101 kerberos-agent/ -R
## Configure with environment variables
Next to attaching the configuration file, it is also possible to override the configuration with environment variables. This makes deployments easier when leveraging `docker compose` or `kubernetes` deployments much easier and scalable. Using this approach we simplify automation through `ansible` and `terraform`.
Next to attaching the configuration file, it is also possible to override the configuration with environment variables. This makes deploying with `docker compose` or `kubernetes` much easier and more scalable. Using this approach, we simplify automation through `ansible` and `terraform`.
docker run -p 80:80 --name mycamera \
-e AGENT_NAME=mycamera \
@@ -176,63 +190,122 @@ Next to attaching the configuration file, it is also possible to override the co
| Name | Description | Default Value |
| --------------------------------------- | ----------------------------------------------------------------------------------------------- | ------------------------------ |
| `AGENT_MODE`                            | You can choose to run this in 'release' for production, or 'demo' for showcasing.                | "release"                      |
| `AGENT_TLS_INSECURE` | Specify if you want to use `InsecureSkipVerify` for the internal HTTP client. | "false" |
| `AGENT_USERNAME` | The username used to authenticate against the Kerberos Agent login page. | "root" |
| `AGENT_PASSWORD` | The password used to authenticate against the Kerberos Agent login page. | "root" |
| `AGENT_KEY`                             | A unique identifier for your Kerberos Agent, this is auto-generated but can be overridden.       | ""                             |
| `AGENT_NAME` | The agent friendly-name. | "agent" |
| `AGENT_TIMEZONE` | Timezone which is used for converting time. | "Africa/Ceuta" |
| `AGENT_REMOVE_AFTER_UPLOAD`             | When enabled, recordings successfully uploaded to a storage provider will be removed from disk.  | "true"                         |
| `AGENT_OFFLINE` | Makes sure no external connection is made. | "false" |
| `AGENT_AUTO_CLEAN` | Cleans up the recordings directory. | "true" |
| `AGENT_AUTO_CLEAN_MAX_SIZE` | If `AUTO_CLEAN` enabled, set the max size of the recordings directory in (MB). | "100" |
| `AGENT_TIME` | Enable the timetable for Kerberos Agent | "false" |
| `AGENT_TIMETABLE` | A (weekly) time table to specify when to make recordings "start1,end1,start2,end2;start1.. | "" |
| `AGENT_REGION_POLYGON` | A single polygon set for motion detection: "x1,y1;x2,y2;x3,y3;... | "" |
| `AGENT_CAPTURE_IPCAMERA_RTSP`           | Full-HD RTSP endpoint to the camera you're targeting.                                            | ""                             |
| `AGENT_CAPTURE_IPCAMERA_SUB_RTSP` | Sub-stream RTSP endpoint used for livestreaming (WebRTC). | "" |
| `AGENT_CAPTURE_IPCAMERA_ONVIF` | Mark as a compliant ONVIF device. | "" |
| `AGENT_CAPTURE_IPCAMERA_ONVIF_XADDR` | ONVIF endpoint/address running on the camera. | "" |
| `AGENT_CAPTURE_IPCAMERA_ONVIF_USERNAME` | ONVIF username to authenticate against. | "" |
| `AGENT_CAPTURE_IPCAMERA_ONVIF_PASSWORD` | ONVIF password to authenticate against. | "" |
| `AGENT_CAPTURE_MOTION` | Toggle for enabling or disabling motion. | "true" |
| `AGENT_CAPTURE_LIVEVIEW` | Toggle for enabling or disabling liveview. | "true" |
| `AGENT_CAPTURE_SNAPSHOTS` | Toggle for enabling or disabling snapshot generation. | "true" |
| `AGENT_CAPTURE_RECORDING` | Toggle for enabling making recordings. | "true" |
| `AGENT_CAPTURE_CONTINUOUS` | Toggle for enabling continuous "true" or motion "false". | "false" |
| `AGENT_CAPTURE_PRERECORDING` | If `CONTINUOUS` set to `false`, specify the recording time (seconds) before after motion event. | "10" |
| `AGENT_CAPTURE_POSTRECORDING` | If `CONTINUOUS` set to `false`, specify the recording time (seconds) after motion event. | "20" |
| `AGENT_CAPTURE_MAXLENGTH` | The maximum length of a single recording (seconds). | "30" |
| `AGENT_CAPTURE_PIXEL_CHANGE`            | If `CONTINUOUS` set to `false`, the number of pixels required to change before motion triggers.  | "150"                          |
| `AGENT_CAPTURE_FRAGMENTED` | Set the format of the recorded MP4 to fragmented (suitable for HLS). | "false" |
| `AGENT_CAPTURE_FRAGMENTED_DURATION` | If `AGENT_CAPTURE_FRAGMENTED` set to `true`, define the duration (seconds) of a fragment. | "8" |
| `AGENT_MQTT_URI` | A MQTT broker endpoint that is used for bi-directional communication (live view, onvif, etc) | "tcp://mqtt.kerberos.io:1883" |
| `AGENT_MQTT_USERNAME` | Username of the MQTT broker. | "" |
| `AGENT_MQTT_PASSWORD` | Password of the MQTT broker. | "" |
| `AGENT_STUN_URI` | When using WebRTC, you'll need to provide a STUN server. | "stun:turn.kerberos.io:8443" |
| `AGENT_TURN_URI` | When using WebRTC, you'll need to provide a TURN server. | "turn:turn.kerberos.io:8443" |
| `AGENT_TURN_USERNAME` | TURN username used for WebRTC. | "username1" |
| `AGENT_TURN_PASSWORD` | TURN password used for WebRTC. | "password1" |
| `AGENT_CLOUD` | Store recordings in Kerberos Hub (s3), Kerberos Vault (kstorage) or Dropbox (dropbox). | "s3" |
| `AGENT_HUB_URI` | The Kerberos Hub API, defaults to our Kerberos Hub SAAS. | "https://api.hub.domain.com" |
| `AGENT_HUB_KEY` | The access key linked to your account in Kerberos Hub. | "" |
| `AGENT_HUB_PRIVATE_KEY` | The secret access key linked to your account in Kerberos Hub. | "" |
| `AGENT_HUB_REGION` | The Kerberos Hub region, to which you want to upload. | "" |
| `AGENT_HUB_SITE` | The site ID of a site you've created in your Kerberos Hub account. | "" |
| `AGENT_KERBEROSVAULT_URI` | The Kerberos Vault API url. | "https://vault.domain.com/api" |
| `AGENT_KERBEROSVAULT_ACCESS_KEY` | The access key of a Kerberos Vault account. | "" |
| `AGENT_KERBEROSVAULT_SECRET_KEY` | The secret key of a Kerberos Vault account. | "" |
| `AGENT_KERBEROSVAULT_PROVIDER` | A Kerberos Vault provider you have created (optional). | "" |
| `AGENT_KERBEROSVAULT_DIRECTORY` | The directory, in the provider, where the recordings will be stored in. | "" |
| `AGENT_DROPBOX_ACCESS_TOKEN` | The Access Token from your Dropbox app, that is used to leverage the Dropbox SDK. | "" |
| `AGENT_DROPBOX_DIRECTORY` | The directory, in the provider, where the recordings will be stored in. | "" |
| `LOG_LEVEL` | Level for logging, could be "info", "warning", "debug", "error" or "fatal". | "info" |
| `LOG_OUTPUT` | Logging output format "json" or "text". | "text" |
| `AGENT_MODE`                            | You can choose to run this in 'release' for production, or 'demo' for showcasing.                | "release"                      |
| `AGENT_TLS_INSECURE` | Specify if you want to use `InsecureSkipVerify` for the internal HTTP client. | "false" |
| `AGENT_USERNAME` | The username used to authenticate against the Kerberos Agent login page. | "root" |
| `AGENT_PASSWORD` | The password used to authenticate against the Kerberos Agent login page. | "root" |
| `AGENT_KEY`                             | A unique identifier for your Kerberos Agent, this is auto-generated but can be overridden.       | ""                             |
| `AGENT_NAME` | The agent friendly-name. | "agent" |
| `AGENT_TIMEZONE` | Timezone which is used for converting time. | "Africa/Ceuta" |
| `AGENT_REMOVE_AFTER_UPLOAD`             | When enabled, recordings successfully uploaded to a storage provider will be removed from disk.  | "true"                         |
| `AGENT_OFFLINE` | Makes sure no external connection is made. | "false" |
| `AGENT_AUTO_CLEAN` | Cleans up the recordings directory. | "true" |
| `AGENT_AUTO_CLEAN_MAX_SIZE` | If `AUTO_CLEAN` enabled, set the max size of the recordings directory (in MB). | "100" |
| `AGENT_TIME` | Enable the timetable for Kerberos Agent | "false" |
| `AGENT_TIMETABLE` | A (weekly) timetable to specify when to make recordings "start1,end1,start2,end2;start1.. | "" |
| `AGENT_REGION_POLYGON` | A single polygon set for motion detection: "x1,y1;x2,y2;x3,y3;... | "" |
| `AGENT_CAPTURE_IPCAMERA_RTSP` | Full-HD RTSP endpoint to the camera you're targeting. | "" |
| `AGENT_CAPTURE_IPCAMERA_SUB_RTSP` | Sub-stream RTSP endpoint used for livestreaming (WebRTC). | "" |
| `AGENT_CAPTURE_IPCAMERA_BASE_WIDTH` | Force a specific width resolution for live view processing. | "" |
| `AGENT_CAPTURE_IPCAMERA_BASE_HEIGHT` | Force a specific height resolution for live view processing. | "" |
| `AGENT_CAPTURE_IPCAMERA_ONVIF` | Mark as a compliant ONVIF device. | "" |
| `AGENT_CAPTURE_IPCAMERA_ONVIF_XADDR` | ONVIF endpoint/address running on the camera. | "" |
| `AGENT_CAPTURE_IPCAMERA_ONVIF_USERNAME` | ONVIF username to authenticate against. | "" |
| `AGENT_CAPTURE_IPCAMERA_ONVIF_PASSWORD` | ONVIF password to authenticate against. | "" |
| `AGENT_CAPTURE_MOTION` | Toggle for enabling or disabling motion detection. | "true" |
| `AGENT_CAPTURE_LIVEVIEW` | Toggle for enabling or disabling liveview. | "true" |
| `AGENT_CAPTURE_SNAPSHOTS` | Toggle for enabling or disabling snapshot generation. | "true" |
| `AGENT_CAPTURE_RECORDING` | Toggle for enabling making recordings. | "true" |
| `AGENT_CAPTURE_CONTINUOUS` | Toggle between continuous recording ("true") and motion-based recording ("false"). | "false" |
| `AGENT_CAPTURE_PRERECORDING` | If `CONTINUOUS` is set to `false`, specify the recording time (seconds) before a motion event. | "10" |
| `AGENT_CAPTURE_POSTRECORDING` | If `CONTINUOUS` is set to `false`, specify the recording time (seconds) after a motion event. | "20" |
| `AGENT_CAPTURE_MAXLENGTH` | The maximum length of a single recording (seconds). | "30" |
| `AGENT_CAPTURE_PIXEL_CHANGE` | If `CONTINUOUS` is set to `false`, the number of pixels required to change before motion is triggered. | "150" |
| `AGENT_CAPTURE_FRAGMENTED` | Set the format of the recorded MP4 to fragmented (suitable for HLS). | "false" |
| `AGENT_CAPTURE_FRAGMENTED_DURATION` | If `AGENT_CAPTURE_FRAGMENTED` is set to `true`, define the duration (seconds) of a fragment. | "8" |
| `AGENT_MQTT_URI` | An MQTT broker endpoint that is used for bi-directional communication (live view, onvif, etc) | "tcp://mqtt.kerberos.io:1883" |
| `AGENT_MQTT_USERNAME` | Username of the MQTT broker. | "" |
| `AGENT_MQTT_PASSWORD` | Password of the MQTT broker. | "" |
| `AGENT_REALTIME_PROCESSING` | If set to `true`, the agent will send keyframes to the configured MQTT topic. | "" |
| `AGENT_REALTIME_PROCESSING_TOPIC` | The topic to which keyframes will be sent in base64 encoded format. | "" |
| `AGENT_STUN_URI` | When using WebRTC, you'll need to provide a STUN server. | "stun:turn.kerberos.io:8443" |
| `AGENT_FORCE_TURN` | Force using a TURN server, by generating relay candidates only. | "false" |
| `AGENT_TURN_URI` | When using WebRTC, you'll need to provide a TURN server. | "turn:turn.kerberos.io:8443" |
| `AGENT_TURN_USERNAME` | TURN username used for WebRTC. | "username1" |
| `AGENT_TURN_PASSWORD` | TURN password used for WebRTC. | "password1" |
| `AGENT_CLOUD` | Store recordings in Kerberos Hub (s3), Kerberos Vault (kstorage), or Dropbox (dropbox). | "s3" |
| `AGENT_HUB_ENCRYPTION` | Turning on/off encryption of traffic from your Kerberos Agent to Kerberos Hub. | "true" |
| `AGENT_HUB_URI` | The Kerberos Hub API, defaults to our Kerberos Hub SAAS. | "https://api.hub.domain.com" |
| `AGENT_HUB_KEY` | The access key linked to your account in Kerberos Hub. | "" |
| `AGENT_HUB_PRIVATE_KEY` | The secret access key linked to your account in Kerberos Hub. | "" |
| `AGENT_HUB_REGION` | The Kerberos Hub region, to which you want to upload. | "" |
| `AGENT_HUB_SITE` | The site ID of a site you've created in your Kerberos Hub account. | "" |
| `AGENT_KERBEROSVAULT_URI` | The Kerberos Vault API url. | "https://vault.domain.com/api" |
| `AGENT_KERBEROSVAULT_ACCESS_KEY` | The access key of a Kerberos Vault account. | "" |
| `AGENT_KERBEROSVAULT_SECRET_KEY` | The secret key of a Kerberos Vault account. | "" |
| `AGENT_KERBEROSVAULT_PROVIDER` | A Kerberos Vault provider you have created (optional). | "" |
| `AGENT_KERBEROSVAULT_DIRECTORY` | The directory, in the Kerberos Vault, where the recordings will be stored. | "" |
| `AGENT_KERBEROSVAULT_SECONDARY_URI` | The secondary Kerberos Vault API url. | "https://vault.domain.com/api" |
| `AGENT_KERBEROSVAULT_SECONDARY_ACCESS_KEY` | The access key of a secondary Kerberos Vault account. | "" |
| `AGENT_KERBEROSVAULT_SECONDARY_SECRET_KEY` | The secret key of a secondary Kerberos Vault account. | "" |
| `AGENT_KERBEROSVAULT_SECONDARY_PROVIDER` | A secondary Kerberos Vault provider you have created (optional). | "" |
| `AGENT_KERBEROSVAULT_SECONDARY_DIRECTORY` | The directory, in the secondary Kerberos Vault, where the recordings will be stored. | "" |
| `AGENT_DROPBOX_ACCESS_TOKEN` | The Access Token from your Dropbox app, that is used to leverage the Dropbox SDK. | "" |
| `AGENT_DROPBOX_DIRECTORY` | The directory, in Dropbox, where the recordings will be stored. | "" |
| `AGENT_ENCRYPTION` | Enable 'true' or disable 'false' end-to-end encryption for MQTT messages. | "false" |
| `AGENT_ENCRYPTION_RECORDINGS` | Enable 'true' or disable 'false' end-to-end encryption for recordings. | "false" |
| `AGENT_ENCRYPTION_FINGERPRINT` | The fingerprint of the keypair (public/private keys), so you know which one to use. | "" |
| `AGENT_ENCRYPTION_PRIVATE_KEY` | The private key (asymmetric/RSA) to decrypt and sign requests sent over MQTT. | "" |
| `AGENT_ENCRYPTION_SYMMETRIC_KEY` | The symmetric key (AES) to encrypt and decrypt requests sent over MQTT. | "" |
| `AGENT_SIGNING` | Enable 'true' or disable 'false' for signing recordings. | "true" |
| `AGENT_SIGNING_PRIVATE_KEY` | The private key (RSA) to sign the recordings fingerprint to validate origin. | "" - uses default one if empty |
## Encryption
You can encrypt your recordings and outgoing MQTT messages with your own AES and RSA keys by enabling the encryption settings. Once enabled, all your recordings will be encrypted using AES-256-CBC and your symmetric key. You can use the default `openssl` toolchain to decrypt the recordings with your AES key, as follows:
openssl aes-256-cbc -d -md md5 -in encrypted.mp4 -out decrypted.mp4 -k your-key-96ab185xxxxxxxcxxxxxxxx6a59c62e8
Or you can decrypt a folder of recordings, using the Kerberos Agent binary, as follows:
go run main.go -action decrypt ./data/recordings your-key-96ab185xxxxxxxcxxxxxxxx6a59c62e8
Or for a single file:
go run main.go -action decrypt ./data/recordings/video.mp4 your-key-96ab185xxxxxxxcxxxxxxxx6a59c62e8
## H264 vs H265
When it comes to video encoders and decoders (codecs), there are two major video codecs on the market: H264 and H265. Depending on your use case, you might prefer one over the other. We provide a (non-exhaustive) overview of the advantages and disadvantages of each codec in the field of video surveillance and video analytics. If you would like to know more, look for additional resources on the internet (or, if you prefer physical media, books still exist nowadays).
- H264 (also known as AVC or MPEG-4 Part 10)
- Is the most common one and most widely supported for IP cameras.
- Supported in the majority of browsers, operating systems, and third-party applications.
- Can be embedded in commercial and third-party applications.
- Different levels of compression (high, medium, low, ...).
- Better quality/compression ratio; shows fewer artifacts at medium compression ratios.
- Supports technologies such as WebRTC.
- H265 (also known as HEVC)
- Is not supported on legacy cameras, though rapidly becoming available on newer IP cameras.
- Might not always be supported due to licensing; for example, not supported in browsers on some Linux distributions.
- Requires licensing when embedding in a commercial product (be careful).
- Higher levels of compression (up to 50% better than H264).
- Shows more artifacts in motion-heavy environments than H264.
- Recording the same video (resolution, duration and FPS) in H264 and H265 will result in approximately half the file size for H265.
- Not supported in technologies such as WebRTC.
Conclusion: depending on the use case you might choose one over the other, and you can use both at the same time. For example, you can use H264 (main stream) for livestreaming and H265 (sub stream) for recording. If you wish to play recordings in a cross-platform and cross-browser environment, you might opt for H264 for better support.
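The two codecs also differ at the bitstream level: an H264 NALU header stores its type in the low five bits, while H265 uses six bits shifted by one — which is how parameter sets such as SPS/PPS (H264) or VPS/SPS/PPS (H265) are recognized. A small illustrative Go helper (not the agent's actual parser):

```go
package main

import "fmt"

// naluTypeH264 extracts the NAL unit type from an H264 NALU header byte
// (low five bits): 7 = SPS, 8 = PPS.
func naluTypeH264(header byte) int { return int(header & 0x1F) }

// naluTypeH265 extracts the NAL unit type from the first H265 NALU header
// byte (six bits, shifted by one): 32 = VPS, 33 = SPS, 34 = PPS.
func naluTypeH265(header byte) int { return int(header>>1) & 0x3F }

func main() {
	fmt.Println(naluTypeH264(0x67)) // 7 (H264 SPS)
	fmt.Println(naluTypeH265(0x42)) // 33 (H265 SPS)
}
```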
## Contribute with Codespaces
One of the major blockers for letting you contribute to an Open Source project is to set up your local development machine. Why? Because you might already have some tools and libraries installed that are used for other projects, and the libraries you would need for Kerberos Agent, for example FFmpeg, might require a different version. Welcome to dependency hell...
By leveraging codespaces, which the Kerberos Agent repo supports, you will be able to set up the required development environment in a few minutes. By opening the `<> Code` tab at the top of the page, you will be able to create a codespace, [using the Kerberos Devcontainer](https://github.com/kerberos-io/devcontainer) base image. This image includes all the relevant dependencies: FFmpeg, OpenCV, Golang, Node, Yarn, etc.
![Kerberos Agent codespace](assets/img/codespace.png)
@@ -259,7 +332,7 @@ On opening of the GitHub Codespace, some dependencies will be installed. Once th
WS_URL: `${websocketprotocol}//${externalHost}/ws`,
};
Go and open two terminals: one for the `ui` project and one for the `machinery` project.
1. Terminal A:
@@ -275,11 +348,11 @@ Once executed, a popup will show up mentioning `portforwarding`. You should see
![Codespace make public](./assets/img/codespace-make-public.png)
As mentioned above, copy the hostname of the `machinery` DNS name, and paste it in the `ui/src/config.json` file. Once done, reload the `ui` page in your browser, and you should be able to access the login page with the default credentials `root` and `root`.
## Develop and build
The Kerberos Agent is divided into two parts: a `machinery` and a `web` part. Both parts live in this repository in their respective folders. For development, or running the application on your local machine, you have to run both the `machinery` and the `web` as described below. When running in production everything is shipped as a single artifact; read more about this at [Building for production](#building-for-production).
### UI
@@ -293,13 +366,13 @@ This will start a webserver and launches the web app on port `3000`.
![login-agent](./assets/img/agent-login.gif)
Once signed in you'll see the dashboard page. After successful configuration of your agent, you should see a live view and possibly events recorded to disk.
![dashboard-agent](./assets/img/agent-dashboard.png)
### Machinery
The `machinery` is a **Golang** project which delivers two functions: it acts as the Kerberos Agent, doing all the heavy lifting of camera processing and other kinds of logic, and it acts as a webserver (REST API) that allows communication from the web (React) or any other custom application. The API is documented using `swagger`.
You can simply run the `machinery` using the following commands.
@@ -307,13 +380,13 @@ You can simply run the `machinery` using following commands.
cd machinery
go run main.go -action run -port 80
This will launch the Kerberos Agent and run a webserver on port `80`. You can change the port to your own preference. We strongly recommend the usage of [GoLand](https://www.jetbrains.com/go/) or [Visual Studio Code](https://code.visualstudio.com/), as they come with all the debugging and linting features built in.
![VSCode desktop](./assets/img/vscode-desktop.png)
## Building from source
Running Kerberos Agent in production only requires a single binary. Nevertheless, we have two parts, the `machinery` and the `web`, which we merge during build time. This is what happens.
### UI
@@ -324,7 +397,7 @@ To build the Kerberos Agent web app, you simply have to run the `build` command
### Machinery
Building the `machinery` is also super easy 🚀: by using `go build` you can create a single binary which ships it all; thank you Golang. After building you will end up with a binary called `main`, which contains everything you need to run Kerberos Agent.
Remember the build step of the `web` part: during build time we move the build directory to the `machinery` directory. Inside the `machinery` web server [we reference the](https://github.com/kerberos-io/agent/blob/master/machinery/src/routers/http/Server.go#L44) `build` directory. This makes it possible to have just a single web server that runs it all.
@@ -333,8 +406,8 @@ Remember the build step of the `web` part, during build time we move the build d
## Building for Docker
Inside the root of this `agent` repository, you will find a `Dockerfile`. This file contains the instructions for building and shipping **Kerberos Agent**. Important to note is that we start from a prebuilt base image, `kerberos/base:xxx`.
This base image already contains a couple of tools, such as Golang, FFmpeg and OpenCV. We do this for faster compilation times.
By running the `docker build` command, you will create the Kerberos Agent Docker image. After building you can simply run the image as a Docker container.
@@ -350,7 +423,7 @@ Read more about this [at the FAQ](#faq) below.
## Contributors
This project exists thanks to all the people who contribute. Bravo!
<a href="https://github.com/kerberos-io/agent/graphs/contributors">
<img src="https://contrib.rocks/image?repo=kerberos-io/agent" />

File diff suppressed because one or more lines are too long


View File

@@ -1,10 +0,0 @@
export version=0.0.1
export name=agent
docker build -t $name .
docker tag $name kerberos/$name:$version
docker push kerberos/$name:$version
docker tag $name kerberos/$name:latest
docker push kerberos/$name:latest

View File

@@ -9,7 +9,7 @@ Kerberos Agents are now also shipped as static binaries. Within the Docker image
You can run the binary as follows on port `8080`:
main run cameraname 8080
main -action=run -port=80
## Systemd
@@ -18,7 +18,7 @@ When running on a Linux OS you might consider to auto-start the Kerberos Agent u
[Unit]
Wants=network.target
[Service]
ExecStart=/home/pi/agent/main run camera 80
ExecStart=/home/pi/agent/main -action=run -port=80
WorkingDirectory=/home/pi/agent/
[Install]
WantedBy=multi-user.target

View File

@@ -36,12 +36,12 @@ You attach a volume to your container by leveraging the `-v` option. To mount yo
docker run -p 80:80 --name mycamera \
-v $(pwd)/agent/config:/home/agent/data/config \
-v $(pwd)/agent/recordings:/home/agent/data/recordings\
-d --restart=alwayskerberos/agent:latest
-v $(pwd)/agent/recordings:/home/agent/data/recordings \
-d --restart=always kerberos/agent:latest
### Override with environment variables
Next to attaching the configuration file, it is also possible to override the configuration with environment variables. This makes deployments when leveraging `docker compose` or `kubernetes` much easier and more scalable. Using this approach we simplify automation through `ansible` and `terraform`. You'll find [the full list of environment variables on the main README.md file](https://github.com/kerberos-io/agent#override-with-environment-variables).
### 2. Running multiple containers with Docker compose

View File

@@ -1,35 +1,38 @@
version: "3.9"
x-common-variables: &common-variables
# Add variables here to add them to all agents
AGENT_HUB_KEY: "xxxxx" # The access key linked to your account in Kerberos Hub.
AGENT_HUB_PRIVATE_KEY: "xxxxx" # The secret access key linked to your account in Kerberos Hub.
# find full list of environment variables here: https://github.com/kerberos-io/agent#override-with-environment-variables
services:
kerberos-agent1:
image: "kerberos/agent:latest"
ports:
- "8081:80"
environment:
- AGENT_NAME=agent1
- AGENT_CAPTURE_IPCAMERA_RTSP=rtsp://x.x.x.x:554/Streaming/Channels/101
- AGENT_HUB_KEY=xxx
- AGENT_HUB_PRIVATE_KEY=xxx
- AGENT_CAPTURE_CONTINUOUS=true
- AGENT_CAPTURE_PRERECORDING=10
- AGENT_CAPTURE_POSTRECORDING=10
- AGENT_CAPTURE_MAXLENGTH=60
- AGENT_CAPTURE_PIXEL_CHANGE=150
# find full list of environment variables here: https://github.com/kerberos-io/agent#override-with-environment-variables
<<: *common-variables
AGENT_NAME: agent1
AGENT_CAPTURE_IPCAMERA_RTSP: rtsp://username:password@x.x.x.x/Streaming/Channels/101 # Hikvision camera RTSP url example
AGENT_KEY: "1"
kerberos-agent2:
image: "kerberos/agent:latest"
ports:
- "8082:80"
environment:
- AGENT_NAME=agent2
- AGENT_CAPTURE_IPCAMERA_RTSP=rtsp://x.x.x.x:554/Streaming/Channels/101
- AGENT_HUB_KEY=yyy
- AGENT_HUB_PRIVATE_KEY=yyy
<<: *common-variables
AGENT_NAME: agent2
AGENT_CAPTURE_IPCAMERA_RTSP: rtsp://username:password@x.x.x.x/channel1 # Linksys camera RTSP url example
AGENT_KEY: "2"
kerberos-agent3:
image: "kerberos/agent:latest"
ports:
- "8083:80"
environment:
- AGENT_NAME=agent3
- AGENT_CAPTURE_IPCAMERA_RTSP=rtsp://x.x.x.x:554/Streaming/Channels/101
- AGENT_HUB_KEY=zzz
- AGENT_HUB_PRIVATE_KEY=zzz
<<: *common-variables
AGENT_NAME: agent3
AGENT_CAPTURE_IPCAMERA_RTSP: rtsp://username:password@x.x.x.x/cam/realmonitor?channel=1&subtype=1 # Dahua camera RTSP url example
AGENT_KEY: "3"
networks:
default:
name: cluster-net
external: true

View File

@@ -16,7 +16,7 @@ spec:
spec:
containers:
- name: agent
image: kerberos/agent:latest
image: kerberos/agent:3.2.3
ports:
- containerPort: 80
protocol: TCP
@@ -50,4 +50,4 @@ spec:
- port: 80
targetPort: 80
selector:
app: agent
app: agent

BIN
machinery/.DS_Store vendored Normal file

Binary file not shown.

31
machinery/.env Normal file
View File

@@ -0,0 +1,31 @@
AGENT_NAME=camera-name
AGENT_KEY=uniq-camera-id
AGENT_TIMEZONE=Europe/Brussels
#AGENT_CAPTURE_CONTINUOUS=true
#AGENT_CAPTURE_IPCAMERA_RTSP=rtsp://fake.kerberos.io/stream
#AGENT_CAPTURE_IPCAMERA_SUB_RTSP=rtsp://fake.kerberos.io/stream
AGENT_CAPTURE_IPCAMERA_ONVIF_XADDR=x.x.x.x
AGENT_CAPTURE_IPCAMERA_ONVIF_USERNAME=xxx
AGENT_CAPTURE_IPCAMERA_ONVIF_PASSWORD=xxx
AGENT_HUB_URI=https://api.cloud.kerberos.io
AGENT_HUB_KEY=AKIXxxx4JBEI
AGENT_HUB_PRIVATE_KEY=DIOXxxxAlYpaxxxxXioL0txxx
AGENT_HUB_SITE=681xxxxxxx9bcda5
# By default will send to Hub (=S3), if you wish to send to Kerberos Vault, set to "kstorage"
AGENT_CLOUD=s3
AGENT_KERBEROSVAULT_URI=
AGENT_KERBEROSVAULT_PROVIDER=
AGENT_KERBEROSVAULT_DIRECTORY=
AGENT_KERBEROSVAULT_ACCESS_KEY=
AGENT_KERBEROSVAULT_SECRET_KEY=
AGENT_KERBEROSVAULT_MAX_RETRIES=10
AGENT_KERBEROSVAULT_TIMEOUT=120
AGENT_KERBEROSVAULT_SECONDARY_URI=
AGENT_KERBEROSVAULT_SECONDARY_PROVIDER=
AGENT_KERBEROSVAULT_SECONDARY_DIRECTORY=
AGENT_KERBEROSVAULT_SECONDARY_ACCESS_KEY=
AGENT_KERBEROSVAULT_SECONDARY_SECRET_KEY=
# Open telemetry tracing endpoint
OTEL_EXPORTER_OTLP_ENDPOINT=

View File

@@ -1,18 +0,0 @@
{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"name": "Launch Package",
"type": "go",
"request": "launch",
"mode": "auto",
"program": "main.go",
"args": ["-action", "run"],
"envFile": "${workspaceFolder}/.env",
"buildFlags": "--tags dynamic",
},
]
}

View File

@@ -14,7 +14,9 @@
"ipcamera": {
"rtsp": "",
"sub_rtsp": "",
"fps": ""
"fps": "",
"base_width": 640,
"base_height": 0
},
"usbcamera": {
"device": ""
@@ -26,6 +28,7 @@
"recording": "true",
"snapshots": "true",
"liveview": "true",
"liveview_chunking": "false",
"motion": "true",
"postrecording": 20,
"prerecording": 10,
@@ -98,18 +101,25 @@
"region": "eu-west-1"
},
"kstorage": {},
"kstorage_secondary": {},
"dropbox": {},
"mqtturi": "tcp://mqtt.kerberos.io:1883",
"mqtt_username": "",
"mqtt_password": "",
"stunuri": "stun:turn.kerberos.io:8443",
"turn_force": "false",
"turnuri": "turn:turn.kerberos.io:8443",
"turn_username": "username1",
"turn_password": "password1",
"heartbeaturi": "",
"hub_encryption": "true",
"hub_uri": "https://api.cloud.kerberos.io",
"hub_key": "",
"hub_private_key": "",
"hub_site": "",
"condition_uri": ""
"condition_uri": "",
"encryption": {},
"signing": {},
"realtimeprocessing": "false",
"realtimeprocessing_topic": ""
}

View File

@@ -1,5 +1,4 @@
// Package docs GENERATED BY SWAG; DO NOT EDIT
// This file was generated by swaggo/swag
// Package docs Code generated by swaggo/swag. DO NOT EDIT
package docs
import "github.com/swaggo/swag"
@@ -29,7 +28,7 @@ const docTemplate = `{
"post": {
"description": "Will return the ONVIF capabilities for the specific camera.",
"tags": [
"camera"
"onvif"
],
"summary": "Will return the ONVIF capabilities for the specific camera.",
"operationId": "camera-onvif-capabilities",
@@ -54,11 +53,74 @@ const docTemplate = `{
}
}
},
"/api/camera/onvif/gotopreset": {
"post": {
"description": "Will activate the desired ONVIF preset.",
"tags": [
"onvif"
],
"summary": "Will activate the desired ONVIF preset.",
"operationId": "camera-onvif-gotopreset",
"parameters": [
{
"description": "OnvifPreset",
"name": "config",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/models.OnvifPreset"
}
}
],
"responses": {
"200": {
"description": "OK",
"schema": {
"$ref": "#/definitions/models.APIResponse"
}
}
}
}
},
"/api/camera/onvif/inputs": {
"post": {
"security": [
{
"Bearer": []
}
],
"description": "Will get the digital inputs from the ONVIF device.",
"tags": [
"onvif"
],
"summary": "Will get the digital inputs from the ONVIF device.",
"operationId": "get-digital-inputs",
"parameters": [
{
"description": "OnvifCredentials",
"name": "config",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/models.OnvifCredentials"
}
}
],
"responses": {
"200": {
"description": "OK",
"schema": {
"$ref": "#/definitions/models.APIResponse"
}
}
}
}
},
"/api/camera/onvif/login": {
"post": {
"description": "Try to login into ONVIF supported camera.",
"tags": [
"camera"
"onvif"
],
"summary": "Try to login into ONVIF supported camera.",
"operationId": "camera-onvif-login",
@@ -83,11 +145,86 @@ const docTemplate = `{
}
}
},
"/api/camera/onvif/outputs": {
"post": {
"security": [
{
"Bearer": []
}
],
"description": "Will get the relay outputs from the ONVIF device.",
"tags": [
"onvif"
],
"summary": "Will get the relay outputs from the ONVIF device.",
"operationId": "get-relay-outputs",
"parameters": [
{
"description": "OnvifCredentials",
"name": "config",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/models.OnvifCredentials"
}
}
],
"responses": {
"200": {
"description": "OK",
"schema": {
"$ref": "#/definitions/models.APIResponse"
}
}
}
}
},
"/api/camera/onvif/outputs/{output}": {
"post": {
"security": [
{
"Bearer": []
}
],
"description": "Will trigger the relay output from the ONVIF device.",
"tags": [
"onvif"
],
"summary": "Will trigger the relay output from the ONVIF device.",
"operationId": "trigger-relay-output",
"parameters": [
{
"description": "OnvifCredentials",
"name": "config",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/models.OnvifCredentials"
}
},
{
"type": "string",
"description": "Output",
"name": "output",
"in": "path",
"required": true
}
],
"responses": {
"200": {
"description": "OK",
"schema": {
"$ref": "#/definitions/models.APIResponse"
}
}
}
}
},
"/api/camera/onvif/pantilt": {
"post": {
"description": "Panning or/and tilting the camera using a direction (x,y).",
"tags": [
"camera"
"onvif"
],
"summary": "Panning or/and tilting the camera.",
"operationId": "camera-onvif-pantilt",
@@ -112,11 +249,74 @@ const docTemplate = `{
}
}
},
"/api/camera/onvif/presets": {
"post": {
"description": "Will return the ONVIF presets for the specific camera.",
"tags": [
"onvif"
],
"summary": "Will return the ONVIF presets for the specific camera.",
"operationId": "camera-onvif-presets",
"parameters": [
{
"description": "OnvifCredentials",
"name": "config",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/models.OnvifCredentials"
}
}
],
"responses": {
"200": {
"description": "OK",
"schema": {
"$ref": "#/definitions/models.APIResponse"
}
}
}
}
},
"/api/camera/onvif/verify": {
"post": {
"security": [
{
"Bearer": []
}
],
"description": "Will verify the ONVIF connectivity.",
"tags": [
"onvif"
],
"summary": "Will verify the ONVIF connectivity.",
"operationId": "verify-onvif",
"parameters": [
{
"description": "OnvifCredentials",
"name": "config",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/models.OnvifCredentials"
}
}
],
"responses": {
"200": {
"description": "OK",
"schema": {
"$ref": "#/definitions/models.APIResponse"
}
}
}
}
},
"/api/camera/onvif/zoom": {
"post": {
"description": "Zooming in or out the camera.",
"tags": [
"camera"
"onvif"
],
"summary": "Zooming in or out the camera.",
"operationId": "camera-onvif-zoom",
@@ -141,6 +341,90 @@ const docTemplate = `{
}
}
},
"/api/camera/record": {
"post": {
"description": "Make a recording.",
"tags": [
"camera"
],
"summary": "Make a recording.",
"operationId": "camera-record",
"responses": {
"200": {
"description": "OK",
"schema": {
"$ref": "#/definitions/models.APIResponse"
}
}
}
}
},
"/api/camera/restart": {
"post": {
"description": "Restart the agent.",
"tags": [
"camera"
],
"summary": "Restart the agent.",
"operationId": "camera-restart",
"responses": {
"200": {
"description": "OK",
"schema": {
"$ref": "#/definitions/models.APIResponse"
}
}
}
}
},
"/api/camera/snapshot/base64": {
"get": {
"description": "Get a snapshot from the camera in base64.",
"tags": [
"camera"
],
"summary": "Get a snapshot from the camera in base64.",
"operationId": "snapshot-base64",
"responses": {
"200": {
"description": "OK"
}
}
}
},
"/api/camera/snapshot/jpeg": {
"get": {
"description": "Get a snapshot from the camera in jpeg format.",
"tags": [
"camera"
],
"summary": "Get a snapshot from the camera in jpeg format.",
"operationId": "snapshot-jpeg",
"responses": {
"200": {
"description": "OK"
}
}
}
},
"/api/camera/stop": {
"post": {
"description": "Stop the agent.",
"tags": [
"camera"
],
"summary": "Stop the agent.",
"operationId": "camera-stop",
"responses": {
"200": {
"description": "OK",
"schema": {
"$ref": "#/definitions/models.APIResponse"
}
}
}
}
},
"/api/camera/verify/{streamType}": {
"post": {
"description": "This method will validate a specific profile connection from an RTSP camera, and try to get the codec.",
@@ -181,6 +465,75 @@ const docTemplate = `{
}
}
},
"/api/config": {
"get": {
"description": "Get the current configuration.",
"tags": [
"config"
],
"summary": "Get the current configuration.",
"operationId": "config",
"responses": {
"200": {
"description": "OK"
}
}
},
"post": {
"description": "Update the current configuration.",
"tags": [
"config"
],
"summary": "Update the current configuration.",
"operationId": "config",
"parameters": [
{
"description": "Configuration",
"name": "config",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/models.Config"
}
}
],
"responses": {
"200": {
"description": "OK"
}
}
}
},
"/api/dashboard": {
"get": {
"description": "Get all information showed on the dashboard.",
"tags": [
"general"
],
"summary": "Get all information showed on the dashboard.",
"operationId": "dashboard",
"responses": {
"200": {
"description": "OK"
}
}
}
},
"/api/days": {
"get": {
"description": "Get all days stored in the recordings directory.",
"tags": [
"general"
],
"summary": "Get all days stored in the recordings directory.",
"operationId": "days",
"responses": {
"200": {
"description": "OK"
}
}
}
},
"/api/hub/verify": {
"post": {
"security": [
@@ -190,7 +543,7 @@ const docTemplate = `{
],
"description": "Will verify the hub connectivity.",
"tags": [
"config"
"persistence"
],
"summary": "Will verify the hub connectivity.",
"operationId": "verify-hub",
@@ -215,6 +568,32 @@ const docTemplate = `{
}
}
},
"/api/latest-events": {
"post": {
"description": "Get the latest recordings (events) from the recordings directory.",
"tags": [
"general"
],
"summary": "Get the latest recordings (events) from the recordings directory.",
"operationId": "latest-events",
"parameters": [
{
"description": "Event filter",
"name": "eventFilter",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/models.EventFilter"
}
}
],
"responses": {
"200": {
"description": "OK"
}
}
}
},
"/api/login": {
"post": {
"description": "Get Authorization token.",
@@ -244,40 +623,6 @@ const docTemplate = `{
}
}
},
"/api/onvif/verify": {
"post": {
"security": [
{
"Bearer": []
}
],
"description": "Will verify the ONVIF connectivity.",
"tags": [
"config"
],
"summary": "Will verify the ONVIF connectivity.",
"operationId": "verify-onvif",
"parameters": [
{
"description": "Camera Config",
"name": "cameraConfig",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/models.IPCamera"
}
}
],
"responses": {
"200": {
"description": "OK",
"schema": {
"$ref": "#/definitions/models.APIResponse"
}
}
}
}
},
"/api/persistence/verify": {
"post": {
"security": [
@@ -287,7 +632,7 @@ const docTemplate = `{
],
"description": "Will verify the persistence.",
"tags": [
"config"
"persistence"
],
"summary": "Will verify the persistence.",
"operationId": "verify-persistence",
@@ -317,8 +662,15 @@ const docTemplate = `{
"models.APIResponse": {
"type": "object",
"properties": {
"can_pan_tilt": {
"type": "boolean"
},
"can_zoom": {
"type": "boolean"
},
"data": {},
"message": {}
"message": {},
"ptz_functions": {}
}
},
"models.Authentication": {
@@ -440,6 +792,9 @@ const docTemplate = `{
"dropbox": {
"$ref": "#/definitions/models.Dropbox"
},
"encryption": {
"$ref": "#/definitions/models.Encryption"
},
"friendly_name": {
"type": "string"
},
@@ -447,6 +802,9 @@ const docTemplate = `{
"description": "obsolete",
"type": "string"
},
"hub_encryption": {
"type": "string"
},
"hub_key": {
"type": "string"
},
@@ -483,6 +841,12 @@ const docTemplate = `{
"offline": {
"type": "string"
},
"realtimeprocessing": {
"type": "string"
},
"realtimeprocessing_topic": {
"type": "string"
},
"region": {
"$ref": "#/definitions/models.Region"
},
@@ -507,6 +871,9 @@ const docTemplate = `{
"timezone": {
"type": "string"
},
"turn_force": {
"type": "string"
},
"turn_password": {
"type": "string"
},
@@ -543,12 +910,49 @@ const docTemplate = `{
}
}
},
"models.Encryption": {
"type": "object",
"properties": {
"enabled": {
"type": "string"
},
"fingerprint": {
"type": "string"
},
"private_key": {
"type": "string"
},
"recordings": {
"type": "string"
},
"symmetric_key": {
"type": "string"
}
}
},
"models.EventFilter": {
"type": "object",
"properties": {
"number_of_elements": {
"type": "integer"
},
"timestamp_offset_end": {
"type": "integer"
},
"timestamp_offset_start": {
"type": "integer"
}
}
},
"models.IPCamera": {
"type": "object",
"properties": {
"fps": {
"type": "string"
},
"height": {
"type": "integer"
},
"onvif": {
"type": "string"
},
@@ -564,8 +968,20 @@ const docTemplate = `{
"rtsp": {
"type": "string"
},
"sub_fps": {
"type": "string"
},
"sub_height": {
"type": "integer"
},
"sub_rtsp": {
"type": "string"
},
"sub_width": {
"type": "integer"
},
"width": {
"type": "integer"
}
}
},
@@ -621,6 +1037,17 @@ const docTemplate = `{
}
}
},
"models.OnvifPreset": {
"type": "object",
"properties": {
"onvif_credentials": {
"$ref": "#/definitions/models.OnvifCredentials"
},
"preset": {
"type": "string"
}
}
},
"models.OnvifZoom": {
"type": "object",
"properties": {
@@ -759,6 +1186,8 @@ var SwaggerInfo = &swag.Spec{
Description: "This is the API for using and configure Kerberos Agent.",
InfoInstanceName: "swagger",
SwaggerTemplate: docTemplate,
LeftDelim: "{{",
RightDelim: "}}",
}
func init() {


@@ -21,7 +21,7 @@
"post": {
"description": "Will return the ONVIF capabilities for the specific camera.",
"tags": [
"camera"
"onvif"
],
"summary": "Will return the ONVIF capabilities for the specific camera.",
"operationId": "camera-onvif-capabilities",
@@ -46,11 +46,74 @@
}
}
},
"/api/camera/onvif/gotopreset": {
"post": {
"description": "Will activate the desired ONVIF preset.",
"tags": [
"onvif"
],
"summary": "Will activate the desired ONVIF preset.",
"operationId": "camera-onvif-gotopreset",
"parameters": [
{
"description": "OnvifPreset",
"name": "config",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/models.OnvifPreset"
}
}
],
"responses": {
"200": {
"description": "OK",
"schema": {
"$ref": "#/definitions/models.APIResponse"
}
}
}
}
},
"/api/camera/onvif/inputs": {
"post": {
"security": [
{
"Bearer": []
}
],
"description": "Will get the digital inputs from the ONVIF device.",
"tags": [
"onvif"
],
"summary": "Will get the digital inputs from the ONVIF device.",
"operationId": "get-digital-inputs",
"parameters": [
{
"description": "OnvifCredentials",
"name": "config",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/models.OnvifCredentials"
}
}
],
"responses": {
"200": {
"description": "OK",
"schema": {
"$ref": "#/definitions/models.APIResponse"
}
}
}
}
},
"/api/camera/onvif/login": {
"post": {
"description": "Try to login into ONVIF supported camera.",
"tags": [
"camera"
"onvif"
],
"summary": "Try to login into ONVIF supported camera.",
"operationId": "camera-onvif-login",
@@ -75,11 +138,86 @@
}
}
},
"/api/camera/onvif/outputs": {
"post": {
"security": [
{
"Bearer": []
}
],
"description": "Will get the relay outputs from the ONVIF device.",
"tags": [
"onvif"
],
"summary": "Will get the relay outputs from the ONVIF device.",
"operationId": "get-relay-outputs",
"parameters": [
{
"description": "OnvifCredentials",
"name": "config",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/models.OnvifCredentials"
}
}
],
"responses": {
"200": {
"description": "OK",
"schema": {
"$ref": "#/definitions/models.APIResponse"
}
}
}
}
},
"/api/camera/onvif/outputs/{output}": {
"post": {
"security": [
{
"Bearer": []
}
],
"description": "Will trigger the relay output from the ONVIF device.",
"tags": [
"onvif"
],
"summary": "Will trigger the relay output from the ONVIF device.",
"operationId": "trigger-relay-output",
"parameters": [
{
"description": "OnvifCredentials",
"name": "config",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/models.OnvifCredentials"
}
},
{
"type": "string",
"description": "Output",
"name": "output",
"in": "path",
"required": true
}
],
"responses": {
"200": {
"description": "OK",
"schema": {
"$ref": "#/definitions/models.APIResponse"
}
}
}
}
},
"/api/camera/onvif/pantilt": {
"post": {
"description": "Panning or/and tilting the camera using a direction (x,y).",
"tags": [
"camera"
"onvif"
],
"summary": "Panning or/and tilting the camera.",
"operationId": "camera-onvif-pantilt",
@@ -104,11 +242,74 @@
}
}
},
"/api/camera/onvif/presets": {
"post": {
"description": "Will return the ONVIF presets for the specific camera.",
"tags": [
"onvif"
],
"summary": "Will return the ONVIF presets for the specific camera.",
"operationId": "camera-onvif-presets",
"parameters": [
{
"description": "OnvifCredentials",
"name": "config",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/models.OnvifCredentials"
}
}
],
"responses": {
"200": {
"description": "OK",
"schema": {
"$ref": "#/definitions/models.APIResponse"
}
}
}
}
},
"/api/camera/onvif/verify": {
"post": {
"security": [
{
"Bearer": []
}
],
"description": "Will verify the ONVIF connectivity.",
"tags": [
"onvif"
],
"summary": "Will verify the ONVIF connectivity.",
"operationId": "verify-onvif",
"parameters": [
{
"description": "OnvifCredentials",
"name": "config",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/models.OnvifCredentials"
}
}
],
"responses": {
"200": {
"description": "OK",
"schema": {
"$ref": "#/definitions/models.APIResponse"
}
}
}
}
},
"/api/camera/onvif/zoom": {
"post": {
"description": "Zooming in or out the camera.",
"tags": [
"camera"
"onvif"
],
"summary": "Zooming in or out the camera.",
"operationId": "camera-onvif-zoom",
@@ -133,6 +334,90 @@
}
}
},
"/api/camera/record": {
"post": {
"description": "Make a recording.",
"tags": [
"camera"
],
"summary": "Make a recording.",
"operationId": "camera-record",
"responses": {
"200": {
"description": "OK",
"schema": {
"$ref": "#/definitions/models.APIResponse"
}
}
}
}
},
"/api/camera/restart": {
"post": {
"description": "Restart the agent.",
"tags": [
"camera"
],
"summary": "Restart the agent.",
"operationId": "camera-restart",
"responses": {
"200": {
"description": "OK",
"schema": {
"$ref": "#/definitions/models.APIResponse"
}
}
}
}
},
"/api/camera/snapshot/base64": {
"get": {
"description": "Get a snapshot from the camera in base64.",
"tags": [
"camera"
],
"summary": "Get a snapshot from the camera in base64.",
"operationId": "snapshot-base64",
"responses": {
"200": {
"description": "OK"
}
}
}
},
"/api/camera/snapshot/jpeg": {
"get": {
"description": "Get a snapshot from the camera in jpeg format.",
"tags": [
"camera"
],
"summary": "Get a snapshot from the camera in jpeg format.",
"operationId": "snapshot-jpeg",
"responses": {
"200": {
"description": "OK"
}
}
}
},
"/api/camera/stop": {
"post": {
"description": "Stop the agent.",
"tags": [
"camera"
],
"summary": "Stop the agent.",
"operationId": "camera-stop",
"responses": {
"200": {
"description": "OK",
"schema": {
"$ref": "#/definitions/models.APIResponse"
}
}
}
}
},
"/api/camera/verify/{streamType}": {
"post": {
"description": "This method will validate a specific profile connection from an RTSP camera, and try to get the codec.",
@@ -173,6 +458,75 @@
}
}
},
"/api/config": {
"get": {
"description": "Get the current configuration.",
"tags": [
"config"
],
"summary": "Get the current configuration.",
"operationId": "config",
"responses": {
"200": {
"description": "OK"
}
}
},
"post": {
"description": "Update the current configuration.",
"tags": [
"config"
],
"summary": "Update the current configuration.",
"operationId": "config",
"parameters": [
{
"description": "Configuration",
"name": "config",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/models.Config"
}
}
],
"responses": {
"200": {
"description": "OK"
}
}
}
},
"/api/dashboard": {
"get": {
"description": "Get all information showed on the dashboard.",
"tags": [
"general"
],
"summary": "Get all information showed on the dashboard.",
"operationId": "dashboard",
"responses": {
"200": {
"description": "OK"
}
}
}
},
"/api/days": {
"get": {
"description": "Get all days stored in the recordings directory.",
"tags": [
"general"
],
"summary": "Get all days stored in the recordings directory.",
"operationId": "days",
"responses": {
"200": {
"description": "OK"
}
}
}
},
"/api/hub/verify": {
"post": {
"security": [
@@ -182,7 +536,7 @@
],
"description": "Will verify the hub connectivity.",
"tags": [
"config"
"persistence"
],
"summary": "Will verify the hub connectivity.",
"operationId": "verify-hub",
@@ -207,6 +561,32 @@
}
}
},
"/api/latest-events": {
"post": {
"description": "Get the latest recordings (events) from the recordings directory.",
"tags": [
"general"
],
"summary": "Get the latest recordings (events) from the recordings directory.",
"operationId": "latest-events",
"parameters": [
{
"description": "Event filter",
"name": "eventFilter",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/models.EventFilter"
}
}
],
"responses": {
"200": {
"description": "OK"
}
}
}
},
"/api/login": {
"post": {
"description": "Get Authorization token.",
@@ -236,40 +616,6 @@
}
}
},
"/api/onvif/verify": {
"post": {
"security": [
{
"Bearer": []
}
],
"description": "Will verify the ONVIF connectivity.",
"tags": [
"config"
],
"summary": "Will verify the ONVIF connectivity.",
"operationId": "verify-onvif",
"parameters": [
{
"description": "Camera Config",
"name": "cameraConfig",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/models.IPCamera"
}
}
],
"responses": {
"200": {
"description": "OK",
"schema": {
"$ref": "#/definitions/models.APIResponse"
}
}
}
}
},
"/api/persistence/verify": {
"post": {
"security": [
@@ -279,7 +625,7 @@
],
"description": "Will verify the persistence.",
"tags": [
"config"
"persistence"
],
"summary": "Will verify the persistence.",
"operationId": "verify-persistence",
@@ -309,8 +655,15 @@
"models.APIResponse": {
"type": "object",
"properties": {
"can_pan_tilt": {
"type": "boolean"
},
"can_zoom": {
"type": "boolean"
},
"data": {},
"message": {}
"message": {},
"ptz_functions": {}
}
},
"models.Authentication": {
@@ -432,6 +785,9 @@
"dropbox": {
"$ref": "#/definitions/models.Dropbox"
},
"encryption": {
"$ref": "#/definitions/models.Encryption"
},
"friendly_name": {
"type": "string"
},
@@ -439,6 +795,9 @@
"description": "obsolete",
"type": "string"
},
"hub_encryption": {
"type": "string"
},
"hub_key": {
"type": "string"
},
@@ -475,6 +834,12 @@
"offline": {
"type": "string"
},
"realtimeprocessing": {
"type": "string"
},
"realtimeprocessing_topic": {
"type": "string"
},
"region": {
"$ref": "#/definitions/models.Region"
},
@@ -499,6 +864,9 @@
"timezone": {
"type": "string"
},
"turn_force": {
"type": "string"
},
"turn_password": {
"type": "string"
},
@@ -535,12 +903,49 @@
}
}
},
"models.Encryption": {
"type": "object",
"properties": {
"enabled": {
"type": "string"
},
"fingerprint": {
"type": "string"
},
"private_key": {
"type": "string"
},
"recordings": {
"type": "string"
},
"symmetric_key": {
"type": "string"
}
}
},
"models.EventFilter": {
"type": "object",
"properties": {
"number_of_elements": {
"type": "integer"
},
"timestamp_offset_end": {
"type": "integer"
},
"timestamp_offset_start": {
"type": "integer"
}
}
},
"models.IPCamera": {
"type": "object",
"properties": {
"fps": {
"type": "string"
},
"height": {
"type": "integer"
},
"onvif": {
"type": "string"
},
@@ -556,8 +961,20 @@
"rtsp": {
"type": "string"
},
"sub_fps": {
"type": "string"
},
"sub_height": {
"type": "integer"
},
"sub_rtsp": {
"type": "string"
},
"sub_width": {
"type": "integer"
},
"width": {
"type": "integer"
}
}
},
@@ -613,6 +1030,17 @@
}
}
},
"models.OnvifPreset": {
"type": "object",
"properties": {
"onvif_credentials": {
"$ref": "#/definitions/models.OnvifCredentials"
},
"preset": {
"type": "string"
}
}
},
"models.OnvifZoom": {
"type": "object",
"properties": {


@@ -2,8 +2,13 @@ basePath: /
definitions:
models.APIResponse:
properties:
can_pan_tilt:
type: boolean
can_zoom:
type: boolean
data: {}
message: {}
ptz_functions: {}
type: object
models.Authentication:
properties:
@@ -83,11 +88,15 @@ definitions:
type: string
dropbox:
$ref: '#/definitions/models.Dropbox'
encryption:
$ref: '#/definitions/models.Encryption'
friendly_name:
type: string
heartbeaturi:
description: obsolete
type: string
hub_encryption:
type: string
hub_key:
type: string
hub_private_key:
@@ -112,6 +121,10 @@ definitions:
type: string
offline:
type: string
realtimeprocessing:
type: string
realtimeprocessing_topic:
type: string
region:
$ref: '#/definitions/models.Region'
remove_after_upload:
@@ -128,6 +141,8 @@ definitions:
type: array
timezone:
type: string
turn_force:
type: string
turn_password:
type: string
turn_username:
@@ -151,10 +166,34 @@ definitions:
directory:
type: string
type: object
models.Encryption:
properties:
enabled:
type: string
fingerprint:
type: string
private_key:
type: string
recordings:
type: string
symmetric_key:
type: string
type: object
models.EventFilter:
properties:
number_of_elements:
type: integer
timestamp_offset_end:
type: integer
timestamp_offset_start:
type: integer
type: object
models.IPCamera:
properties:
fps:
type: string
height:
type: integer
onvif:
type: string
onvif_password:
@@ -165,8 +204,16 @@ definitions:
type: string
rtsp:
type: string
sub_fps:
type: string
sub_height:
type: integer
sub_rtsp:
type: string
sub_width:
type: integer
width:
type: integer
type: object
models.KStorage:
properties:
@@ -202,6 +249,13 @@ definitions:
tilt:
type: number
type: object
models.OnvifPreset:
properties:
onvif_credentials:
$ref: '#/definitions/models.OnvifCredentials'
preset:
type: string
type: object
models.OnvifZoom:
properties:
onvif_credentials:
@@ -309,7 +363,47 @@ paths:
$ref: '#/definitions/models.APIResponse'
summary: Will return the ONVIF capabilities for the specific camera.
tags:
- camera
- onvif
/api/camera/onvif/gotopreset:
post:
description: Will activate the desired ONVIF preset.
operationId: camera-onvif-gotopreset
parameters:
- description: OnvifPreset
in: body
name: config
required: true
schema:
$ref: '#/definitions/models.OnvifPreset'
responses:
"200":
description: OK
schema:
$ref: '#/definitions/models.APIResponse'
summary: Will activate the desired ONVIF preset.
tags:
- onvif
/api/camera/onvif/inputs:
post:
description: Will get the digital inputs from the ONVIF device.
operationId: get-digital-inputs
parameters:
- description: OnvifCredentials
in: body
name: config
required: true
schema:
$ref: '#/definitions/models.OnvifCredentials'
responses:
"200":
description: OK
schema:
$ref: '#/definitions/models.APIResponse'
security:
- Bearer: []
summary: Will get the digital inputs from the ONVIF device.
tags:
- onvif
/api/camera/onvif/login:
post:
description: Try to login into ONVIF supported camera.
@@ -328,7 +422,54 @@ paths:
$ref: '#/definitions/models.APIResponse'
summary: Try to login into ONVIF supported camera.
tags:
- camera
- onvif
/api/camera/onvif/outputs:
post:
description: Will get the relay outputs from the ONVIF device.
operationId: get-relay-outputs
parameters:
- description: OnvifCredentials
in: body
name: config
required: true
schema:
$ref: '#/definitions/models.OnvifCredentials'
responses:
"200":
description: OK
schema:
$ref: '#/definitions/models.APIResponse'
security:
- Bearer: []
summary: Will get the relay outputs from the ONVIF device.
tags:
- onvif
/api/camera/onvif/outputs/{output}:
post:
description: Will trigger the relay output from the ONVIF device.
operationId: trigger-relay-output
parameters:
- description: OnvifCredentials
in: body
name: config
required: true
schema:
$ref: '#/definitions/models.OnvifCredentials'
- description: Output
in: path
name: output
required: true
type: string
responses:
"200":
description: OK
schema:
$ref: '#/definitions/models.APIResponse'
security:
- Bearer: []
summary: Will trigger the relay output from the ONVIF device.
tags:
- onvif
/api/camera/onvif/pantilt:
post:
description: Panning or/and tilting the camera using a direction (x,y).
@@ -347,7 +488,47 @@ paths:
$ref: '#/definitions/models.APIResponse'
summary: Panning or/and tilting the camera.
tags:
- camera
- onvif
/api/camera/onvif/presets:
post:
description: Will return the ONVIF presets for the specific camera.
operationId: camera-onvif-presets
parameters:
- description: OnvifCredentials
in: body
name: config
required: true
schema:
$ref: '#/definitions/models.OnvifCredentials'
responses:
"200":
description: OK
schema:
$ref: '#/definitions/models.APIResponse'
summary: Will return the ONVIF presets for the specific camera.
tags:
- onvif
/api/camera/onvif/verify:
post:
description: Will verify the ONVIF connectivity.
operationId: verify-onvif
parameters:
- description: OnvifCredentials
in: body
name: config
required: true
schema:
$ref: '#/definitions/models.OnvifCredentials'
responses:
"200":
description: OK
schema:
$ref: '#/definitions/models.APIResponse'
security:
- Bearer: []
summary: Will verify the ONVIF connectivity.
tags:
- onvif
/api/camera/onvif/zoom:
post:
description: Zooming in or out the camera.
@@ -366,6 +547,62 @@ paths:
$ref: '#/definitions/models.APIResponse'
summary: Zooming in or out the camera.
tags:
- onvif
/api/camera/record:
post:
description: Make a recording.
operationId: camera-record
responses:
"200":
description: OK
schema:
$ref: '#/definitions/models.APIResponse'
summary: Make a recording.
tags:
- camera
/api/camera/restart:
post:
description: Restart the agent.
operationId: camera-restart
responses:
"200":
description: OK
schema:
$ref: '#/definitions/models.APIResponse'
summary: Restart the agent.
tags:
- camera
/api/camera/snapshot/base64:
get:
description: Get a snapshot from the camera in base64.
operationId: snapshot-base64
responses:
"200":
description: OK
summary: Get a snapshot from the camera in base64.
tags:
- camera
/api/camera/snapshot/jpeg:
get:
description: Get a snapshot from the camera in jpeg format.
operationId: snapshot-jpeg
responses:
"200":
description: OK
summary: Get a snapshot from the camera in jpeg format.
tags:
- camera
/api/camera/stop:
post:
description: Stop the agent.
operationId: camera-stop
responses:
"200":
description: OK
schema:
$ref: '#/definitions/models.APIResponse'
summary: Stop the agent.
tags:
- camera
/api/camera/verify/{streamType}:
post:
@@ -395,6 +632,52 @@ paths:
summary: Validate a specific RTSP profile camera connection.
tags:
- camera
/api/config:
get:
description: Get the current configuration.
operationId: config
responses:
"200":
description: OK
summary: Get the current configuration.
tags:
- config
post:
description: Update the current configuration.
operationId: config
parameters:
- description: Configuration
in: body
name: config
required: true
schema:
$ref: '#/definitions/models.Config'
responses:
"200":
description: OK
summary: Update the current configuration.
tags:
- config
/api/dashboard:
get:
description: Get all information showed on the dashboard.
operationId: dashboard
responses:
"200":
description: OK
summary: Get all information showed on the dashboard.
tags:
- general
/api/days:
get:
description: Get all days stored in the recordings directory.
operationId: days
responses:
"200":
description: OK
summary: Get all days stored in the recordings directory.
tags:
- general
/api/hub/verify:
post:
description: Will verify the hub connectivity.
@@ -415,7 +698,24 @@ paths:
- Bearer: []
summary: Will verify the hub connectivity.
tags:
- config
- persistence
/api/latest-events:
post:
description: Get the latest recordings (events) from the recordings directory.
operationId: latest-events
parameters:
- description: Event filter
in: body
name: eventFilter
required: true
schema:
$ref: '#/definitions/models.EventFilter'
responses:
"200":
description: OK
summary: Get the latest recordings (events) from the recordings directory.
tags:
- general
/api/login:
post:
description: Get Authorization token.
@@ -435,27 +735,6 @@ paths:
summary: Get Authorization token.
tags:
- authentication
/api/onvif/verify:
post:
description: Will verify the ONVIF connectivity.
operationId: verify-onvif
parameters:
- description: Camera Config
in: body
name: cameraConfig
required: true
schema:
$ref: '#/definitions/models.IPCamera'
responses:
"200":
description: OK
schema:
$ref: '#/definitions/models.APIResponse'
security:
- Bearer: []
summary: Will verify the ONVIF connectivity.
tags:
- config
/api/persistence/verify:
post:
description: Will verify the persistence.
@@ -476,7 +755,7 @@ paths:
- Bearer: []
summary: Will verify the persistence.
tags:
- config
- persistence
securityDefinitions:
Bearer:
in: header


@@ -1,145 +1,140 @@
module github.com/kerberos-io/agent/machinery
go 1.19
go 1.24.2
// replace github.com/kerberos-io/joy4 v1.0.57 => ../../../../github.com/kerberos-io/joy4
// replace github.com/kerberos-io/onvif v0.0.5 => ../../../../github.com/kerberos-io/onvif
replace google.golang.org/genproto => google.golang.org/genproto v0.0.0-20250519155744-55703ea1f237
require (
github.com/Eyevinn/mp4ff v0.48.0
github.com/InVisionApp/conjungo v1.1.0
github.com/appleboy/gin-jwt/v2 v2.9.1
github.com/asticode/go-astits v1.11.0
github.com/bluenviron/gortsplib/v3 v3.6.1
github.com/bluenviron/mediacommon v0.5.0
github.com/appleboy/gin-jwt/v2 v2.10.3
github.com/bluenviron/gortsplib/v4 v4.14.1
github.com/bluenviron/mediacommon v1.14.0
github.com/cedricve/go-onvif v0.0.0-20200222191200-567e8ce298f6
github.com/deepch/vdk v0.0.19
github.com/dromara/carbon/v2 v2.6.8
github.com/dropbox/dropbox-sdk-go-unofficial/v6 v6.0.5
github.com/eclipse/paho.mqtt.golang v1.4.2
github.com/elastic/go-sysinfo v1.9.0
github.com/gin-contrib/cors v1.4.0
github.com/gin-contrib/pprof v1.4.0
github.com/gin-gonic/contrib v0.0.0-20221130124618-7e01895a63f2
github.com/gin-gonic/gin v1.8.2
github.com/golang-jwt/jwt/v4 v4.4.3
github.com/golang-module/carbon/v2 v2.2.3
github.com/gorilla/websocket v1.5.0
github.com/eclipse/paho.mqtt.golang v1.5.0
github.com/elastic/go-sysinfo v1.15.3
github.com/gin-contrib/cors v1.7.5
github.com/gin-contrib/pprof v1.5.3
github.com/gin-gonic/contrib v0.0.0-20250521004450-2b1292699c15
github.com/gin-gonic/gin v1.10.1
github.com/gofrs/uuid v4.4.0+incompatible
github.com/golang-jwt/jwt/v4 v4.5.2
github.com/gorilla/websocket v1.5.3
github.com/kellydunn/golang-geo v0.7.0
github.com/kerberos-io/joy4 v1.0.58
github.com/kerberos-io/onvif v0.0.6
github.com/kerberos-io/joy4 v1.0.64
github.com/kerberos-io/onvif v1.0.0
github.com/minio/minio-go/v6 v6.0.57
github.com/nsmith5/mjpeg v0.0.0-20200913181537-54b8ada0e53e
github.com/nfnt/resize v0.0.0-20180221191011-83c6a9932646
github.com/op/go-logging v0.0.0-20160315200505-970db520ece7
github.com/pion/rtp v1.7.13
github.com/pion/webrtc/v3 v3.1.50
github.com/sirupsen/logrus v1.9.0
github.com/swaggo/files v1.0.0
github.com/swaggo/gin-swagger v1.5.3
github.com/swaggo/swag v1.8.9
github.com/pion/interceptor v0.1.40
github.com/pion/rtp v1.8.19
github.com/pion/webrtc/v4 v4.1.2
github.com/sirupsen/logrus v1.9.3
github.com/swaggo/files v1.0.1
github.com/swaggo/gin-swagger v1.6.0
github.com/swaggo/swag v1.16.4
github.com/tevino/abool v1.2.0
go.mongodb.org/mongo-driver v1.7.5
gopkg.in/DataDog/dd-trace-go.v1 v1.46.0
gopkg.in/natefinch/lumberjack.v2 v2.0.0
github.com/zaf/g711 v1.4.0
go.mongodb.org/mongo-driver v1.17.3
go.opentelemetry.io/otel v1.36.0
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.36.0
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.36.0
go.opentelemetry.io/otel/sdk v1.36.0
go.opentelemetry.io/otel/trace v1.36.0
gopkg.in/natefinch/lumberjack.v2 v2.2.1
)
require (
github.com/DataDog/datadog-agent/pkg/obfuscate v0.0.0-20211129110424-6491aa3bf583 // indirect
github.com/DataDog/datadog-agent/pkg/remoteconfig/state v0.42.0-rc.1 // indirect
github.com/DataDog/datadog-go v4.8.2+incompatible // indirect
github.com/DataDog/datadog-go/v5 v5.0.2 // indirect
github.com/DataDog/go-tuf v0.3.0--fix-localmeta-fork // indirect
github.com/DataDog/gostackparse v0.5.0 // indirect
github.com/DataDog/sketches-go v1.2.1 // indirect
github.com/KyleBanks/depth v1.2.1 // indirect
github.com/Microsoft/go-winio v0.5.1 // indirect
github.com/PuerkitoBio/purell v1.1.1 // indirect
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578 // indirect
github.com/asticode/go-astikit v0.30.0 // indirect
github.com/beevik/etree v1.1.0 // indirect
github.com/cespare/xxhash/v2 v2.1.2 // indirect
github.com/beevik/etree v1.2.0 // indirect
github.com/bluenviron/mediacommon/v2 v2.2.0 // indirect
github.com/bytedance/sonic v1.13.2 // indirect
github.com/bytedance/sonic/loader v0.2.4 // indirect
github.com/cenkalti/backoff/v5 v5.0.2 // indirect
github.com/clbanning/mxj v1.8.4 // indirect
github.com/dgraph-io/ristretto v0.1.0 // indirect
github.com/dustin/go-humanize v1.0.0 // indirect
github.com/elastic/go-windows v1.0.0 // indirect
github.com/clbanning/mxj/v2 v2.7.0 // indirect
github.com/cloudwego/base64x v0.1.5 // indirect
github.com/elastic/go-windows v1.0.2 // indirect
github.com/elgs/gostrgen v0.0.0-20161222160715-9d61ae07eeae // indirect
github.com/erikstmartin/go-testdb v0.0.0-20160219214506-8d10e4a1bae5 // indirect
github.com/gin-contrib/sse v0.1.0 // indirect
github.com/gabriel-vasile/mimetype v1.4.8 // indirect
github.com/gin-contrib/sse v1.0.0 // indirect
github.com/go-logr/logr v1.4.2 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-openapi/jsonpointer v0.19.5 // indirect
github.com/go-openapi/jsonreference v0.19.6 // indirect
github.com/go-openapi/spec v0.20.4 // indirect
github.com/go-openapi/swag v0.19.15 // indirect
github.com/go-playground/locales v0.14.0 // indirect
github.com/go-playground/universal-translator v0.18.0 // indirect
github.com/go-playground/validator/v10 v10.11.1 // indirect
github.com/go-stack/stack v1.8.0 // indirect
github.com/goccy/go-json v0.10.0 // indirect
github.com/gofrs/uuid v3.2.0+incompatible // indirect
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b // indirect
github.com/golang/protobuf v1.5.3 // indirect
github.com/go-playground/locales v0.14.1 // indirect
github.com/go-playground/universal-translator v0.18.1 // indirect
github.com/go-playground/validator/v10 v10.26.0 // indirect
github.com/goccy/go-json v0.10.5 // indirect
github.com/golang/snappy v0.0.4 // indirect
github.com/google/go-cmp v0.5.9 // indirect
github.com/google/pprof v0.0.0-20210423192551-a2663126120b // indirect
github.com/google/uuid v1.3.0 // indirect
github.com/joeshaw/multierror v0.0.0-20140124173710-69b34d4ec901 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.3 // indirect
github.com/icholy/digest v0.1.23 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/compress v1.15.0 // indirect
github.com/juju/errors v1.0.0 // indirect
github.com/klauspost/compress v1.16.7 // indirect
github.com/klauspost/cpuid v1.2.3 // indirect
github.com/klauspost/cpuid/v2 v2.2.10 // indirect
github.com/kylelemons/go-gypsy v1.0.0 // indirect
github.com/leodido/go-urn v1.2.1 // indirect
github.com/lib/pq v1.10.7 // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/mattn/go-isatty v0.0.16 // indirect
github.com/leodido/go-urn v1.4.0 // indirect
github.com/lib/pq v1.10.9 // indirect
github.com/mailru/easyjson v0.7.6 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/minio/md5-simd v1.1.0 // indirect
github.com/minio/sha256-simd v0.1.1 // indirect
github.com/mitchellh/go-homedir v1.1.0 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/onsi/gomega v1.27.4 // indirect
github.com/pelletier/go-toml/v2 v2.0.6 // indirect
github.com/philhofer/fwd v1.1.1 // indirect
github.com/pion/datachannel v1.5.5 // indirect
github.com/pion/dtls/v2 v2.1.5 // indirect
github.com/pion/ice/v2 v2.2.12 // indirect
github.com/pion/interceptor v0.1.11 // indirect
github.com/pion/logging v0.2.2 // indirect
github.com/pion/mdns v0.0.5 // indirect
github.com/montanaflynn/stats v0.7.1 // indirect
github.com/nxadm/tail v1.4.11 // indirect
github.com/pelletier/go-toml/v2 v2.2.3 // indirect
github.com/pion/datachannel v1.5.10 // indirect
github.com/pion/dtls/v3 v3.0.6 // indirect
github.com/pion/ice/v4 v4.0.10 // indirect
github.com/pion/logging v0.2.3 // indirect
github.com/pion/mdns/v2 v2.0.7 // indirect
github.com/pion/randutil v0.1.0 // indirect
github.com/pion/rtcp v1.2.10 // indirect
github.com/pion/sctp v1.8.5 // indirect
github.com/pion/sdp/v3 v3.0.6 // indirect
github.com/pion/srtp/v2 v2.0.10 // indirect
github.com/pion/stun v0.3.5 // indirect
github.com/pion/transport v0.14.1 // indirect
github.com/pion/turn/v2 v2.0.8 // indirect
github.com/pion/udp v0.1.1 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/prometheus/procfs v0.8.0 // indirect
github.com/richardartoul/molecule v1.0.1-0.20221107223329-32cfee06a052 // indirect
github.com/secure-systems-lab/go-securesystemslib v0.4.0 // indirect
github.com/spaolacci/murmur3 v1.1.0 // indirect
github.com/tinylib/msgp v1.1.6 // indirect
github.com/ugorji/go/codec v1.2.7 // indirect
github.com/pion/rtcp v1.2.15 // indirect
github.com/pion/sctp v1.8.39 // indirect
github.com/pion/sdp/v3 v3.0.13 // indirect
github.com/pion/srtp/v3 v3.0.5 // indirect
github.com/pion/stun/v3 v3.0.0 // indirect
github.com/pion/transport/v3 v3.0.7 // indirect
github.com/pion/turn/v4 v4.0.0 // indirect
github.com/prometheus/procfs v0.15.1 // indirect
github.com/twitchyliquid64/golang-asm v0.15.1 // indirect
github.com/ugorji/go/codec v1.2.12 // indirect
github.com/wlynxg/anet v0.0.5 // indirect
github.com/xdg-go/pbkdf2 v1.0.0 // indirect
github.com/xdg-go/scram v1.0.2 // indirect
github.com/xdg-go/stringprep v1.0.2 // indirect
github.com/youmark/pkcs8 v0.0.0-20181117223130-1be2e3e5546d // indirect
github.com/xdg-go/scram v1.1.2 // indirect
github.com/xdg-go/stringprep v1.0.4 // indirect
github.com/youmark/pkcs8 v0.0.0-20240726163527-a2c0da244d78 // indirect
github.com/ziutek/mymysql v1.5.4 // indirect
go4.org/intern v0.0.0-20211027215823-ae77deb06f29 // indirect
go4.org/unsafe/assume-no-moving-gc v0.0.0-20220617031537-928513b29760 // indirect
golang.org/x/crypto v0.4.0 // indirect
golang.org/x/net v0.9.0 // indirect
golang.org/x/oauth2 v0.0.0-20201208152858-08078c50e5b5 // indirect
golang.org/x/sync v0.1.0 // indirect
golang.org/x/sys v0.7.0 // indirect
golang.org/x/text v0.9.0 // indirect
golang.org/x/time v0.0.0-20211116232009-f0f3c7e86c11 // indirect
golang.org/x/tools v0.7.0 // indirect
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 // indirect
google.golang.org/appengine v1.6.6 // indirect
google.golang.org/grpc v1.32.0 // indirect
google.golang.org/protobuf v1.28.1 // indirect
go.opentelemetry.io/auto/sdk v1.1.0 // indirect
go.opentelemetry.io/otel/metric v1.36.0 // indirect
go.opentelemetry.io/proto/otlp v1.6.0 // indirect
golang.org/x/arch v0.16.0 // indirect
golang.org/x/crypto v0.38.0 // indirect
golang.org/x/net v0.40.0 // indirect
golang.org/x/oauth2 v0.30.0 // indirect
golang.org/x/sync v0.14.0 // indirect
golang.org/x/sys v0.33.0 // indirect
golang.org/x/text v0.25.0 // indirect
golang.org/x/tools v0.30.0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20250519155744-55703ea1f237 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250519155744-55703ea1f237 // indirect
google.golang.org/grpc v1.72.1 // indirect
google.golang.org/protobuf v1.36.6 // indirect
gopkg.in/ini.v1 v1.42.0 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
howett.net/plist v0.0.0-20181124034731-591f970eefbb // indirect
inet.af/netaddr v0.0.0-20220617031823-097006376321 // indirect
)

File diff suppressed because it is too large


@@ -3,51 +3,70 @@ package main
import (
"context"
"flag"
"fmt"
"os"
"time"
"github.com/kerberos-io/agent/machinery/src/capture"
"github.com/kerberos-io/agent/machinery/src/components"
"github.com/kerberos-io/agent/machinery/src/log"
"github.com/kerberos-io/agent/machinery/src/models"
"github.com/kerberos-io/agent/machinery/src/onvif"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/exporters/otlp/otlptrace"
"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp"
"go.opentelemetry.io/otel/sdk/resource"
"go.opentelemetry.io/otel/sdk/trace"
semconv "go.opentelemetry.io/otel/semconv/v1.4.0"
configService "github.com/kerberos-io/agent/machinery/src/config"
"github.com/kerberos-io/agent/machinery/src/routers"
"github.com/kerberos-io/agent/machinery/src/utils"
"gopkg.in/DataDog/dd-trace-go.v1/ddtrace/tracer"
"gopkg.in/DataDog/dd-trace-go.v1/profiler"
)
var VERSION = "3.0.0"
var VERSION = utils.VERSION
func startTracing(agentKey string, otelEndpoint string) (*trace.TracerProvider, error) {
serviceName := "agent-" + agentKey
headers := map[string]string{
"content-type": "application/json",
}
exporter, err := otlptrace.New(
context.Background(),
otlptracehttp.NewClient(
otlptracehttp.WithEndpoint(otelEndpoint),
otlptracehttp.WithHeaders(headers),
otlptracehttp.WithInsecure(),
),
)
if err != nil {
return nil, fmt.Errorf("creating new exporter: %w", err)
}
tracerprovider := trace.NewTracerProvider(
trace.WithBatcher(
exporter,
trace.WithMaxExportBatchSize(trace.DefaultMaxExportBatchSize),
trace.WithBatchTimeout(trace.DefaultScheduleDelay*time.Millisecond),
trace.WithMaxExportBatchSize(trace.DefaultMaxExportBatchSize),
),
trace.WithResource(
resource.NewWithAttributes(
semconv.SchemaURL,
semconv.ServiceNameKey.String(serviceName),
attribute.String("environment", "develop"),
),
),
)
otel.SetTracerProvider(tracerprovider)
return tracerprovider, nil
}
func main() {
// You might be interested in debugging the agent.
if os.Getenv("DATADOG_AGENT_ENABLED") == "true" {
if os.Getenv("DATADOG_AGENT_K8S_ENABLED") == "true" {
tracer.Start()
defer tracer.Stop()
} else {
service := os.Getenv("DATADOG_AGENT_SERVICE")
environment := os.Getenv("DATADOG_AGENT_ENVIRONMENT")
log.Log.Info("Starting Datadog Agent with service: " + service + " and environment: " + environment)
rules := []tracer.SamplingRule{tracer.RateRule(1)}
tracer.Start(
tracer.WithSamplingRules(rules),
tracer.WithService(service),
tracer.WithEnv(environment),
)
defer tracer.Stop()
err := profiler.Start(
profiler.WithService(service),
profiler.WithEnv(environment),
profiler.WithProfileTypes(
profiler.CPUProfile,
profiler.HeapProfile,
),
)
if err != nil {
log.Log.Fatal(err.Error())
}
defer profiler.Stop()
}
}
// Start the show ;)
// We'll parse the flags (named variables), and start the agent.
@@ -65,20 +84,56 @@ func main() {
flag.StringVar(&timeout, "timeout", "2000", "Number of milliseconds to wait for the ONVIF discovery to complete")
flag.Parse()
// Specify the level of logging: "info", "warning", "debug", "error" or "fatal".
logLevel := os.Getenv("LOG_LEVEL")
if logLevel == "" {
logLevel = "info"
}
// Specify the output formatter of the log: "text" or "json".
logOutput := os.Getenv("LOG_OUTPUT")
if logOutput == "" {
logOutput = "text"
}
// Specify the timezone of the log: "UTC" or "Local".
timezone, _ := time.LoadLocation("CET")
log.Log.Init(configDirectory, timezone)
log.Log.Init(logLevel, logOutput, configDirectory, timezone)
switch action {
case "version":
log.Log.Info("You are currently running Kerberos Agent " + VERSION)
{
log.Log.Info("main.Main(): You are currently running Kerberos Agent " + VERSION)
}
case "discover":
log.Log.Info(timeout)
{
// Parse the timeout (in milliseconds) into a time.Duration
timeout, err := time.ParseDuration(timeout + "ms")
if err != nil {
log.Log.Fatal("main.Main(): could not parse timeout: " + err.Error())
return
}
onvif.Discover(timeout)
}
case "decrypt":
{
log.Log.Info("main.Main(): Decrypting: " + flag.Arg(0) + " with key: " + flag.Arg(1))
symmetricKey := []byte(flag.Arg(1))
if len(symmetricKey) == 0 {
log.Log.Fatal("main.Main(): symmetric key should not be empty")
return
}
if len(symmetricKey) != 32 {
log.Log.Fatal("main.Main(): symmetric key should be 32 bytes")
return
}
utils.Decrypt(flag.Arg(0), symmetricKey)
}
case "run":
{
// Print Kerberos.io ASCII art
// Print Agent ASCII art
utils.PrintASCIIArt()
// Print the environment variables which include "AGENT_" as prefix.
@@ -91,11 +146,28 @@ func main() {
configuration.Name = name
configuration.Port = port
// Open this configuration either from Kerberos Agent or Kerberos Factory.
components.OpenConfig(configDirectory, &configuration)
// Open this configuration either from Agent or Factory.
configService.OpenConfig(configDirectory, &configuration)
// We will override the configuration with the environment variables
components.OverrideWithEnvironmentVariables(&configuration)
configService.OverrideWithEnvironmentVariables(&configuration)
// Start OpenTelemetry tracing
if otelEndpoint := os.Getenv("OTEL_EXPORTER_OTLP_ENDPOINT"); otelEndpoint == "" {
log.Log.Info("main.Main(): No OpenTelemetry endpoint provided, skipping tracing")
} else {
log.Log.Info("main.Main(): Starting OpenTelemetry tracing with endpoint: " + otelEndpoint)
agentKey := configuration.Config.Key
traceProvider, err := startTracing(agentKey, otelEndpoint)
if err != nil {
log.Log.Error("traceprovider: " + err.Error())
}
defer func() {
if err := traceProvider.Shutdown(context.Background()); err != nil {
log.Log.Error("traceprovider: " + err.Error())
}
}()
}
// Printing final configuration
utils.PrintConfiguration(&configuration)
@@ -106,18 +178,18 @@ func main() {
// Set timezone
timezone, _ := time.LoadLocation(configuration.Config.Timezone)
log.Log.Init(configDirectory, timezone)
log.Log.Init(logLevel, logOutput, configDirectory, timezone)
// Check if we have a device Key or not, if not
// we will generate one.
if configuration.Config.Key == "" {
key := utils.RandStringBytesMaskImpr(30)
configuration.Config.Key = key
err := components.StoreConfig(configDirectory, configuration.Config)
err := configService.StoreConfig(configDirectory, configuration.Config)
if err == nil {
log.Log.Info("Main: updated unique key for agent to: " + key)
log.Log.Info("main.Main(): updated unique key for agent to: " + key)
} else {
log.Log.Info("Main: something went wrong while trying to store key: " + key)
log.Log.Info("main.Main(): something went wrong while trying to store key: " + key)
}
}
@@ -125,18 +197,28 @@ func main() {
// This is used to restart the agent when the configuration is updated.
ctx, cancel := context.WithCancel(context.Background())
// We create a capture object, this will contain all the streaming clients.
// It allows us to extract media from different places in the agent.
capture := capture.Capture{
RTSPClient: nil,
RTSPSubClient: nil,
}
// Bootstrapping the agent
communication := models.Communication{
Context: &ctx,
CancelContext: &cancel,
HandleBootstrap: make(chan string, 1),
}
go components.Bootstrap(configDirectory, &configuration, &communication)
go components.Bootstrap(ctx, configDirectory, &configuration, &communication, &capture)
// Start the REST API.
routers.StartWebserver(configDirectory, &configuration, &communication)
routers.StartWebserver(configDirectory, &configuration, &communication, &capture)
}
default:
log.Log.Error("Main: Sorry I don't understand :(")
{
log.Log.Error("main.Main(): Sorry I don't understand :(")
}
}
}


@@ -1 +0,0 @@
package api


@@ -1,150 +0,0 @@
package capture
import (
"context"
"strconv"
"sync"
"time"
"github.com/kerberos-io/agent/machinery/src/log"
"github.com/kerberos-io/agent/machinery/src/models"
"github.com/kerberos-io/joy4/av/pubsub"
"github.com/kerberos-io/joy4/av"
"github.com/kerberos-io/joy4/av/avutil"
"github.com/kerberos-io/joy4/cgo/ffmpeg"
"github.com/kerberos-io/joy4/format"
)
func OpenRTSP(ctx context.Context, url string) (av.DemuxCloser, []av.CodecData, error) {
format.RegisterAll()
infile, err := avutil.Open(ctx, url)
if err == nil {
streams, errstreams := infile.Streams()
return infile, streams, errstreams
}
return nil, []av.CodecData{}, err
}
func GetVideoStream(streams []av.CodecData) (av.CodecData, error) {
var videoStream av.CodecData
for _, stream := range streams {
if stream.Type().IsAudio() {
//astream := stream.(av.AudioCodecData)
} else if stream.Type().IsVideo() {
videoStream = stream
}
}
return videoStream, nil
}
func GetVideoDecoder(decoder *ffmpeg.VideoDecoder, streams []av.CodecData) {
// Load video codec
var vstream av.VideoCodecData
for _, stream := range streams {
if stream.Type().IsAudio() {
//astream := stream.(av.AudioCodecData)
} else if stream.Type().IsVideo() {
vstream = stream.(av.VideoCodecData)
}
}
err := ffmpeg.NewVideoDecoder(decoder, vstream)
if err != nil {
log.Log.Error("GetVideoDecoder: " + err.Error())
}
}
func DecodeImage(frame *ffmpeg.VideoFrame, pkt av.Packet, decoder *ffmpeg.VideoDecoder, decoderMutex *sync.Mutex) (*ffmpeg.VideoFrame, error) {
decoderMutex.Lock()
img, err := decoder.Decode(frame, pkt.Data)
decoderMutex.Unlock()
return img, err
}
func HandleStream(infile av.DemuxCloser, queue *pubsub.Queue, communication *models.Communication) { //, wg *sync.WaitGroup) {
log.Log.Debug("HandleStream: started")
var err error
loop:
for {
// This will check if we need to stop the thread,
// because of a reconfiguration.
select {
case <-communication.HandleStream:
break loop
default:
}
var pkt av.Packet
if pkt, err = infile.ReadPacket(); err != nil { // sometimes this throws an end of file..
log.Log.Error("HandleStream: " + err.Error())
time.Sleep(1 * time.Second)
}
// Could be that a decode is throwing errors.
if len(pkt.Data) > 0 {
queue.WritePacket(pkt)
// This will check if we need to stop the thread,
// because of a reconfiguration.
select {
case <-communication.HandleStream:
break loop
default:
}
if pkt.IsKeyFrame {
// Increment packets, so we know the device
// is not blocking.
r := communication.PackageCounter.Load().(int64)
log.Log.Info("HandleStream: packet size " + strconv.Itoa(len(pkt.Data)))
communication.PackageCounter.Store((r + 1) % 1000)
communication.LastPacketTimer.Store(time.Now().Unix())
}
}
}
queue.Close()
log.Log.Debug("HandleStream: finished")
}
func HandleSubStream(infile av.DemuxCloser, queue *pubsub.Queue, communication *models.Communication) { //, wg *sync.WaitGroup) {
log.Log.Debug("HandleSubStream: started")
var err error
loop:
for {
// This will check if we need to stop the thread,
// because of a reconfiguration.
select {
case <-communication.HandleSubStream:
break loop
default:
}
var pkt av.Packet
if pkt, err = infile.ReadPacket(); err != nil { // sometimes this throws an end of file..
log.Log.Error("HandleSubStream: " + err.Error())
time.Sleep(1 * time.Second)
}
// Could be that a decode is throwing errors.
if len(pkt.Data) > 0 {
queue.WritePacket(pkt)
// This will check if we need to stop the thread,
// because of a reconfiguration.
select {
case <-communication.HandleSubStream:
break loop
default:
}
}
}
queue.Close()
log.Log.Debug("HandleSubStream: finished")
}
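The removed HandleStream and HandleSubStream loops above share one control pattern: each iteration performs a non-blocking select on a stop channel and breaks out of the read loop when a reconfiguration is signalled. A minimal sketch of that pattern (the names `readLoop`, `stop`, and the `[]int` stand-in for packets are illustrative, not from the agent codebase):

```go
package main

import "fmt"

// readLoop mimics the stop pattern used by HandleStream/HandleSubStream:
// a non-blocking select on a control channel decides, per iteration,
// whether to keep consuming packets or exit the labeled loop.
func readLoop(stop <-chan struct{}, packets []int) int {
	processed := 0
loop:
	for range packets {
		select {
		case <-stop:
			// Stop was signalled (channel closed): leave the loop.
			break loop
		default:
			// Nothing to do: fall through and process the packet.
		}
		processed++
	}
	return processed
}

func main() {
	stop := make(chan struct{})
	fmt.Println(readLoop(stop, []int{1, 2, 3})) // stop never signalled: prints 3
	close(stop)
	fmt.Println(readLoop(stop, []int{1, 2, 3})) // stop already closed: prints 0
}
```

Because `select` prefers a ready channel case over `default`, a closed stop channel wins immediately, which is why closing the channel is enough to terminate the loop on its next iteration.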


@@ -0,0 +1,72 @@
package capture
import (
"context"
"image"
"github.com/kerberos-io/agent/machinery/src/models"
"github.com/kerberos-io/agent/machinery/src/packets"
)
type Capture struct {
RTSPClient *Golibrtsp
RTSPSubClient *Golibrtsp
RTSPBackChannelClient *Golibrtsp
}
func (c *Capture) SetMainClient(rtspUrl string) *Golibrtsp {
c.RTSPClient = &Golibrtsp{
Url: rtspUrl,
}
return c.RTSPClient
}
func (c *Capture) SetSubClient(rtspUrl string) *Golibrtsp {
c.RTSPSubClient = &Golibrtsp{
Url: rtspUrl,
}
return c.RTSPSubClient
}
func (c *Capture) SetBackChannelClient(rtspUrl string) *Golibrtsp {
c.RTSPBackChannelClient = &Golibrtsp{
Url: rtspUrl,
}
return c.RTSPBackChannelClient
}
// RTSPClient is an interface that abstracts the RTSP client implementation.
type RTSPClient interface {
// Connect to the RTSP server.
Connect(ctx context.Context, otelContext context.Context) error
// Connect to a backchannel RTSP server.
ConnectBackChannel(ctx context.Context, otelContext context.Context) error
// Start the RTSP client, and start reading packets.
Start(ctx context.Context, streamType string, queue *packets.Queue, configuration *models.Configuration, communication *models.Communication) error
// Start the RTSP client, and start reading packets.
StartBackChannel(ctx context.Context, otelContext context.Context) error
// Decode a packet into a image.
DecodePacket(pkt packets.Packet) (image.YCbCr, error)
// Decode a packet into a image.
DecodePacketRaw(pkt packets.Packet) (image.Gray, error)
// Write a packet to the RTSP server.
WritePacket(pkt packets.Packet) error
// Close the connection to the RTSP server.
Close(ctx context.Context) error
// Get a list of streams from the RTSP server.
GetStreams() ([]packets.Stream, error)
// Get a list of video streams from the RTSP server.
GetVideoStreams() ([]packets.Stream, error)
// Get a list of audio streams from the RTSP server.
GetAudioStreams() ([]packets.Stream, error)
}

File diff suppressed because it is too large


@@ -3,18 +3,21 @@ package capture
import (
"context"
"encoding/base64"
"image"
"os"
"strconv"
"time"
"github.com/gin-gonic/gin"
"github.com/kerberos-io/agent/machinery/src/conditions"
"github.com/kerberos-io/agent/machinery/src/encryption"
"github.com/kerberos-io/agent/machinery/src/log"
"github.com/kerberos-io/agent/machinery/src/models"
"github.com/kerberos-io/agent/machinery/src/packets"
"github.com/kerberos-io/agent/machinery/src/utils"
"github.com/kerberos-io/joy4/av/pubsub"
"github.com/kerberos-io/joy4/format/mp4"
"github.com/kerberos-io/joy4/av"
"github.com/kerberos-io/agent/machinery/src/video"
"go.opentelemetry.io/otel/trace"
)
func CleanupRecordingDirectory(configDirectory string, configuration *models.Configuration) {
@@ -51,48 +54,76 @@ func CleanupRecordingDirectory(configDirectory string, configuration *models.Con
}
}
func HandleRecordStream(queue *pubsub.Queue, configDirectory string, configuration *models.Configuration, communication *models.Communication, streams []av.CodecData) {
func HandleRecordStream(queue *packets.Queue, configDirectory string, configuration *models.Configuration, communication *models.Communication, rtspClient RTSPClient) {
config := configuration.Config
loc, _ := time.LoadLocation(config.Timezone)
if config.Capture.Recording == "false" {
log.Log.Info("HandleRecordStream: disabled, we will not record anything.")
log.Log.Info("capture.main.HandleRecordStream(): disabled, we will not record anything.")
} else {
log.Log.Debug("HandleRecordStream: started")
log.Log.Debug("capture.main.HandleRecordStream(): started")
recordingPeriod := config.Capture.PostRecording // number of seconds to record.
maxRecordingPeriod := config.Capture.MaxLengthRecording // maximum number of seconds to record.
preRecording := config.Capture.PreRecording * 1000
postRecording := config.Capture.PostRecording * 1000 // number of seconds to record.
maxRecordingPeriod := config.Capture.MaxLengthRecording * 1000 // maximum number of seconds to record.
// Synchronise the last synced time
now := time.Now().Unix()
startRecording := now
timestamp := now
// We will calculate the maxRecordingPeriod based on the preRecording and postRecording values.
if maxRecordingPeriod == 0 {
// If maxRecordingPeriod is not set, we will use the preRecording and postRecording values
maxRecordingPeriod = preRecording + postRecording
}
if maxRecordingPeriod < preRecording+postRecording {
log.Log.Error("capture.main.HandleRecordStream(): maxRecordingPeriod is less than preRecording + postRecording, this is not allowed. Setting maxRecordingPeriod to preRecording + postRecording.")
maxRecordingPeriod = preRecording + postRecording
}
if config.FriendlyName != "" {
config.Name = config.FriendlyName
}
// Get the audio and video codec from the camera.
// We only expect one audio and one video codec.
// If there are multiple audio or video streams, we will use the first one.
audioCodec := ""
videoCodec := ""
audioStreams, _ := rtspClient.GetAudioStreams()
videoStreams, _ := rtspClient.GetVideoStreams()
if len(audioStreams) > 0 {
audioCodec = audioStreams[0].Name
config.Capture.IPCamera.SampleRate = audioStreams[0].SampleRate
config.Capture.IPCamera.Channels = audioStreams[0].Channels
}
if len(videoStreams) > 0 {
videoCodec = videoStreams[0].Name
}
// Check if continuous recording.
if config.Capture.Continuous == "true" {
// Do not do anything!
log.Log.Info("HandleRecordStream: Start continuous recording ")
loc, _ := time.LoadLocation(config.Timezone)
now = time.Now().Unix()
timestamp = now
start := false
//var cws *cacheWriterSeeker
var mp4Video *video.MP4
var videoTrack uint32
var audioTrack uint32
var name string
var myMuxer *mp4.Muxer
var file *os.File
var err error
// Do not do anything!
log.Log.Info("capture.main.HandleRecordStream(continuous): start recording")
start := false
// If continuous record the full length
recordingPeriod = maxRecordingPeriod
postRecording = maxRecordingPeriod
// Recording file name
fullName := ""
var startRecording int64 = 0 // start recording timestamp in milliseconds
// Get as much packets we need.
//for pkt := range packets {
var cursorError error
var pkt av.Packet
var nextPkt av.Packet
var pkt packets.Packet
var nextPkt packets.Packet
recordingStatus := "idle"
recordingCursor := queue.Oldest()
@@ -104,33 +135,101 @@ func HandleRecordStream(queue *pubsub.Queue, configDirectory string, configurati
nextPkt, cursorError = recordingCursor.ReadPacket()
now := time.Now().Unix()
now := time.Now().UnixMilli()
if start && // If already recording and current frame is a keyframe and we should stop recording
nextPkt.IsKeyFrame && (timestamp+recordingPeriod-now <= 0 || now-startRecording >= maxRecordingPeriod) {
nextPkt.IsKeyFrame && (startRecording+postRecording-now <= 0 || now-startRecording > maxRecordingPeriod-500) {
// Write the last packet
if err := myMuxer.WritePacket(pkt); err != nil {
log.Log.Error(err.Error())
pts := convertPTS(pkt.TimeLegacy)
if pkt.IsVideo {
// Write the last packet
if err := mp4Video.AddSampleToTrack(videoTrack, pkt.IsKeyFrame, pkt.Data, pts); err != nil {
log.Log.Error("capture.main.HandleRecordStream(continuous): " + err.Error())
}
} else if pkt.IsAudio {
// Write the last packet
if pkt.Codec == "AAC" {
if err := mp4Video.AddSampleToTrack(audioTrack, pkt.IsKeyFrame, pkt.Data, pts); err != nil {
log.Log.Error("capture.main.HandleRecordStream(continuous): " + err.Error())
}
} else if pkt.Codec == "PCM_MULAW" {
// TODO: transcode to AAC, some work to do..
log.Log.Debug("capture.main.HandleRecordStream(continuous): no AAC audio codec detected, skipping audio track.")
}
}
// This will write the trailer as well.
if err := myMuxer.WriteTrailerWithPacket(nextPkt); err != nil {
log.Log.Error(err.Error())
// Close mp4
if len(mp4Video.SPSNALUs) == 0 && len(configuration.Config.Capture.IPCamera.SPSNALUs) > 0 {
mp4Video.SPSNALUs = configuration.Config.Capture.IPCamera.SPSNALUs
}
log.Log.Info("HandleRecordStream: Recording finished: file save: " + name)
if len(mp4Video.PPSNALUs) == 0 && len(configuration.Config.Capture.IPCamera.PPSNALUs) > 0 {
mp4Video.PPSNALUs = configuration.Config.Capture.IPCamera.PPSNALUs
}
if len(mp4Video.VPSNALUs) == 0 && len(configuration.Config.Capture.IPCamera.VPSNALUs) > 0 {
mp4Video.VPSNALUs = configuration.Config.Capture.IPCamera.VPSNALUs
}
if (videoCodec == "H264" && (len(mp4Video.SPSNALUs) == 0 || len(mp4Video.PPSNALUs) == 0)) ||
(videoCodec == "H265" && (len(mp4Video.VPSNALUs) == 0 || len(mp4Video.SPSNALUs) == 0 || len(mp4Video.PPSNALUs) == 0)) {
log.Log.Warning("capture.main.HandleRecordStream(continuous): closing MP4 without full parameter sets, moov may be incomplete")
}
mp4Video.Close(&config)
log.Log.Info("capture.main.HandleRecordStream(continuous): recording finished: file saved: " + name)
// Cleanup muxer
start = false
myMuxer.Close()
myMuxer = nil
file.Close()
file = nil
// Check if need to convert to fragmented using bento
if config.Capture.Fragmented == "true" && config.Capture.FragmentedDuration > 0 {
utils.CreateFragmentedMP4(fullName, config.Capture.FragmentedDuration)
// Update the name of the recording with the duration.
// We will update the name of the recording with the duration in milliseconds.
if mp4Video.VideoTotalDuration > 0 {
duration := mp4Video.VideoTotalDuration
// Update the name with the duration in milliseconds.
startRecordingSeconds := startRecording / 1000 // convert to seconds
startRecordingMilliseconds := startRecording % 1000 // convert to milliseconds
s := strconv.FormatInt(startRecordingSeconds, 10) + "_" +
strconv.Itoa(len(strconv.FormatInt(startRecordingMilliseconds, 10))) + "-" +
strconv.FormatInt(startRecordingMilliseconds, 10) + "_" +
config.Name + "_" +
"0-0-0-0" + "_" + // region coordinates, we will not use this for continuous recording
"-1" + "_" + // token
strconv.FormatInt(int64(duration), 10) // + "_" + // duration of recording
//utils.VERSION // version of the agent
oldName := name
name = s + ".mp4"
fullName = configDirectory + "/data/recordings/" + name
log.Log.Info("capture.main.HandleRecordStream(continuous): renamed file from: " + oldName + " to: " + name)
// Rename the file to the new name.
err := os.Rename(
configDirectory+"/data/recordings/"+oldName,
configDirectory+"/data/recordings/"+s+".mp4")
if err != nil {
log.Log.Error("capture.main.HandleRecordStream(continuous): error renaming file: " + err.Error())
}
} else {
log.Log.Info("capture.main.HandleRecordStream(continuous): no video data recorded, not renaming file.")
}
// Check if we need to encrypt the recording.
if config.Encryption != nil && config.Encryption.Enabled == "true" && config.Encryption.Recordings == "true" && config.Encryption.SymmetricKey != "" {
// reopen file into memory 'fullName'
contents, err := os.ReadFile(fullName)
if err == nil {
// encrypt
encryptedContents, err := encryption.AesEncrypt(contents, config.Encryption.SymmetricKey)
if err == nil {
// write back to file
err := os.WriteFile(fullName, []byte(encryptedContents), 0644)
if err != nil {
log.Log.Error("capture.main.HandleRecordStream(continuous): error writing file: " + err.Error())
}
} else {
log.Log.Error("capture.main.HandleRecordStream(continuous): error encrypting file: " + err.Error())
}
} else {
log.Log.Error("capture.main.HandleRecordStream(continuous): error reading file: " + err.Error())
}
}
// Create a symbolic link.
@@ -146,33 +245,16 @@ func HandleRecordStream(queue *pubsub.Queue, configDirectory string, configurati
// If not yet started and a keyframe, let's make a recording
if !start && pkt.IsKeyFrame {
// Check if within time interval
nowInTimezone := time.Now().In(loc)
weekday := nowInTimezone.Weekday()
hour := nowInTimezone.Hour()
minute := nowInTimezone.Minute()
second := nowInTimezone.Second()
timeEnabled := config.Time
timeInterval := config.Timetable[int(weekday)]
if timeEnabled == "true" && timeInterval != nil {
start1 := timeInterval.Start1
end1 := timeInterval.End1
start2 := timeInterval.Start2
end2 := timeInterval.End2
currentTimeInSeconds := hour*60*60 + minute*60 + second
if (currentTimeInSeconds >= start1 && currentTimeInSeconds <= end1) ||
(currentTimeInSeconds >= start2 && currentTimeInSeconds <= end2) {
} else {
log.Log.Debug("HandleRecordStream: Disabled: no continuous recording at this moment. Not within specified time interval.")
time.Sleep(5 * time.Second)
continue
}
// We might have different conditions enabled such as time window or uri response.
// We'll validate those conditions and if not valid we'll not do anything.
valid, err := conditions.Validate(loc, configuration)
if !valid && err != nil {
log.Log.Debug("capture.main.HandleRecordStream(continuous): " + err.Error() + ".")
time.Sleep(5 * time.Second)
continue
}
start = true
timestamp = now
// timestamp_microseconds_instanceName_regionCoordinates_numberOfChanges_token
// 1564859471_6-474162_oprit_577-283-727-375_1153_27.mp4
@@ -183,55 +265,90 @@ func HandleRecordStream(queue *pubsub.Queue, configDirectory string, configurati
// - Number of changes
// - Token
startRecording = time.Now().Unix() // we mark the current time when the record started.
s := strconv.FormatInt(startRecording, 10) + "_" +
"6" + "-" +
"967003" + "_" +
config.Name + "_" +
"200-200-400-400" + "_0_" +
"769"
startRecording = pkt.CurrentTime
startRecordingSeconds := startRecording / 1000 // convert to seconds
startRecordingMilliseconds := startRecording % 1000 // convert to milliseconds
s := strconv.FormatInt(startRecordingSeconds, 10) + "_" + // start timestamp in seconds
strconv.Itoa(len(strconv.FormatInt(startRecordingMilliseconds, 10))) + "-" + // length of milliseconds
strconv.FormatInt(startRecordingMilliseconds, 10) + "_" + // milliseconds
config.Name + "_" + // device name
"0-0-0-0" + "_" + // region coordinates, we will not use this for continuous recording
"0" + "_" + // token
"0" + "_" //+ // duration of recording in milliseconds
//utils.VERSION // version of the agent
name = s + ".mp4"
fullName = configDirectory + "/data/recordings/" + name
// Running...
log.Log.Info("Recording started")
log.Log.Info("capture.main.HandleRecordStream(continuous): recording started")
file, err = os.Create(fullName)
if err == nil {
myMuxer = mp4.NewMuxer(file)
// Get width and height from the camera.
width := configuration.Config.Capture.IPCamera.Width
height := configuration.Config.Capture.IPCamera.Height
// Get SPS and PPS NALUs from the camera.
spsNALUS := configuration.Config.Capture.IPCamera.SPSNALUs
ppsNALUS := configuration.Config.Capture.IPCamera.PPSNALUs
vpsNALUS := configuration.Config.Capture.IPCamera.VPSNALUs
if len(spsNALUS) == 0 || len(ppsNALUS) == 0 {
log.Log.Warning("capture.main.HandleRecordStream(continuous): missing SPS/PPS at recording start")
}
// Create a video file, and set the dimensions.
mp4Video = video.NewMP4(fullName, spsNALUS, ppsNALUS, vpsNALUS, configuration.Config.Capture.MaxLengthRecording)
mp4Video.SetWidth(width)
mp4Video.SetHeight(height)
if videoCodec == "H264" {
videoTrack = mp4Video.AddVideoTrack("H264")
} else if videoCodec == "H265" {
videoTrack = mp4Video.AddVideoTrack("H265")
}
if audioCodec == "AAC" {
audioTrack = mp4Video.AddAudioTrack("AAC")
} else if audioCodec == "PCM_MULAW" {
log.Log.Debug("capture.main.HandleRecordStream(continuous): no AAC audio codec detected, skipping audio track.")
}
log.Log.Info("HandleRecordStream: composing recording")
log.Log.Info("HandleRecordStream: write header")
// Creating the file, might block sometimes.
if err := myMuxer.WriteHeader(streams); err != nil {
log.Log.Error(err.Error())
pts := convertPTS(pkt.TimeLegacy)
if pkt.IsVideo {
if err := mp4Video.AddSampleToTrack(videoTrack, pkt.IsKeyFrame, pkt.Data, pts); err != nil {
log.Log.Error("capture.main.HandleRecordStream(continuous): " + err.Error())
}
} else if pkt.IsAudio {
if pkt.Codec == "AAC" {
if err := mp4Video.AddSampleToTrack(audioTrack, pkt.IsKeyFrame, pkt.Data, pts); err != nil {
log.Log.Error("capture.main.HandleRecordStream(continuous): " + err.Error())
}
} else if pkt.Codec == "PCM_MULAW" {
// TODO: transcode to AAC, some work to do..
// We might need to use ffmpeg to transcode the audio to AAC.
// For now we will skip the audio track.
log.Log.Debug("capture.main.HandleRecordStream(continuous): no AAC audio codec detected, skipping audio track.")
}
}
if err := myMuxer.WritePacket(pkt); err != nil {
log.Log.Error(err.Error())
}
recordingStatus = "started"
} else if start {
if err := myMuxer.WritePacket(pkt); err != nil {
log.Log.Error(err.Error())
}
// We will sync to file every keyframe.
if pkt.IsKeyFrame {
err := file.Sync()
if err != nil {
log.Log.Error(err.Error())
} else {
log.Log.Info("HandleRecordStream: Synced file: " + name)
pts := convertPTS(pkt.TimeLegacy)
if pkt.IsVideo {
// New method using new mp4 library
if err := mp4Video.AddSampleToTrack(videoTrack, pkt.IsKeyFrame, pkt.Data, pts); err != nil {
log.Log.Error("capture.main.HandleRecordStream(continuous): " + err.Error())
}
} else if pkt.IsAudio {
if pkt.Codec == "AAC" {
if err := mp4Video.AddSampleToTrack(audioTrack, pkt.IsKeyFrame, pkt.Data, pts); err != nil {
log.Log.Error("capture.main.HandleRecordStream(continuous): " + err.Error())
}
} else if pkt.Codec == "PCM_MULAW" {
// TODO: transcode to AAC, some work to do..
log.Log.Debug("capture.main.HandleRecordStream(continuous): no AAC audio codec detected, skipping audio track.")
}
}
}
pkt = nextPkt
}
@@ -240,22 +357,63 @@ func HandleRecordStream(queue *pubsub.Queue, configDirectory string, configurati
if cursorError != nil {
if recordingStatus == "started" {
// This will write the trailer as well.
if err := myMuxer.WriteTrailer(); err != nil {
log.Log.Error(err.Error())
}
log.Log.Info("capture.main.HandleRecordStream(continuous): recording finished: file saved: " + name)
log.Log.Info("HandleRecordStream: Recording finished: file save: " + name)
// Cleanup muxer
start = false
myMuxer.Close()
myMuxer = nil
file.Close()
file = nil
// Check if need to convert to fragmented using bento
if config.Capture.Fragmented == "true" && config.Capture.FragmentedDuration > 0 {
utils.CreateFragmentedMP4(fullName, config.Capture.FragmentedDuration)
// Update the name of the recording with the duration.
// We will update the name of the recording with the duration in milliseconds.
if mp4Video.VideoTotalDuration > 0 {
duration := mp4Video.VideoTotalDuration
// Update the name with the duration in milliseconds.
startRecordingSeconds := startRecording / 1000 // convert to seconds
startRecordingMilliseconds := startRecording % 1000 // convert to milliseconds
s := strconv.FormatInt(startRecordingSeconds, 10) + "_" +
strconv.Itoa(len(strconv.FormatInt(startRecordingMilliseconds, 10))) + "-" +
strconv.FormatInt(startRecordingMilliseconds, 10) + "_" +
config.Name + "_" +
"0-0-0-0" + "_" + // region coordinates, we will not use this for continuous recording
"-1" + "_" + // token
strconv.FormatInt(int64(duration), 10) // + "_" + // duration of recording
//utils.VERSION // version of the agent
oldName := name
name = s + ".mp4"
fullName = configDirectory + "/data/recordings/" + name
log.Log.Info("capture.main.HandleRecordStream(continuous): renamed file from: " + oldName + " to: " + name)
// Rename the file to the new name.
err := os.Rename(
configDirectory+"/data/recordings/"+oldName,
configDirectory+"/data/recordings/"+s+".mp4")
if err != nil {
log.Log.Error("capture.main.HandleRecordStream(continuous): error renaming file: " + err.Error())
}
} else {
log.Log.Info("capture.main.HandleRecordStream(continuous): no video data recorded, not renaming file.")
}
// Check if we need to encrypt the recording.
if config.Encryption != nil && config.Encryption.Enabled == "true" && config.Encryption.Recordings == "true" && config.Encryption.SymmetricKey != "" {
// reopen file into memory 'fullName'
contents, err := os.ReadFile(fullName)
if err == nil {
// encrypt
encryptedContents, err := encryption.AesEncrypt(contents, config.Encryption.SymmetricKey)
if err == nil {
// write back to file
err := os.WriteFile(fullName, []byte(encryptedContents), 0644)
if err != nil {
log.Log.Error("capture.main.HandleRecordStream(continuous): error writing file: " + err.Error())
}
} else {
log.Log.Error("capture.main.HandleRecordStream(continuous): error encrypting file: " + err.Error())
}
} else {
log.Log.Error("capture.main.HandleRecordStream(continuous): error reading file: " + err.Error())
}
}
// Create a symbol link.
@@ -263,38 +421,53 @@ func HandleRecordStream(queue *pubsub.Queue, configDirectory string, configurati
fc.Close()
recordingStatus = "idle"
// Clean up the recording directory if necessary.
CleanupRecordingDirectory(configDirectory, configuration)
}
}
} else {
log.Log.Info("HandleRecordStream: Start motion based recording ")
log.Log.Info("capture.main.HandleRecordStream(motiondetection): Start motion based recording ")
var myMuxer *mp4.Muxer
var file *os.File
var err error
var lastRecordingTime int64 = 0 // last recording timestamp in milliseconds
var displayTime int64 = 0 // display time in milliseconds
var lastDuration time.Duration
var lastRecordingTime int64
var videoTrack uint32
var audioTrack uint32
for motion := range communication.HandleMotion {
timestamp = time.Now().Unix()
startRecording = time.Now().Unix() // we mark the current time when the record started.
numberOfChanges := motion.NumberOfChanges
// Get as many packets as we need.
var cursorError error
var pkt packets.Packet
var nextPkt packets.Packet
recordingCursor := queue.Oldest() // Start from the oldest packet in the queue.
// If we have prerecording we will subtract the number of seconds.
// Taking into account FPS = GOP size (keyframe interval).
if config.Capture.PreRecording > 0 {
now := time.Now().UnixMilli()
motionTimestamp := now
// Recordings might come shortly after each other.
// Therefore we do some math with the current time and the last recording time.
start := false
timeBetweenNowAndLastRecording := startRecording - lastRecordingTime
if timeBetweenNowAndLastRecording > int64(config.Capture.PreRecording) {
startRecording = startRecording - int64(config.Capture.PreRecording) + 1
} else {
startRecording = startRecording - timeBetweenNowAndLastRecording
}
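The pre-recording adjustment above shifts the start back by the PreRecording window, but never past the previous recording's end, so back-to-back recordings do not overlap. A small sketch of that arithmetic (names are illustrative):

```go
package main

import "fmt"

// adjustStart shifts startRecording back by preRecording, clamped so it
// never reaches back before the previous recording ended. All values
// are timestamps/durations in the same unit (seconds here).
func adjustStart(startRecording, lastRecordingTime, preRecording int64) int64 {
	sinceLast := startRecording - lastRecordingTime
	if sinceLast > preRecording {
		// Plenty of room: take the full pre-recording window (+1 as above).
		return startRecording - preRecording + 1
	}
	// Otherwise only go back as far as the previous recording's end.
	return startRecording - sinceLast
}

func main() {
	fmt.Println(adjustStart(1000, 900, 10)) // room for the full window: 991
	fmt.Println(adjustStart(1000, 995, 10)) // clamped to the last end: 995
}
```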
if cursorError == nil {
pkt, cursorError = recordingCursor.ReadPacket()
}
displayTime = pkt.CurrentTime
startRecording := pkt.CurrentTime
// We have more packets in the queue (which might still be older than where we closed the previous recording).
// In that case we will use the last recording time to determine the start time of the recording, otherwise
// we would have duplicate frames in the recording.
if startRecording < lastRecordingTime {
displayTime = lastRecordingTime
startRecording = lastRecordingTime
}
// If startRecording is 0, we will continue as it might be we are in a state of restarting the agent.
if startRecording == 0 {
log.Log.Info("capture.main.HandleRecordStream(motiondetection): startRecording is 0, we will continue as it might be we are in a state of restarting the agent.")
continue
}
// timestamp_microseconds_instanceName_regionCoordinates_numberOfChanges_token
@@ -306,80 +479,119 @@ func HandleRecordStream(queue *pubsub.Queue, configDirectory string, configurati
// - Number of changes
// - Token
s := strconv.FormatInt(startRecording, 10) + "_" +
"6" + "-" +
"967003" + "_" +
config.Name + "_" +
"200-200-400-400" + "_" +
strconv.Itoa(numberOfChanges) + "_" +
"769"
displayTimeSeconds := displayTime / 1000 // convert to seconds
displayTimeMilliseconds := displayTime % 1000 // convert to milliseconds
motionRectangleString := "0-0-0-0"
if motion.Rectangle.X != 0 || motion.Rectangle.Y != 0 ||
motion.Rectangle.Width != 0 || motion.Rectangle.Height != 0 {
motionRectangleString = strconv.Itoa(motion.Rectangle.X) + "-" + strconv.Itoa(motion.Rectangle.Y) + "-" +
strconv.Itoa(motion.Rectangle.Width) + "-" + strconv.Itoa(motion.Rectangle.Height)
}
// Get the number of changes from the motion detection.
numberOfChanges := motion.NumberOfChanges
s := strconv.FormatInt(displayTimeSeconds, 10) + "_" + // start timestamp in seconds
strconv.Itoa(len(strconv.FormatInt(displayTimeMilliseconds, 10))) + "-" + // length of milliseconds
strconv.FormatInt(displayTimeMilliseconds, 10) + "_" + // milliseconds
config.Name + "_" + // device name
motionRectangleString + "_" + // region coordinates, we will not use this for continuous recording
strconv.Itoa(numberOfChanges) + "_" + // number of changes
"0" // + "_" + // duration of recording in milliseconds
//utils.VERSION // version of the agent
name := s + ".mp4"
fullName := configDirectory + "/data/recordings/" + name
// Running...
log.Log.Info("HandleRecordStream: Recording started")
file, err = os.Create(fullName)
if err == nil {
myMuxer = mp4.NewMuxer(file)
}
start := false
log.Log.Info("HandleRecordStream: composing recording")
log.Log.Info("HandleRecordStream: write header")
// Creating the file, might block sometimes.
if err := myMuxer.WriteHeader(streams); err != nil {
log.Log.Error(err.Error())
}
// Get as many packets as we need.
var cursorError error
var pkt av.Packet
var nextPkt av.Packet
recordingCursor := queue.DelayedGopCount(int(config.Capture.PreRecording))
if cursorError == nil {
pkt, cursorError = recordingCursor.ReadPacket()
log.Log.Info("capture.main.HandleRecordStream(motiondetection): recording started (" + name + ")" + " at " + strconv.FormatInt(displayTimeSeconds, 10) + " unix")
// Get width and height from the camera.
width := configuration.Config.Capture.IPCamera.Width
height := configuration.Config.Capture.IPCamera.Height
// Get SPS and PPS NALUs from the camera.
spsNALUS := configuration.Config.Capture.IPCamera.SPSNALUs
ppsNALUS := configuration.Config.Capture.IPCamera.PPSNALUs
vpsNALUS := configuration.Config.Capture.IPCamera.VPSNALUs
if len(spsNALUS) == 0 || len(ppsNALUS) == 0 {
log.Log.Warning("capture.main.HandleRecordStream(motiondetection): missing SPS/PPS at recording start")
}
// Create the MP4 only once the first keyframe arrives.
var mp4Video *video.MP4
for cursorError == nil {
nextPkt, cursorError = recordingCursor.ReadPacket()
if cursorError != nil {
log.Log.Error("HandleRecordStream: " + cursorError.Error())
log.Log.Error("capture.main.HandleRecordStream(motiondetection): " + cursorError.Error())
}
now := time.Now().Unix()
now = time.Now().UnixMilli()
select {
case motion := <-communication.HandleMotion:
timestamp = now
log.Log.Info("HandleRecordStream: motion detected while recording. Expanding recording.")
numberOfChanges = motion.NumberOfChanges
log.Log.Info("Received message with recording data, detected changes to save: " + strconv.Itoa(numberOfChanges))
motionTimestamp = now
log.Log.Info("capture.main.HandleRecordStream(motiondetection): motion detected while recording. Expanding recording.")
numberOfChanges := motion.NumberOfChanges
log.Log.Info("capture.main.HandleRecordStream(motiondetection): Received message with recording data, detected changes to save: " + strconv.Itoa(numberOfChanges))
default:
}
if (timestamp+recordingPeriod-now < 0 || now-startRecording > maxRecordingPeriod) && nextPkt.IsKeyFrame {
log.Log.Info("HandleRecordStream: closing recording (timestamp: " + strconv.FormatInt(timestamp, 10) + ", recordingPeriod: " + strconv.FormatInt(recordingPeriod, 10) + ", now: " + strconv.FormatInt(now, 10) + ", startRecording: " + strconv.FormatInt(startRecording, 10) + ", maxRecordingPeriod: " + strconv.FormatInt(maxRecordingPeriod, 10))
if start && (motionTimestamp+postRecording-now < 0 || now-startRecording > maxRecordingPeriod-500) && nextPkt.IsKeyFrame {
log.Log.Info("capture.main.HandleRecordStream(motiondetection): timestamp+postRecording-now < 0 - " + strconv.FormatInt(motionTimestamp+postRecording-now, 10) + " < 0")
log.Log.Info("capture.main.HandleRecordStream(motiondetection): now-startRecording > maxRecordingPeriod-500 - " + strconv.FormatInt(now-startRecording, 10) + " > " + strconv.FormatInt(maxRecordingPeriod-500, 10))
log.Log.Info("capture.main.HandleRecordStream(motiondetection): closing recording (timestamp: " + strconv.FormatInt(motionTimestamp, 10) + ", postRecording: " + strconv.FormatInt(postRecording, 10) + ", now: " + strconv.FormatInt(now, 10) + ", startRecording: " + strconv.FormatInt(startRecording, 10) + ", maxRecordingPeriod: " + strconv.FormatInt(maxRecordingPeriod, 10))
break
}
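The close condition above combines three signals: the recording has started, the post-recording window after the last motion has expired (or the maximum length minus a 500 ms margin is reached), and the next packet is a keyframe so the file can end cleanly. A small predicate capturing that logic (all times in milliseconds, names illustrative):

```go
package main

import "fmt"

// shouldClose reports whether a motion recording may be closed: only on
// a keyframe, and only once the post-recording window after the last
// motion has passed or the max length (minus a 500 ms margin) is hit.
func shouldClose(started bool, motionTS, postRecording, now, startRecording, maxRecording int64, nextIsKeyFrame bool) bool {
	if !started || !nextIsKeyFrame {
		return false
	}
	postWindowExpired := motionTS+postRecording-now < 0
	maxLengthReached := now-startRecording > maxRecording-500
	return postWindowExpired || maxLengthReached
}

func main() {
	// Post-recording window expired, next packet is a keyframe: close.
	fmt.Println(shouldClose(true, 1000, 5000, 7000, 0, 60000, true)) // true
	// Same situation but not a keyframe: keep recording.
	fmt.Println(shouldClose(true, 1000, 5000, 7000, 0, 60000, false)) // false
}
```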
if pkt.IsKeyFrame && !start && pkt.Time >= lastDuration {
log.Log.Info("HandleRecordStream: write frames")
if pkt.IsKeyFrame && !start && pkt.CurrentTime >= startRecording {
// We start the recording if we have a keyframe and the last duration is 0 or less than the current packet time.
// It could be that we start from the beginning of the recording.
log.Log.Debug("capture.main.HandleRecordStream(motiondetection): write frames")
log.Log.Debug("capture.main.HandleRecordStream(motiondetection): recording started on keyframe")
// Align duration timers with the first keyframe.
startRecording = pkt.CurrentTime
// Create a video file, and set the dimensions.
mp4Video = video.NewMP4(fullName, spsNALUS, ppsNALUS, vpsNALUS, configuration.Config.Capture.MaxLengthRecording)
mp4Video.SetWidth(width)
mp4Video.SetHeight(height)
if videoCodec == "H264" {
videoTrack = mp4Video.AddVideoTrack("H264")
} else if videoCodec == "H265" {
videoTrack = mp4Video.AddVideoTrack("H265")
}
if audioCodec == "AAC" {
audioTrack = mp4Video.AddAudioTrack("AAC")
} else if audioCodec == "PCM_MULAW" {
log.Log.Debug("capture.main.HandleRecordStream(motiondetection): no AAC audio codec detected, skipping audio track.")
}
start = true
}
if start {
if err := myMuxer.WritePacket(pkt); err != nil {
log.Log.Error(err.Error())
}
// We will sync to file every keyframe.
if pkt.IsKeyFrame {
err := file.Sync()
if err != nil {
log.Log.Error(err.Error())
} else {
log.Log.Info("HandleRecordStream: Synced file: " + name)
pts := convertPTS(pkt.TimeLegacy)
if pkt.IsVideo {
log.Log.Debug("capture.main.HandleRecordStream(motiondetection): add video sample")
if mp4Video != nil {
if err := mp4Video.AddSampleToTrack(videoTrack, pkt.IsKeyFrame, pkt.Data, pts); err != nil {
log.Log.Error("capture.main.HandleRecordStream(motiondetection): " + err.Error())
}
}
} else if pkt.IsAudio {
log.Log.Debug("capture.main.HandleRecordStream(motiondetection): add audio sample")
if pkt.Codec == "AAC" {
if mp4Video != nil {
if err := mp4Video.AddSampleToTrack(audioTrack, pkt.IsKeyFrame, pkt.Data, pts); err != nil {
log.Log.Error("capture.main.HandleRecordStream(motiondetection): " + err.Error())
}
}
} else if pkt.Codec == "PCM_MULAW" {
// TODO: transcode to AAC, some work to do..
// We might need to use ffmpeg to transcode the audio to AAC.
// For now we will skip the audio track.
log.Log.Debug("capture.main.HandleRecordStream(motiondetection): no AAC audio codec detected, skipping audio track.")
}
}
}
@@ -387,22 +599,83 @@ func HandleRecordStream(queue *pubsub.Queue, configDirectory string, configurati
pkt = nextPkt
}
// This will write the trailer as well.
myMuxer.WriteTrailerWithPacket(nextPkt)
log.Log.Info("HandleRecordStream: file save: " + name)
// Update the last duration and last recording time.
// This is used to determine if we need to start a new recording.
lastRecordingTime = pkt.CurrentTime
lastDuration = pkt.Time
lastRecordingTime = time.Now().Unix()
if mp4Video == nil {
log.Log.Warning("capture.main.HandleRecordStream(motiondetection): recording closed without keyframe; no MP4 created")
continue
}
// Cleanup muxer
myMuxer.Close()
myMuxer = nil
file.Close()
file = nil
// This will close the recording and write the last packet.
if len(mp4Video.SPSNALUs) == 0 && len(configuration.Config.Capture.IPCamera.SPSNALUs) > 0 {
mp4Video.SPSNALUs = configuration.Config.Capture.IPCamera.SPSNALUs
}
if len(mp4Video.PPSNALUs) == 0 && len(configuration.Config.Capture.IPCamera.PPSNALUs) > 0 {
mp4Video.PPSNALUs = configuration.Config.Capture.IPCamera.PPSNALUs
}
if len(mp4Video.VPSNALUs) == 0 && len(configuration.Config.Capture.IPCamera.VPSNALUs) > 0 {
mp4Video.VPSNALUs = configuration.Config.Capture.IPCamera.VPSNALUs
}
if (videoCodec == "H264" && (len(mp4Video.SPSNALUs) == 0 || len(mp4Video.PPSNALUs) == 0)) ||
(videoCodec == "H265" && (len(mp4Video.VPSNALUs) == 0 || len(mp4Video.SPSNALUs) == 0 || len(mp4Video.PPSNALUs) == 0)) {
log.Log.Warning("capture.main.HandleRecordStream(motiondetection): closing MP4 without full parameter sets, moov may be incomplete")
}
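Before closing, the code back-fills SPS/PPS (and VPS for H265) from the camera configuration and warns when a codec's full parameter set is still missing, since an incomplete set yields an unplayable moov box. A sketch of that fallback and completeness check (NALU contents are opaque byte slices here):

```go
package main

import "fmt"

// fillMissing back-fills an empty parameter-set list from a fallback
// source, mirroring the config-based fallback above.
func fillMissing(dst, fallback [][]byte) [][]byte {
	if len(dst) == 0 && len(fallback) > 0 {
		return fallback
	}
	return dst
}

// paramSetsComplete reports whether all parameter sets required to
// write a valid moov box are present for the given codec: SPS+PPS for
// H264, VPS+SPS+PPS for H265.
func paramSetsComplete(codec string, vps, sps, pps [][]byte) bool {
	switch codec {
	case "H264":
		return len(sps) > 0 && len(pps) > 0
	case "H265":
		return len(vps) > 0 && len(sps) > 0 && len(pps) > 0
	}
	return false
}

func main() {
	sps := [][]byte{}
	configSPS := [][]byte{{0x67, 0x42}} // hypothetical SPS from config
	sps = fillMissing(sps, configSPS)
	fmt.Println(paramSetsComplete("H264", nil, sps, [][]byte{{0x68}})) // true
	fmt.Println(paramSetsComplete("H265", nil, sps, [][]byte{{0x68}})) // false: VPS missing
}
```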
mp4Video.Close(&config)
log.Log.Info("capture.main.HandleRecordStream(motiondetection): file save: " + name)
// Check if we need to convert to fragmented using bento
if config.Capture.Fragmented == "true" && config.Capture.FragmentedDuration > 0 {
utils.CreateFragmentedMP4(fullName, config.Capture.FragmentedDuration)
// Update the name of the recording with the duration.
// We will update the name of the recording with the duration in milliseconds.
if mp4Video.VideoTotalDuration > 0 {
duration := mp4Video.VideoTotalDuration
// Update the name with the duration in milliseconds.
s := strconv.FormatInt(displayTimeSeconds, 10) + "_" +
strconv.Itoa(len(strconv.FormatInt(displayTimeMilliseconds, 10))) + "-" +
strconv.FormatInt(displayTimeMilliseconds, 10) + "_" +
config.Name + "_" +
motionRectangleString + "_" +
strconv.Itoa(numberOfChanges) + "_" + // number of changes
strconv.FormatInt(int64(duration), 10) // + "_" + // duration of recording in milliseconds
//utils.VERSION // version of the agent
oldName := name
name = s + ".mp4"
fullName = configDirectory + "/data/recordings/" + name
log.Log.Info("capture.main.HandleRecordStream(motiondetection): renamed file from: " + oldName + " to: " + name)
// Rename the file to the new name.
err := os.Rename(
configDirectory+"/data/recordings/"+oldName,
configDirectory+"/data/recordings/"+s+".mp4")
if err != nil {
log.Log.Error("capture.main.HandleRecordStream(motiondetection): error renaming file: " + err.Error())
}
} else {
log.Log.Info("capture.main.HandleRecordStream(motiondetection): no video data recorded, not renaming file.")
}
// Check if we need to encrypt the recording.
if config.Encryption != nil && config.Encryption.Enabled == "true" && config.Encryption.Recordings == "true" && config.Encryption.SymmetricKey != "" {
// reopen file into memory 'fullName'
contents, err := os.ReadFile(fullName)
if err == nil {
// encrypt
encryptedContents, err := encryption.AesEncrypt(contents, config.Encryption.SymmetricKey)
if err == nil {
// write back to file
err := os.WriteFile(fullName, []byte(encryptedContents), 0644)
if err != nil {
log.Log.Error("capture.main.HandleRecordStream(motiondetection): error writing file: " + err.Error())
}
} else {
log.Log.Error("capture.main.HandleRecordStream(motiondetection): error encrypting file: " + err.Error())
}
} else {
log.Log.Error("capture.main.HandleRecordStream(motiondetection): error reading file: " + err.Error())
}
}
// Create a symbolic link.
@@ -414,7 +687,7 @@ func HandleRecordStream(queue *pubsub.Queue, configDirectory string, configurati
}
}
log.Log.Debug("HandleRecordStream: finished")
log.Log.Debug("capture.main.HandleRecordStream(): finished")
}
}
@@ -429,6 +702,10 @@ func HandleRecordStream(queue *pubsub.Queue, configDirectory string, configurati
// @Success 200 {object} models.APIResponse
func VerifyCamera(c *gin.Context) {
// Start OpenTelemetry tracing
ctxVerifyCamera, span := tracer.Start(context.Background(), "VerifyCamera", trace.WithSpanKind(trace.SpanKindServer))
defer span.End()
var cameraStreams models.CameraStreams
err := c.BindJSON(&cameraStreams)
@@ -447,30 +724,45 @@ func VerifyCamera(c *gin.Context) {
if streamType == "secondary" {
rtspUrl = cameraStreams.SubRTSP
}
_, codecs, err := OpenRTSP(ctx, rtspUrl)
// We currently support H264 and H265 encoded cameras.
// Establishing the camera connection without backchannel if no substream
rtspClient := &Golibrtsp{
Url: rtspUrl,
}
err := rtspClient.Connect(ctx, ctxVerifyCamera)
if err == nil {
// Get the streams from the rtsp client.
streams, _ := rtspClient.GetStreams()
videoIdx := -1
audioIdx := -1
for i, codec := range codecs {
if codec.Type().String() == "H264" && videoIdx < 0 {
for i, stream := range streams {
if (stream.Name == "H264" || stream.Name == "H265") && videoIdx < 0 {
videoIdx = i
} else if codec.Type().String() == "PCM_MULAW" && audioIdx < 0 {
} else if stream.Name == "PCM_MULAW" && audioIdx < 0 {
audioIdx = i
}
}
if videoIdx > -1 {
c.JSON(200, models.APIResponse{
Message: "All good, detected a H264 codec.",
Data: codecs,
})
err := rtspClient.Close(ctxVerifyCamera)
if err == nil {
if videoIdx > -1 {
c.JSON(200, models.APIResponse{
Message: "All good, detected a H264/H265 codec.",
Data: streams,
})
} else {
c.JSON(400, models.APIResponse{
Message: "Stream doesn't have a H264/H265 codec, we only support H264 and H265 so far.",
})
}
} else {
c.JSON(400, models.APIResponse{
Message: "Stream doesn't have a H264 codec, we only support H264 so far.",
Message: "Something went wrong while closing the connection " + err.Error(),
})
}
} else {
c.JSON(400, models.APIResponse{
Message: err.Error(),
@@ -482,3 +774,99 @@ func VerifyCamera(c *gin.Context) {
})
}
}
func Base64Image(captureDevice *Capture, communication *models.Communication, configuration *models.Configuration) string {
// We'll try to get a snapshot from the camera.
var queue *packets.Queue
var cursor *packets.QueueCursor
// We'll pick the right client and decoder.
rtspClient := captureDevice.RTSPSubClient
if rtspClient != nil {
queue = communication.SubQueue
cursor = queue.Latest()
} else {
rtspClient = captureDevice.RTSPClient
queue = communication.Queue
cursor = queue.Latest()
}
// We'll try to have a keyframe, if not we'll return an empty string.
var encodedImage string
// Try for 3 times in a row.
count := 0
for count < 3 {
if queue != nil && cursor != nil && rtspClient != nil {
pkt, err := cursor.ReadPacket()
if err == nil {
if !pkt.IsKeyFrame {
continue
}
var img image.YCbCr
img, err = (*rtspClient).DecodePacket(pkt)
if err == nil {
imageResized, _ := utils.ResizeImage(&img, uint(configuration.Config.Capture.IPCamera.BaseWidth), uint(configuration.Config.Capture.IPCamera.BaseHeight))
bytes, _ := utils.ImageToBytes(imageResized)
encodedImage = base64.StdEncoding.EncodeToString(bytes)
break
} else {
count++
continue
}
}
} else {
break
}
}
return encodedImage
}
func JpegImage(captureDevice *Capture, communication *models.Communication) image.YCbCr {
// We'll try to get a snapshot from the camera.
var queue *packets.Queue
var cursor *packets.QueueCursor
// We'll pick the right client and decoder.
rtspClient := captureDevice.RTSPSubClient
if rtspClient != nil {
queue = communication.SubQueue
cursor = queue.Latest()
} else {
rtspClient = captureDevice.RTSPClient
queue = communication.Queue
cursor = queue.Latest()
}
// We'll try to have a keyframe, if not we'll return an empty image.
var image image.YCbCr
// Try for 3 times in a row.
count := 0
for count < 3 {
if queue != nil && cursor != nil && rtspClient != nil {
pkt, err := cursor.ReadPacket()
if err == nil {
if !pkt.IsKeyFrame {
continue
}
image, err = (*rtspClient).DecodePacket(pkt)
if err != nil {
count++
continue
} else {
break
}
}
} else {
break
}
}
return image
}
func convertPTS(v time.Duration) uint64 {
return uint64(v.Milliseconds())
}
/*func convertPTS2(v int64) uint64 {
return uint64(v) / 100
}*/

File diff suppressed because it is too large.


@@ -1,105 +0,0 @@
package cloud
import (
"crypto/tls"
"errors"
"io/ioutil"
"net/http"
"os"
"github.com/kerberos-io/agent/machinery/src/log"
"github.com/kerberos-io/agent/machinery/src/models"
)
func UploadKerberosVault(configuration *models.Configuration, fileName string) (bool, bool, error) {
config := configuration.Config
if config.KStorage.AccessKey == "" ||
config.KStorage.SecretAccessKey == "" ||
config.KStorage.Directory == "" ||
config.KStorage.URI == "" {
err := "UploadKerberosVault: Kerberos Vault not properly configured."
log.Log.Info(err)
return false, false, errors.New(err)
}
// timestamp_microseconds_instanceName_regionCoordinates_numberOfChanges_token
// 1564859471_6-474162_oprit_577-283-727-375_1153_27.mp4
// - Timestamp
// - Size + - + microseconds
// - device
// - Region
// - Number of changes
// - Token
// KerberosCloud, this means storage is disabled and proxy enabled.
log.Log.Info("UploadKerberosVault: Uploading to Kerberos Vault (" + config.KStorage.URI + ")")
log.Log.Info("UploadKerberosVault: Upload started for " + fileName)
fullname := "data/recordings/" + fileName
file, err := os.OpenFile(fullname, os.O_RDWR, 0755)
if file != nil {
defer file.Close()
}
if err != nil {
err := "UploadKerberosVault: Upload Failed, file doesn't exist anymore."
log.Log.Info(err)
return false, false, errors.New(err)
}
publicKey := config.KStorage.CloudKey
// This is the new way ;)
if config.HubKey != "" {
publicKey = config.HubKey
}
req, err := http.NewRequest("POST", config.KStorage.URI+"/storage", file)
if err != nil {
errorMessage := "UploadKerberosVault: error reading request, " + config.KStorage.URI + "/storage: " + err.Error()
log.Log.Error(errorMessage)
return false, true, errors.New(errorMessage)
}
req.Header.Set("Content-Type", "video/mp4")
req.Header.Set("X-Kerberos-Storage-CloudKey", publicKey)
req.Header.Set("X-Kerberos-Storage-AccessKey", config.KStorage.AccessKey)
req.Header.Set("X-Kerberos-Storage-SecretAccessKey", config.KStorage.SecretAccessKey)
req.Header.Set("X-Kerberos-Storage-Provider", config.KStorage.Provider)
req.Header.Set("X-Kerberos-Storage-FileName", fileName)
req.Header.Set("X-Kerberos-Storage-Device", config.Key)
req.Header.Set("X-Kerberos-Storage-Capture", "IPCamera")
req.Header.Set("X-Kerberos-Storage-Directory", config.KStorage.Directory)
var client *http.Client
if os.Getenv("AGENT_TLS_INSECURE") == "true" {
tr := &http.Transport{
TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
}
client = &http.Client{Transport: tr}
} else {
client = &http.Client{}
}
resp, err := client.Do(req)
if resp != nil {
defer resp.Body.Close()
}
if err == nil {
if resp != nil {
body, err := ioutil.ReadAll(resp.Body)
if err == nil {
if resp.StatusCode == 200 {
log.Log.Info("UploadKerberosVault: Upload Finished, " + resp.Status + ", " + string(body))
return true, true, nil
} else {
log.Log.Info("UploadKerberosVault: Upload Failed, " + resp.Status + ", " + string(body))
return false, true, nil
}
}
}
}
errorMessage := "UploadKerberosVault: Upload Failed, " + err.Error()
log.Log.Info(errorMessage)
return false, true, errors.New(errorMessage)
}


@@ -0,0 +1,194 @@
package cloud
import (
"crypto/tls"
"errors"
"io"
"net/http"
"os"
"time"
"github.com/kerberos-io/agent/machinery/src/log"
"github.com/kerberos-io/agent/machinery/src/models"
)
// We will count the number of retries we have done.
// If we have done more than "kstorageRetryPolicy" retries, we will stop, and start sending to the secondary storage.
var kstorageRetryCount = 0
var kstorageRetryTimeout = time.Now().Unix()
func UploadKerberosVault(configuration *models.Configuration, fileName string) (bool, bool, error) {
config := configuration.Config
if config.KStorage.AccessKey == "" ||
config.KStorage.SecretAccessKey == "" ||
config.KStorage.Directory == "" ||
config.KStorage.URI == "" {
err := "UploadKerberosVault: Kerberos Vault not properly configured"
log.Log.Info(err)
return false, false, errors.New(err)
}
// timestamp_microseconds_instanceName_regionCoordinates_numberOfChanges_token
// 1564859471_6-474162_oprit_577-283-727-375_1153_27.mp4
// - Timestamp
// - Size + - + microseconds
// - device
// - Region
// - Number of changes
// - Token
// KerberosCloud, this means storage is disabled and proxy enabled.
log.Log.Info("UploadKerberosVault: Uploading to Kerberos Vault (" + config.KStorage.URI + ")")
log.Log.Info("UploadKerberosVault: Upload started for " + fileName)
fullname := "data/recordings/" + fileName
file, err := os.OpenFile(fullname, os.O_RDWR, 0755)
if file != nil {
defer file.Close()
}
if err != nil {
err := "UploadKerberosVault: Upload Failed, file doesn't exist anymore"
log.Log.Info(err)
return false, false, errors.New(err)
}
publicKey := config.KStorage.CloudKey
if config.HubKey != "" {
publicKey = config.HubKey
}
// We need to check if we are in a retry timeout.
if kstorageRetryTimeout <= time.Now().Unix() {
req, err := http.NewRequest("POST", config.KStorage.URI+"/storage", file)
if err != nil {
errorMessage := "UploadKerberosVault: error reading request, " + config.KStorage.URI + "/storage: " + err.Error()
log.Log.Error(errorMessage)
return false, true, errors.New(errorMessage)
}
req.Header.Set("Content-Type", "video/mp4")
req.Header.Set("X-Kerberos-Storage-CloudKey", publicKey)
req.Header.Set("X-Kerberos-Storage-AccessKey", config.KStorage.AccessKey)
req.Header.Set("X-Kerberos-Storage-SecretAccessKey", config.KStorage.SecretAccessKey)
req.Header.Set("X-Kerberos-Storage-Provider", config.KStorage.Provider)
req.Header.Set("X-Kerberos-Storage-FileName", fileName)
req.Header.Set("X-Kerberos-Storage-Device", config.Key)
req.Header.Set("X-Kerberos-Storage-Capture", "IPCamera")
req.Header.Set("X-Kerberos-Storage-Directory", config.KStorage.Directory)
var client *http.Client
if os.Getenv("AGENT_TLS_INSECURE") == "true" {
tr := &http.Transport{
TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
}
client = &http.Client{Transport: tr}
} else {
client = &http.Client{}
}
resp, err := client.Do(req)
if resp != nil {
defer resp.Body.Close()
}
if err == nil {
if resp != nil {
body, err := io.ReadAll(resp.Body)
if err == nil {
if resp.StatusCode == 200 {
kstorageRetryCount = 0
log.Log.Info("UploadKerberosVault: Upload Finished, " + resp.Status + ", " + string(body))
return true, true, nil
} else {
// Increase the retry count; once we reach the retry policy's maximum,
// set the timeout so we will not retry the primary storage until the
// configured timeout has elapsed.
if kstorageRetryCount < config.KStorage.MaxRetries {
kstorageRetryCount = (kstorageRetryCount + 1)
}
if kstorageRetryCount == config.KStorage.MaxRetries {
kstorageRetryTimeout = time.Now().Add(time.Duration(config.KStorage.Timeout) * time.Second).Unix()
}
log.Log.Info("UploadKerberosVault: Upload Failed, " + resp.Status + ", " + string(body))
}
}
}
} else {
log.Log.Info("UploadKerberosVault: Upload Failed, " + err.Error())
}
}
// We might need to check if we can upload to our secondary storage.
if config.KStorageSecondary.AccessKey == "" ||
config.KStorageSecondary.SecretAccessKey == "" ||
config.KStorageSecondary.Directory == "" ||
config.KStorageSecondary.URI == "" {
log.Log.Info("UploadKerberosVault (Secondary): Secondary Kerberos Vault not properly configured.")
} else {
if kstorageRetryCount < config.KStorage.MaxRetries {
log.Log.Info("UploadKerberosVault (Secondary): Do not upload to secondary storage, we are still in retry policy.")
return false, true, nil
}
log.Log.Info("UploadKerberosVault (Secondary): Uploading to Secondary Kerberos Vault (" + config.KStorageSecondary.URI + ")")
file, err = os.OpenFile(fullname, os.O_RDWR, 0755)
if file != nil {
defer file.Close()
}
if err != nil {
err := "UploadKerberosVault (Secondary): Upload Failed, file doesn't exist anymore"
log.Log.Info(err)
return false, false, errors.New(err)
}
req, err := http.NewRequest("POST", config.KStorageSecondary.URI+"/storage", file)
if err != nil {
errorMessage := "UploadKerberosVault (Secondary): error reading request, " + config.KStorageSecondary.URI + "/storage: " + err.Error()
log.Log.Error(errorMessage)
return false, true, errors.New(errorMessage)
}
req.Header.Set("Content-Type", "video/mp4")
req.Header.Set("X-Kerberos-Storage-CloudKey", publicKey)
req.Header.Set("X-Kerberos-Storage-AccessKey", config.KStorageSecondary.AccessKey)
req.Header.Set("X-Kerberos-Storage-SecretAccessKey", config.KStorageSecondary.SecretAccessKey)
req.Header.Set("X-Kerberos-Storage-Provider", config.KStorageSecondary.Provider)
req.Header.Set("X-Kerberos-Storage-FileName", fileName)
req.Header.Set("X-Kerberos-Storage-Device", config.Key)
req.Header.Set("X-Kerberos-Storage-Capture", "IPCamera")
req.Header.Set("X-Kerberos-Storage-Directory", config.KStorageSecondary.Directory)
var client *http.Client
if os.Getenv("AGENT_TLS_INSECURE") == "true" {
tr := &http.Transport{
TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
}
client = &http.Client{Transport: tr}
} else {
client = &http.Client{}
}
resp, err := client.Do(req)
if resp != nil {
defer resp.Body.Close()
}
if err == nil {
if resp != nil {
body, err := io.ReadAll(resp.Body)
if err == nil {
if resp.StatusCode == 200 {
log.Log.Info("UploadKerberosVault (Secondary): Upload Finished to secondary, " + resp.Status + ", " + string(body))
return true, true, nil
} else {
log.Log.Info("UploadKerberosVault (Secondary): Upload Failed to secondary, " + resp.Status + ", " + string(body))
}
}
}
}
}
return false, true, nil
}

File diff suppressed because it is too large.


@@ -1,25 +0,0 @@
package components
import (
"time"
"github.com/cedricve/go-onvif"
"github.com/kerberos-io/agent/machinery/src/log"
)
func Discover(timeout time.Duration) {
log.Log.Info("Discovering devices")
log.Log.Info("Waiting for " + (timeout * time.Second).String())
devices, err := onvif.StartDiscovery(timeout * time.Second)
if err != nil {
log.Log.Error(err.Error())
} else {
for _, device := range devices {
hostname, _ := device.GetHostname()
log.Log.Info(hostname.Name)
}
if len(devices) == 0 {
log.Log.Info("No devices discovered\n")
}
}
}


@@ -1,93 +0,0 @@
package components
import (
"fmt"
"image"
"image/jpeg"
"log"
"time"
"github.com/deepch/vdk/av"
"github.com/deepch/vdk/codec/h264parser"
"github.com/deepch/vdk/format/rtsp"
"github.com/nsmith5/mjpeg"
)
type Stream struct {
Name string
Url string
Debug bool
Codecs string
}
func CreateStream(name string, url string) *Stream {
return &Stream{
Name: name,
Url: url,
}
}
func (s Stream) Open() *rtsp.Client {
// Enable debugging
if s.Debug {
rtsp.DebugRtsp = true
}
fmt.Println("Dialing in to " + s.Url)
session, err := rtsp.Dial(s.Url)
if err != nil {
log.Println("Something went wrong dialing into stream: ", err)
time.Sleep(5 * time.Second)
}
session.RtpKeepAliveTimeout = 10 * time.Second
return session
}
func (s Stream) Close(session *rtsp.Client) {
fmt.Println("Closing RTSP session.")
err := session.Close()
if err != nil {
log.Println("Something went wrong while closing your RTSP session: ", err)
}
}
func (s Stream) GetCodecs() []av.CodecData {
session := s.Open()
codec, err := session.Streams()
log.Println("Reading codecs from stream: ", codec)
if err != nil {
log.Println("Something went wrong while reading codecs from stream: ", err)
time.Sleep(5 * time.Second)
}
s.Close(session)
return codec
}
func (s Stream) ReadPackets(packetChannel chan av.Packet) {
session := s.Open()
for {
packet, err := session.ReadPacket()
if err != nil {
break
}
if len(packetChannel) < cap(packetChannel) {
packetChannel <- packet
}
}
s.Close(session)
}
func GetSPSFromCodec(codecs []av.CodecData) ([]byte, []byte) {
sps := codecs[0].(h264parser.CodecData).SPS()
pps := codecs[0].(h264parser.CodecData).PPS()
return sps, pps
}
func StartMotionJPEG(imageFunction func() (image.Image, error), quality int) mjpeg.Handler {
stream := mjpeg.Handler{
Next: imageFunction,
Options: &jpeg.Options{Quality: quality},
}
return stream
}

View File

@@ -0,0 +1,95 @@
package components
import (
"bufio"
"fmt"
"os"
"time"
"github.com/kerberos-io/agent/machinery/src/capture"
"github.com/kerberos-io/agent/machinery/src/log"
"github.com/kerberos-io/agent/machinery/src/models"
"github.com/kerberos-io/agent/machinery/src/packets"
"github.com/kerberos-io/joy4/av"
"github.com/pion/rtp"
"github.com/zaf/g711"
)
func GetBackChannelAudioCodec(streams []av.CodecData, communication *models.Communication) av.AudioCodecData {
for _, stream := range streams {
if stream.Type().IsAudio() {
if stream.Type().String() == "PCM_MULAW" {
pcmuCodec := stream.(av.AudioCodecData)
if pcmuCodec.IsBackChannel() {
communication.HasBackChannel = true
return pcmuCodec
}
}
}
}
return nil
}
func WriteAudioToBackchannel(communication *models.Communication, rtspClient capture.RTSPClient) {
log.Log.Info("Audio.WriteAudioToBackchannel(): writing to backchannel audio codec")
length := uint32(0)
sequenceNumber := uint16(0)
for audio := range communication.HandleAudio {
// Encode PCM to MULAW
var bufferUlaw []byte
for _, v := range audio.Data {
b := g711.EncodeUlawFrame(v)
bufferUlaw = append(bufferUlaw, b)
}
pkt := packets.Packet{
Packet: &rtp.Packet{
Header: rtp.Header{
Version: 2,
Marker: true, // should be true
PayloadType: 0, //packet.PayloadType, // will be overwritten
SequenceNumber: sequenceNumber,
Timestamp: uint32(length),
SSRC: 1293847657,
},
Payload: bufferUlaw,
},
}
err := rtspClient.WritePacket(pkt)
if err != nil {
log.Log.Error("Audio.WriteAudioToBackchannel(): error writing packet to backchannel")
}
length = (length + uint32(len(bufferUlaw))) % 65536
sequenceNumber = sequenceNumber + 1 // uint16 wraps naturally at 65536
time.Sleep(128 * time.Millisecond)
}
log.Log.Info("Audio.WriteAudioToBackchannel(): finished")
}
func WriteFileToBackChannel(infile av.DemuxCloser) {
// Do the warmup!
file, err := os.Open("./audiofile.bye")
if err != nil {
fmt.Println("WriteFileToBackChannel: error opening audiofile.bye file")
}
defer file.Close()
// Read file into buffer
reader := bufio.NewReader(file)
buffer := make([]byte, 1024)
count := 0
for {
_, err := reader.Read(buffer)
if err != nil {
break
}
// Send to backchannel
infile.Write(buffer, 2, uint32(count))
count = count + 1024
time.Sleep(128 * time.Millisecond)
}
}
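The backchannel writer above advances an RTP sequence number and timestamp for every packet it sends. The wraparound arithmetic can be sketched in isolation — a minimal stdlib-only example with hypothetical helper names (the real code sets `uint16`/`uint32` fields on an `rtp.Header`):

```go
package main

import "fmt"

// nextSeq advances an RTP sequence number; uint16 arithmetic wraps
// at 65536 automatically, so no explicit modulo is needed.
func nextSeq(seq uint16) uint16 {
	return seq + 1
}

// nextTimestamp advances the RTP timestamp by the payload length,
// mirroring how the backchannel writer grows it per packet.
func nextTimestamp(ts uint32, payloadLen int) uint32 {
	return ts + uint32(payloadLen)
}

func main() {
	fmt.Println(nextSeq(65535))          // wraps to 0
	fmt.Println(nextTimestamp(100, 160)) // 260
}
```

Relying on the natural overflow of `uint16` avoids the off-by-one that a `% 65535` would introduce (it would skip the value 65535 entirely).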

View File

@@ -1,31 +1,27 @@
package computervision
import (
"bufio"
"bytes"
"encoding/base64"
"image"
"image/jpeg"
"sync"
"time"
mqtt "github.com/eclipse/paho.mqtt.golang"
geo "github.com/kellydunn/golang-geo"
"github.com/kerberos-io/agent/machinery/src/capture"
"github.com/kerberos-io/agent/machinery/src/conditions"
"github.com/kerberos-io/agent/machinery/src/log"
"github.com/kerberos-io/agent/machinery/src/models"
"github.com/kerberos-io/joy4/av"
"github.com/kerberos-io/joy4/av/pubsub"
"github.com/kerberos-io/joy4/cgo/ffmpeg"
"github.com/kerberos-io/agent/machinery/src/packets"
)
func ProcessMotion(motionCursor *pubsub.QueueCursor, configuration *models.Configuration, communication *models.Communication, mqttClient mqtt.Client, decoder *ffmpeg.VideoDecoder, decoderMutex *sync.Mutex) { //, wg *sync.WaitGroup) {
func ProcessMotion(motionCursor *packets.QueueCursor, configuration *models.Configuration, communication *models.Communication, mqttClient mqtt.Client, rtspClient capture.RTSPClient) {
log.Log.Debug("ProcessMotion: started")
log.Log.Debug("computervision.main.ProcessMotion(): start motion detection")
config := configuration.Config
loc, _ := time.LoadLocation(config.Timezone)
var isPixelChangeThresholdReached = false
var changesToReturn = 0
var motionRectangle models.MotionRectangle
pixelThreshold := config.Capture.PixelChangeThreshold
// Might not be set in the config file, so set it to 150
@@ -35,33 +31,30 @@ func ProcessMotion(motionCursor *pubsub.QueueCursor, configuration *models.Confi
if config.Capture.Continuous == "true" {
log.Log.Info("ProcessMotion: Continuous recording, so no motion detection.")
log.Log.Info("computervision.main.ProcessMotion(): you've enabled continuous recording, so no motion detection required.")
} else {
log.Log.Info("ProcessMotion: Motion detection enabled.")
log.Log.Info("computervision.main.ProcessMotion(): motion detected is enabled, so starting the motion detection.")
hubKey := config.HubKey
deviceKey := config.Key
// Allocate a VideoFrame
frame := ffmpeg.AllocVideoFrame()
// Initialise first 2 elements
var imageArray [3]*image.Gray
j := 0
var cursorError error
var pkt av.Packet
var pkt packets.Packet
for cursorError == nil {
pkt, cursorError = motionCursor.ReadPacket()
// Check if valid packet.
if len(pkt.Data) > 0 && pkt.IsKeyFrame {
grayImage, err := GetGrayImage(frame, pkt, decoder, decoderMutex)
grayImage, err := rtspClient.DecodePacketRaw(pkt)
if err == nil {
imageArray[j] = grayImage
imageArray[j] = &grayImage
j++
}
}
@@ -70,34 +63,51 @@ func ProcessMotion(motionCursor *pubsub.QueueCursor, configuration *models.Confi
}
}
img := imageArray[0]
if img != nil {
// Calculate mask
var polyObjects []geo.Polygon
if config.Region != nil {
for _, polygon := range config.Region.Polygon {
coords := polygon.Coordinates
poly := geo.Polygon{}
for _, c := range coords {
x := c.X
y := c.Y
p := geo.NewPoint(x, y)
if !poly.Contains(p) {
poly.Add(p)
}
}
polyObjects = append(polyObjects, poly)
}
// A user might have set the base width and height for the IPCamera.
// This means the polygon coordinates are also set to a specific width and height (which might differ from the actual packets
// received from the IPCamera). So we will resize the polygon coordinates to the base width and height.
baseWidthRatio := 1.0
baseHeightRatio := 1.0
baseWidth := config.Capture.IPCamera.BaseWidth
baseHeight := config.Capture.IPCamera.BaseHeight
if baseWidth > 0 && baseHeight > 0 {
// We'll get the first image to calculate the ratio
img := imageArray[0]
if img != nil {
bounds := img.Bounds()
rows := bounds.Dy()
cols := bounds.Dx()
baseWidthRatio = float64(cols) / float64(baseWidth)
baseHeightRatio = float64(rows) / float64(baseHeight)
}
}
// Calculate mask
var polyObjects []geo.Polygon
if config.Region != nil {
for _, polygon := range config.Region.Polygon {
coords := polygon.Coordinates
poly := geo.Polygon{}
for _, c := range coords {
x := c.X * baseWidthRatio
y := c.Y * baseHeightRatio
p := geo.NewPoint(x, y)
if !poly.Contains(p) {
poly.Add(p)
}
}
polyObjects = append(polyObjects, poly)
}
}
img := imageArray[0]
var coordinatesToCheck []int
if img != nil {
bounds := img.Bounds()
rows := bounds.Dy()
cols := bounds.Dx()
// Collect the coordinates of pixels that fall inside a region polygon
var coordinatesToCheck []int
for y := 0; y < rows; y++ {
for x := 0; x < cols; x++ {
for _, poly := range polyObjects {
@@ -108,10 +118,13 @@ func ProcessMotion(motionCursor *pubsub.QueueCursor, configuration *models.Confi
}
}
}
}
// If no region is set, we'll skip the motion detection
if len(coordinatesToCheck) > 0 {
// Start the motion detection
i := 0
loc, _ := time.LoadLocation(config.Timezone)
for cursorError == nil {
pkt, cursorError = motionCursor.ReadPacket()
@@ -121,67 +134,59 @@ func ProcessMotion(motionCursor *pubsub.QueueCursor, configuration *models.Confi
continue
}
grayImage, err := GetGrayImage(frame, pkt, decoder, decoderMutex)
grayImage, err := rtspClient.DecodePacketRaw(pkt)
if err == nil {
imageArray[2] = grayImage
imageArray[2] = &grayImage
}
// Store snapshots (jpg) for hull.
if config.Capture.Snapshots != "false" {
StoreSnapshot(communication, frame, pkt, decoder, decoderMutex)
}
// Check if within time interval
detectMotion := true
timeEnabled := config.Time
if timeEnabled != "false" {
now := time.Now().In(loc)
weekday := now.Weekday()
hour := now.Hour()
minute := now.Minute()
second := now.Second()
if config.Timetable != nil && len(config.Timetable) > 0 {
timeInterval := config.Timetable[int(weekday)]
if timeInterval != nil {
start1 := timeInterval.Start1
end1 := timeInterval.End1
start2 := timeInterval.Start2
end2 := timeInterval.End2
currentTimeInSeconds := hour*60*60 + minute*60 + second
if (currentTimeInSeconds >= start1 && currentTimeInSeconds <= end1) ||
(currentTimeInSeconds >= start2 && currentTimeInSeconds <= end2) {
} else {
detectMotion = false
log.Log.Info("ProcessMotion: Time interval not valid, disabling motion detection.")
}
}
}
// We might have different conditions enabled such as time window or uri response.
// We'll validate those conditions and if not valid we'll not do anything.
detectMotion, err := conditions.Validate(loc, configuration)
if !detectMotion && err != nil {
log.Log.Debug("computervision.main.ProcessMotion(): " + err.Error() + ".")
}
if config.Capture.Motion != "false" {
// Remember additional information about the result of findmotion
isPixelChangeThresholdReached, changesToReturn = FindMotion(imageArray, coordinatesToCheck, pixelThreshold)
if detectMotion && isPixelChangeThresholdReached {
if detectMotion {
// If offline mode is disabled, send a message to the hub
if config.Offline != "true" {
if mqttClient != nil {
if hubKey != "" {
mqttClient.Publish("kerberos/"+hubKey+"/device/"+deviceKey+"/motion", 2, false, "motion")
} else {
mqttClient.Publish("kerberos/device/"+deviceKey+"/motion", 2, false, "motion")
// Remember additional information about the result of findmotion
isPixelChangeThresholdReached, changesToReturn, motionRectangle = FindMotion(imageArray, coordinatesToCheck, pixelThreshold)
if isPixelChangeThresholdReached {
// If offline mode is disabled, send a message to the hub
if config.Offline != "true" {
if mqttClient != nil {
if hubKey != "" {
message := models.Message{
Payload: models.Payload{
Action: "motion",
DeviceId: configuration.Config.Key,
Value: map[string]interface{}{
"timestamp": time.Now().Unix(),
},
},
}
payload, err := models.PackageMQTTMessage(configuration, message)
if err == nil {
mqttClient.Publish("kerberos/hub/"+hubKey, 2, false, payload)
} else {
log.Log.Info("computervision.main.ProcessMotion(): failed to package MQTT message: " + err.Error())
}
} else {
mqttClient.Publish("kerberos/agent/"+deviceKey, 2, false, "motion")
}
}
}
}
if config.Capture.Recording != "false" {
dataToPass := models.MotionDataPartial{
Timestamp: time.Now().Unix(),
NumberOfChanges: changesToReturn,
if config.Capture.Recording != "false" {
dataToPass := models.MotionDataPartial{
Timestamp: time.Now().Unix(),
NumberOfChanges: changesToReturn,
Rectangle: motionRectangle,
}
communication.HandleMotion <- dataToPass //Save data to the channel
}
communication.HandleMotion <- dataToPass //Save data to the channel
}
}
@@ -195,67 +200,63 @@ func ProcessMotion(motionCursor *pubsub.QueueCursor, configuration *models.Confi
img = nil
}
}
frame.Free()
}
log.Log.Debug("ProcessMotion: finished")
log.Log.Debug("computervision.main.ProcessMotion(): stop the motion detection.")
}
func FindMotion(imageArray [3]*image.Gray, coordinatesToCheck []int, pixelChangeThreshold int) (thresholdReached bool, changesDetected int) {
func FindMotion(imageArray [3]*image.Gray, coordinatesToCheck []int, pixelChangeThreshold int) (thresholdReached bool, changesDetected int, motionRectangle models.MotionRectangle) {
image1 := imageArray[0]
image2 := imageArray[1]
image3 := imageArray[2]
threshold := 60
changes := AbsDiffBitwiseAndThreshold(image1, image2, image3, threshold, coordinatesToCheck)
return changes > pixelChangeThreshold, changes
changes, motionRectangle := AbsDiffBitwiseAndThreshold(image1, image2, image3, threshold, coordinatesToCheck)
return changes > pixelChangeThreshold, changes, motionRectangle
}
func GetGrayImage(frame *ffmpeg.VideoFrame, pkt av.Packet, dec *ffmpeg.VideoDecoder, decoderMutex *sync.Mutex) (*image.Gray, error) {
_, err := capture.DecodeImage(frame, pkt, dec, decoderMutex)
// Do a deep copy of the image
imgDeepCopy := image.NewGray(frame.ImageGray.Bounds())
imgDeepCopy.Stride = frame.ImageGray.Stride
copy(imgDeepCopy.Pix, frame.ImageGray.Pix)
return imgDeepCopy, err
}
func GetRawImage(frame *ffmpeg.VideoFrame, pkt av.Packet, dec *ffmpeg.VideoDecoder, decoderMutex *sync.Mutex) (*ffmpeg.VideoFrame, error) {
_, err := capture.DecodeImage(frame, pkt, dec, decoderMutex)
return frame, err
}
func ImageToBytes(img image.Image) ([]byte, error) {
buffer := new(bytes.Buffer)
w := bufio.NewWriter(buffer)
err := jpeg.Encode(w, img, &jpeg.Options{Quality: 15})
return buffer.Bytes(), err
}
func AbsDiffBitwiseAndThreshold(img1 *image.Gray, img2 *image.Gray, img3 *image.Gray, threshold int, coordinatesToCheck []int) int {
func AbsDiffBitwiseAndThreshold(img1 *image.Gray, img2 *image.Gray, img3 *image.Gray, threshold int, coordinatesToCheck []int) (int, models.MotionRectangle) {
changes := 0
var pixelList [][]int
for i := 0; i < len(coordinatesToCheck); i++ {
pixel := coordinatesToCheck[i]
diff := int(img3.Pix[pixel]) - int(img1.Pix[pixel])
diff2 := int(img3.Pix[pixel]) - int(img2.Pix[pixel])
if (diff > threshold || diff < -threshold) && (diff2 > threshold || diff2 < -threshold) {
changes++
// Store the pixel coordinates where the change is detected
pixelList = append(pixelList, []int{pixel % img1.Bounds().Dx(), pixel / img1.Bounds().Dx()})
}
}
return changes
}
func StoreSnapshot(communication *models.Communication, frame *ffmpeg.VideoFrame, pkt av.Packet, decoder *ffmpeg.VideoDecoder, decoderMutex *sync.Mutex) {
rgbImage, err := GetRawImage(frame, pkt, decoder, decoderMutex)
if err == nil {
buffer := new(bytes.Buffer)
w := bufio.NewWriter(buffer)
err := jpeg.Encode(w, &rgbImage.Image, &jpeg.Options{Quality: 15})
if err == nil {
snapshot := base64.StdEncoding.EncodeToString(buffer.Bytes())
communication.Image = snapshot
// Calculate rectangle of pixelList (startX, startY, endX, endY)
var motionRectangle models.MotionRectangle
if len(pixelList) > 0 {
startX := pixelList[0][0]
startY := pixelList[0][1]
endX := startX
endY := startY
for _, pixel := range pixelList {
if pixel[0] < startX {
startX = pixel[0]
}
if pixel[1] < startY {
startY = pixel[1]
}
if pixel[0] > endX {
endX = pixel[0]
}
if pixel[1] > endY {
endY = pixel[1]
}
}
log.Log.Debugf("Rectangle of changes detected: startX: %d, startY: %d, endX: %d, endY: %d", startX, startY, endX, endY)
motionRectangle = models.MotionRectangle{
X: startX,
Y: startY,
Width: endX - startX,
Height: endY - startY,
}
log.Log.Debugf("Motion rectangle: %+v", motionRectangle)
}
return changes, motionRectangle
}
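`AbsDiffBitwiseAndThreshold` above ends by shrink-wrapping the changed pixels into a `models.MotionRectangle`. That bounding-box step can be isolated as a stdlib-only sketch (the `rect` type and `boundingRect` name are illustrative, not the agent's API):

```go
package main

import "fmt"

// rect mirrors the shape of models.MotionRectangle: origin plus extent.
type rect struct {
	X, Y, Width, Height int
}

// boundingRect returns the smallest rectangle covering all changed
// pixels, as the motion detector derives it from its pixelList.
func boundingRect(pixels [][2]int) rect {
	if len(pixels) == 0 {
		return rect{}
	}
	startX, startY := pixels[0][0], pixels[0][1]
	endX, endY := startX, startY
	for _, p := range pixels {
		if p[0] < startX {
			startX = p[0]
		}
		if p[1] < startY {
			startY = p[1]
		}
		if p[0] > endX {
			endX = p[0]
		}
		if p[1] > endY {
			endY = p[1]
		}
	}
	return rect{X: startX, Y: startY, Width: endX - startX, Height: endY - startY}
}

func main() {
	fmt.Println(boundingRect([][2]int{{3, 4}, {1, 2}, {5, 0}}))
}
```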

View File

@@ -0,0 +1,28 @@
package conditions
import (
"errors"
"time"
"github.com/kerberos-io/agent/machinery/src/models"
)
func Validate(loc *time.Location, configuration *models.Configuration) (valid bool, err error) {
valid = true
err = nil
withinTimeInterval := IsWithinTimeInterval(loc, configuration)
if !withinTimeInterval {
valid = false
err = errors.New("time interval not valid")
return
}
validUriResponse := IsValidUriResponse(configuration)
if !validUriResponse {
valid = false
err = errors.New("uri response not valid")
return
}
return
}
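`conditions.Validate` above short-circuits on the first failing check. The same pattern generalises to any ordered list of conditions — a stdlib-only sketch with illustrative names (the agent hard-codes its two checks rather than using a slice like this):

```go
package main

import (
	"errors"
	"fmt"
)

// condition pairs a name with a predicate; purely illustrative.
type condition struct {
	name  string
	check func() bool
}

// validateAll runs each condition in order and stops at the first
// failure, returning its error — the same short-circuit shape as
// conditions.Validate.
func validateAll(conds []condition) (bool, error) {
	for _, c := range conds {
		if !c.check() {
			return false, errors.New(c.name + " not valid")
		}
	}
	return true, nil
}

func main() {
	conds := []condition{
		{"time interval", func() bool { return true }},
		{"uri response", func() bool { return false }},
	}
	ok, err := validateAll(conds)
	fmt.Println(ok, err)
}
```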

View File

@@ -0,0 +1,39 @@
package conditions
import (
"time"
"github.com/kerberos-io/agent/machinery/src/log"
"github.com/kerberos-io/agent/machinery/src/models"
)
func IsWithinTimeInterval(loc *time.Location, configuration *models.Configuration) (enabled bool) {
config := configuration.Config
timeEnabled := config.Time
enabled = true
if timeEnabled != "false" {
now := time.Now().In(loc)
weekday := now.Weekday()
hour := now.Hour()
minute := now.Minute()
second := now.Second()
if config.Timetable != nil && len(config.Timetable) > 0 {
timeInterval := config.Timetable[int(weekday)]
if timeInterval != nil {
start1 := timeInterval.Start1
end1 := timeInterval.End1
start2 := timeInterval.Start2
end2 := timeInterval.End2
currentTimeInSeconds := hour*60*60 + minute*60 + second
if (currentTimeInSeconds >= start1 && currentTimeInSeconds <= end1) ||
(currentTimeInSeconds >= start2 && currentTimeInSeconds <= end2) {
log.Log.Debug("conditions.timewindow.IsWithinTimeInterval(): time interval valid, enabling recording.")
} else {
log.Log.Info("conditions.timewindow.IsWithinTimeInterval(): time interval not valid, disabling recording.")
enabled = false
}
}
}
}
return
}
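`IsWithinTimeInterval` reduces the wall clock to seconds since midnight and tests it against the day's two configured windows. The core comparison, as a stdlib-only sketch (the helper name is hypothetical):

```go
package main

import "fmt"

// inTimetable reports whether a moment, given as seconds since
// midnight, falls inside either of the two configured windows,
// mirroring the start1/end1 and start2/end2 check above.
func inTimetable(seconds, start1, end1, start2, end2 int) bool {
	return (seconds >= start1 && seconds <= end1) ||
		(seconds >= start2 && seconds <= end2)
}

func main() {
	// 09:30:00 => 9*3600 + 30*60 = 34200 seconds since midnight.
	fmt.Println(inTimetable(34200, 32400, 43200, 50400, 61200)) // inside 09:00-12:00
	fmt.Println(inTimetable(46800, 32400, 43200, 50400, 61200)) // 13:00, outside both windows
}
```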

View File

@@ -0,0 +1,59 @@
package conditions
import (
"bytes"
"crypto/tls"
"fmt"
"net/http"
"os"
"time"
"github.com/kerberos-io/agent/machinery/src/log"
"github.com/kerberos-io/agent/machinery/src/models"
)
func IsValidUriResponse(configuration *models.Configuration) (enabled bool) {
config := configuration.Config
conditionURI := config.ConditionURI
enabled = true
if conditionURI != "" {
// We will send a POST request to the conditionURI, and expect a 200 response.
// In the payload we will send some information, so the other end can decide
// if it should enable or disable recording.
var client *http.Client
if os.Getenv("AGENT_TLS_INSECURE") == "true" {
tr := &http.Transport{
TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
}
client = &http.Client{Transport: tr}
} else {
client = &http.Client{}
}
var object = fmt.Sprintf(`{
"camera_id" : "%s",
"camera_name" : "%s",
"site_id" : "%s",
"hub_key" : "%s",
"timestamp" : "%s"
}`, config.Key, config.FriendlyName, config.HubSite, config.HubKey, time.Now().Format("2006-01-02 15:04:05"))
var jsonStr = []byte(object)
buffy := bytes.NewBuffer(jsonStr)
req, _ := http.NewRequest("POST", conditionURI, buffy)
req.Header.Set("Content-Type", "application/json")
resp, err := client.Do(req)
if resp != nil {
resp.Body.Close()
}
if err == nil && resp.StatusCode == 200 {
log.Log.Info("conditions.uri.IsValidUriResponse(): response 200, enabling recording.")
} else {
log.Log.Info("conditions.uri.IsValidUriResponse(): response not 200, disabling recording.")
enabled = false
}
}
return
}

View File

@@ -1,14 +1,12 @@
package components
package config
import (
"context"
"encoding/json"
"errors"
"image"
"io/ioutil"
"os"
"reflect"
"sort"
"strconv"
"strings"
"time"
@@ -20,25 +18,6 @@ import (
"go.mongodb.org/mongo-driver/bson"
)
func GetImageFromFilePath(configDirectory string) (image.Image, error) {
snapshotDirectory := configDirectory + "/data/snapshots"
files, err := ioutil.ReadDir(snapshotDirectory)
if err == nil && len(files) > 1 {
sort.Slice(files, func(i, j int) bool {
return files[i].ModTime().Before(files[j].ModTime())
})
filePath := configDirectory + "/data/snapshots/" + files[1].Name()
f, err := os.Open(filePath)
if err != nil {
return nil, err
}
defer f.Close()
image, _, err := image.Decode(f)
return image, err
}
return nil, errors.New("Could not find a snapshot in " + snapshotDirectory)
}
// ReadUserConfig Reads the user configuration of the Kerberos Open Source instance.
// This will return a models.User struct including the username, password,
// selected language, and if the installation was completed or not.
@@ -80,7 +59,7 @@ func OpenConfig(configDirectory string, configuration *models.Configuration) {
// Write to mongodb
client := database.New()
db := client.Database(database.DatabaseName)
db := client.Client.Database(database.DatabaseName)
collection := db.Collection("configuration")
var globalConfig models.Config
@@ -141,8 +120,13 @@ func OpenConfig(configDirectory string, configuration *models.Configuration) {
},
)
// Merge Config toplevel
// Reset main configuration Config.
configuration.Config = models.Config{}
// Merge the global settings in the main config
conjungo.Merge(&configuration.Config, configuration.GlobalConfig, opts)
// Now we might override some settings with the custom config
conjungo.Merge(&configuration.Config, configuration.CustomConfig, opts)
// Merge Kerberos Vault settings
@@ -151,12 +135,27 @@ func OpenConfig(configDirectory string, configuration *models.Configuration) {
conjungo.Merge(&kerberosvault, configuration.CustomConfig.KStorage, opts)
configuration.Config.KStorage = &kerberosvault
// Merge Secondary Kerberos Vault settings
var kerberosvaultSecondary models.KStorage
conjungo.Merge(&kerberosvaultSecondary, configuration.GlobalConfig.KStorageSecondary, opts)
conjungo.Merge(&kerberosvaultSecondary, configuration.CustomConfig.KStorageSecondary, opts)
configuration.Config.KStorageSecondary = &kerberosvaultSecondary
// Merge Kerberos S3 settings
var s3 models.S3
conjungo.Merge(&s3, configuration.GlobalConfig.S3, opts)
conjungo.Merge(&s3, configuration.CustomConfig.S3, opts)
configuration.Config.S3 = &s3
// Merge Encryption settings
var encryption models.Encryption
conjungo.Merge(&encryption, configuration.GlobalConfig.Encryption, opts)
conjungo.Merge(&encryption, configuration.CustomConfig.Encryption, opts)
configuration.Config.Encryption = &encryption
// Merge timetable manually because it's an array
configuration.Config.Timetable = configuration.CustomConfig.Timetable
// Cleanup
opts = nil
@@ -190,15 +189,19 @@ func OpenConfig(configDirectory string, configuration *models.Configuration) {
}
jsonFile.Close()
}
}
return
}
// This function will override the configuration with environment variables.
func OverrideWithEnvironmentVariables(configuration *models.Configuration) {
environmentVariables := os.Environ()
// Initialize the configuration for some new fields.
if configuration.Config.KStorageSecondary == nil {
configuration.Config.KStorageSecondary = &models.KStorage{}
}
for _, env := range environmentVariables {
if strings.Contains(env, "AGENT_") {
key := strings.Split(env, "=")[0]
@@ -210,7 +213,7 @@ func OverrideWithEnvironmentVariables(configuration *models.Configuration) {
configuration.Config.Key = value
break
case "AGENT_NAME":
configuration.Config.Name = value
configuration.Config.FriendlyName = value
break
case "AGENT_TIMEZONE":
configuration.Config.Timezone = value
@@ -236,7 +239,15 @@ func OverrideWithEnvironmentVariables(configuration *models.Configuration) {
configuration.Config.Capture.IPCamera.SubRTSP = value
break
/* ONVIF connection settings */
/* Base width and height for the liveview and motion regions */
case "AGENT_CAPTURE_IPCAMERA_BASE_WIDTH":
configuration.Config.Capture.IPCamera.BaseWidth, _ = strconv.Atoi(value)
break
case "AGENT_CAPTURE_IPCAMERA_BASE_HEIGHT":
configuration.Config.Capture.IPCamera.BaseHeight, _ = strconv.Atoi(value)
break
/* ONVIF connection settings */
case "AGENT_CAPTURE_IPCAMERA_ONVIF":
configuration.Config.Capture.IPCamera.ONVIF = value
break
@@ -389,10 +400,26 @@ func OverrideWithEnvironmentVariables(configuration *models.Configuration) {
configuration.Config.MQTTPassword = value
break
/* MQTT chunking of low-resolution images into multiple messages */
case "AGENT_CAPTURE_LIVEVIEW_CHUNKING":
configuration.Config.Capture.LiveviewChunking = value
break
/* Real-time streaming of keyframes to a MQTT topic */
case "AGENT_REALTIME_PROCESSING":
configuration.Config.RealtimeProcessing = value
break
case "AGENT_REALTIME_PROCESSING_TOPIC":
configuration.Config.RealtimeProcessingTopic = value
break
/* WebRTC settings for live-streaming (remote) */
case "AGENT_STUN_URI":
configuration.Config.STUNURI = value
break
case "AGENT_FORCE_TURN":
configuration.Config.ForceTurn = value
break
case "AGENT_TURN_URI":
configuration.Config.TURNURI = value
break
@@ -413,6 +440,9 @@ func OverrideWithEnvironmentVariables(configuration *models.Configuration) {
break
/* When connected and storing in Kerberos Hub (SAAS) */
case "AGENT_HUB_ENCRYPTION":
configuration.Config.HubEncryption = value
break
case "AGENT_HUB_URI":
configuration.Config.HubURI = value
break
@@ -429,7 +459,7 @@ func OverrideWithEnvironmentVariables(configuration *models.Configuration) {
configuration.Config.S3.Region = value
break
/* When storing in a Kerberos Vault */
/* When storing in a Vault */
case "AGENT_KERBEROSVAULT_URI":
configuration.Config.KStorage.URI = value
break
@@ -446,6 +476,37 @@ func OverrideWithEnvironmentVariables(configuration *models.Configuration) {
configuration.Config.KStorage.Directory = value
break
/* Retry policy and timeout */
case "AGENT_KERBEROSVAULT_MAX_RETRIES":
maxRetries, err := strconv.Atoi(value)
if err == nil {
configuration.Config.KStorage.MaxRetries = maxRetries
}
break
case "AGENT_KERBEROSVAULT_TIMEOUT":
timeout, err := strconv.Atoi(value)
if err == nil {
configuration.Config.KStorage.Timeout = timeout
}
break
/* When storing in a secondary Vault */
case "AGENT_KERBEROSVAULT_SECONDARY_URI":
configuration.Config.KStorageSecondary.URI = value
break
case "AGENT_KERBEROSVAULT_SECONDARY_ACCESS_KEY":
configuration.Config.KStorageSecondary.AccessKey = value
break
case "AGENT_KERBEROSVAULT_SECONDARY_SECRET_KEY":
configuration.Config.KStorageSecondary.SecretAccessKey = value
break
case "AGENT_KERBEROSVAULT_SECONDARY_PROVIDER":
configuration.Config.KStorageSecondary.Provider = value
break
case "AGENT_KERBEROSVAULT_SECONDARY_DIRECTORY":
configuration.Config.KStorageSecondary.Directory = value
break
/* When storing in dropbox */
case "AGENT_DROPBOX_ACCESS_TOKEN":
configuration.Config.Dropbox.AccessToken = value
@@ -453,9 +514,44 @@ func OverrideWithEnvironmentVariables(configuration *models.Configuration) {
case "AGENT_DROPBOX_DIRECTORY":
configuration.Config.Dropbox.Directory = value
break
/* When encryption is enabled */
case "AGENT_ENCRYPTION":
configuration.Config.Encryption.Enabled = value
break
case "AGENT_ENCRYPTION_RECORDINGS":
configuration.Config.Encryption.Recordings = value
break
case "AGENT_ENCRYPTION_FINGERPRINT":
configuration.Config.Encryption.Fingerprint = value
break
case "AGENT_ENCRYPTION_PRIVATE_KEY":
encryptionPrivateKey := strings.ReplaceAll(value, "\\n", "\n")
configuration.Config.Encryption.PrivateKey = encryptionPrivateKey
break
case "AGENT_ENCRYPTION_SYMMETRIC_KEY":
configuration.Config.Encryption.SymmetricKey = value
break
/* When signing is enabled */
case "AGENT_SIGNING":
configuration.Config.Signing.Enabled = value
break
case "AGENT_SIGNING_PRIVATE_KEY":
signingPrivateKey := strings.ReplaceAll(value, "\\n", "\n")
configuration.Config.Signing.PrivateKey = signingPrivateKey
break
}
}
}
// Signing is a new feature, so if empty we set default values.
if configuration.Config.Signing == nil || configuration.Config.Signing.PrivateKey == "" {
configuration.Config.Signing = &models.Signing{
Enabled: "true",
PrivateKey: "-----BEGIN PRIVATE KEY-----\nMIIJQgIBADANBgkqhkiG9w0BAQEFAASCCSwwggkoAgEAAoICAQDoSxjyw08lRxF4Yoqmcaewjq3XjB55dMy4tlN5MGLdr8aAPuNR9Mwh3jlh1bDpwQXNgZkHDV/q9bpdPGGi7SQo2xw+rDuo5Y1f3wdzz+iuCTPbzoGFalE+1PZlU5TEtUtlbt7MRc4pxTaLP3u0P3EtW3KnzcUarcJWZJYxzv7gqVNCA/47BN+1ptqjwz3LAlah5yaftEvVjkaANOsafUswbS4VT44XfSlbKgebORCKDuNgQiyhuV5gU+J0TOaqRWwwMAWV0UoScyJLfhHRBCrUwrCUTwqH9jfkB7pgRFsYoZJd4MKMeHJjFSum+QXCBqInSnwu8c2kJChiLMWqJ+mhpTdfUAmSkeUSStfbbcavIPbDABvMgzOcmYMIVXXe57twU0xdu3AqWLtc9kw1BkUgZblM9pSSpYrIDheEyMs2/hiLgXsIaM0nVQtqwrA7rbeEGuPblzA6hvHgwN9K6HaBqdlGSlpYZ0v3SWIMwmxRB+kIojlyuggm8Qa4mqL97GFDGl6gOBGlNUFTBUVEa3EaJ7NJpGobRGsh/9dXzcW4aYmT9WxlzTlIKksI1ro6KdRfuVWfEs4AnG8bVEJmofK8EUrueB9IdXlcJZB49xolnOZPFohtMe/0U7evQOQP3sZnX+KotCsE7OXJvL09oF58JKoqmK9lPp0+pFBU4g6NjQIDAQABAoICAA+RSWph1t+q5R3nxUxFTYMrhv5IjQe2mDxJpF3B409zolC9OHxgGUisobTY3pBqs0DtKbxUeH2A0ehUH/axEosWHcz3cmIbgxHE9kdlJ9B3Lmss6j/uw+PWutu1sgm5phaIFIvuNNRWhPB6yXUwU4sLRat1+Z9vTmIQiKdtLIrtJz/n2VDvrJxn1N+yAsE20fnrksFKyZuxVsJaZPiX/t5Yv1/z0LjFjVoL7GUA5/Si7csN4ftqEhUrkNr2BvcZlTyffrF4lZCXrtl76RNUaxhqIu3H0gFbV2UfBpuckkfAhNRpXJ4iFSxm4nQbk4ojV8+l21RFOBeDN2Z7Ocu6auP5MnzpopR66vmDCmPoid498VGgDzFQEVkOar8WAa4v9h85QgLKrth6FunmaWJUT6OggQD3yY58GSwp5+ARMETMBP2x6Eld+PGgqoJvPT1+l/e9gOw7/SJ+Wz6hRXZAm/eiXMppHtB7sfea5rscNanPjJkK9NvPM0MX9cq/iA6QjXuETkMbubjo+Cxk3ydZiIQmWQDAx/OgxTyHbeRCVhLPcAphX0clykCuHZpI9Mvvj643/LoE0mjTByWJXf/WuGJA8ElHkjSdokVJ7jumz8OZZHfq0+V7+la2opsObeQANHW5MLWrnHlRVzTGV0IRZDXh7h1ptUJ4ubdvw/GJ2NeTAoIBAQD0lXXdjYKWC4uZ4YlgydP8b1CGda9cBV5RcPt7q9Ya1R2E4ieYyohmzltopvdaOXdsTZzhtdzOzKF+2qNcbBKhBTleYZ8GN5RKbo7HwXWpzfCTjseKHOD/QPwvBKXzLVWNtXn1NrLR79Rv0wbkYF6DtoqpEPf5kMs4bx79yW+mz8FUgdEeMjKphx6Jd5RYlTUxS64K6bnK7gjHNCF2cwdxsh4B6EB649GKeNz4JXi+oQBmOcX5ncXnkJrbju+IjtCkQ40HINVNdX7XeEaaw6KGaImVjw61toPUuDaioYUojufayoyXaUJnDbHQ2tNekEpq5iwnenZCbUKWmSeRe7dLAoIBAQDzIscYujsrmPxiTj2prhG0v36NRNP99mShnnJGowiIs+UBS0EMdOmBFa2sC9uFs/VnreQNYPDJdfr7O5VK9kfbH/PSiiKJ+wVebfdAlWkJYH27JN2Kl2l/OsvRVelNvF3BWIYF46qzGxIM0axaz3T2ZAJ9SrUgeAYhak6uyM4fbexEWX
xDgPGu6C0jB6IAzmHJnnh+j5+4ZXqjVyUxBYtUsWXF/TXomVcT9jxj7aUmS2/Us0XTVOVNpALqqYcekrzsX/wX0OEi5HkivYXHcNaDHx3NuUf6KdYof5DwPUM76qe+5/kWlSIHP3M6rIFK3pYFUnkHn2E8jNWcO97Aio+HAoIBAA+bcff/TbPxbKkXIUMR3fsfx02tONFwbkJYKVQM9Q6lRsrx+4Dee7HDvUWCUgpp3FsG4NnuVvbDTBLiNMZzBwVLZgvFwvYMmePeBjJs/+sj/xQLamQ/z4O6S91cOJK589mlGPEy2lpXKYExQCFWnPFetp5vPMOqH62sOZgMQJmubDHOTt/UaDM1Mhenj8nPS6OnpqV/oKF4awr7Ip+CW5k/unZ4sZSl8PsbF06mZXwUngfn6+Av1y8dpSQZjONz6ZBx1w/7YmEc/EkXnbnGfhqBlTX7+P5TdTofvyzFjc+2vsjRYANRbjFRSGWBcTd5kaYcpfim8eDvQ+6EO2gnMt0CggEAH2ln1Y8B5AEQ4lZ/avOdP//ZhsDUrqPtnl/NHckkahzrwj4JumVEYbP+SxMBGoYEd4+kvgG/OhfvBBRPlm65G9tF8fZ8vdzbdba5UfO7rUV1GP+LS8OCErjy6imySaPDbR5Vul8Oh7NAor1YCidxUf/bvnovanF3QUvtvHEfCDp4YuA4yLPZBaLjaforePUw9w5tPNSravRZYs74dBvmQ1vj7S9ojpN5B5AxfyuNwaPPX+iFZec69MvywISEe3Ozysof1Kfc3lgsOkvIA9tVK32SqSh93xkWnQbWH+OaUxxe7bAko0FDMzKEXZk53wVg1nEwR8bUljEPy+6EOdXs8wKCAQEAsEOWYMY5m7HkeG2XTTvX7ECmmdGl/c4ZDVwzB4IPxqUG7XfLmtsON8YoKOEUpJoc4ANafLXzmU+esUGbH4Ph22IWgP9jzws7jxaN/Zoku64qrSjgEZFTRIpKyhFk/ImWbS9laBW4l+m0tqTTRqoE0QEJf/2uv/04q65zrA70X9z2+KTrAtqOiRQPWl/IxRe9U4OEeGL+oD+YlXKCDsnJ3rwUIOZgJx0HWZg7K35DKwqs1nVi56FBdljiTRKAjVLRedjgDCSfGS1yUZ3krHzpaPt1qgnT3rdtYcIdbYDr66V2/gEEaz6XMGHuTk/ewjzUJxq9UTVeXOCbkRPXgVJg1w==\n-----END PRIVATE KEY-----",
}
}
}
func SaveConfig(configDirectory string, config models.Config, configuration *models.Configuration, communication *models.Communication) error {
@@ -471,7 +567,9 @@ func SaveConfig(configDirectory string, config models.Config, configuration *mod
if communication.CameraConnected {
select {
case communication.HandleBootstrap <- "restart":
default:
log.Log.Info("config.main.SaveConfig(): update config, restart agent.")
case <-time.After(1 * time.Second):
log.Log.Info("config.main.SaveConfig(): update config, restart agent.")
}
}
@@ -484,12 +582,25 @@ func SaveConfig(configDirectory string, config models.Config, configuration *mod
}
func StoreConfig(configDirectory string, config models.Config) error {
// Encryption key can be set wrong.
if config.Encryption != nil {
encryptionPrivateKey := config.Encryption.PrivateKey
// Replace \\n by \n
encryptionPrivateKey = strings.ReplaceAll(encryptionPrivateKey, "\\n", "\n")
config.Encryption.PrivateKey = encryptionPrivateKey
}
// Reset the basewidth and baseheight
config.Capture.IPCamera.BaseWidth = 0
config.Capture.IPCamera.BaseHeight = 0
// Save into database
if os.Getenv("DEPLOYMENT") == "factory" || os.Getenv("MACHINERY_ENVIRONMENT") == "kubernetes" {
// Write to mongodb
client := database.New()
db := client.Database(database.DatabaseName)
db := client.Client.Database(database.DatabaseName)
collection := db.Collection("configuration")
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
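`OverrideWithEnvironmentVariables` above splits each `AGENT_*` entry on `=` and coerces numeric values with `strconv.Atoi`, keeping the existing value when parsing fails. That parse-or-keep behaviour can be sketched as a stdlib-only helper (the function name is hypothetical):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// overrideInt applies an environment entry of the form KEY=VALUE to
// an integer setting, keeping the current value when the entry does
// not match the key or the value is not a valid integer — the same
// behaviour as the AGENT_KERBEROSVAULT_MAX_RETRIES handling above.
func overrideInt(env, key string, current int) int {
	parts := strings.SplitN(env, "=", 2)
	if len(parts) != 2 || parts[0] != key {
		return current
	}
	if v, err := strconv.Atoi(parts[1]); err == nil {
		return v
	}
	return current
}

func main() {
	fmt.Println(overrideInt("AGENT_KERBEROSVAULT_MAX_RETRIES=5", "AGENT_KERBEROSVAULT_MAX_RETRIES", 3))    // overridden to 5
	fmt.Println(overrideInt("AGENT_KERBEROSVAULT_MAX_RETRIES=oops", "AGENT_KERBEROSVAULT_MAX_RETRIES", 3)) // unparsable, stays 3
}
```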

View File

@@ -15,12 +15,19 @@ type DB struct {
Client *mongo.Client
}
var TIMEOUT = 10 * time.Second
var _init_ctx sync.Once
var _instance *DB
var DatabaseName = "KerberosFactory"
func New() *mongo.Client {
var DatabaseName = os.Getenv("MONGODB_DATABASE_FACTORY")
func New() *DB {
if DatabaseName == "" {
DatabaseName = "KerberosFactory"
}
mongodbURI := os.Getenv("MONGODB_URI")
host := os.Getenv("MONGODB_HOST")
databaseCredentials := os.Getenv("MONGODB_DATABASE_CREDENTIALS")
replicaset := os.Getenv("MONGODB_REPLICASET")
@@ -28,28 +35,46 @@ func New() *mongo.Client {
password := os.Getenv("MONGODB_PASSWORD")
authentication := "SCRAM-SHA-256"
ctx, cancel := context.WithTimeout(context.Background(), TIMEOUT)
defer cancel()
_init_ctx.Do(func() {
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
_instance = new(DB)
mongodbURI := fmt.Sprintf("mongodb://%s:%s@%s", username, password, host)
if replicaset != "" {
mongodbURI = fmt.Sprintf("%s/?replicaSet=%s", mongodbURI, replicaset)
}
client, err := mongo.Connect(ctx, options.Client().ApplyURI(mongodbURI).SetAuth(options.Credential{
AuthMechanism: authentication,
AuthSource: databaseCredentials,
Username: username,
Password: password,
}))
if err != nil {
fmt.Printf("Error setting up mongodb connection: %+v\n", err)
os.Exit(1)
// We can also apply the complete URI
// e.g. "mongodb+srv://<username>:<password>@kerberos-hub.shhng.mongodb.net/?retryWrites=true&w=majority&appName=kerberos-hub"
if mongodbURI != "" {
serverAPI := options.ServerAPI(options.ServerAPIVersion1)
opts := options.Client().ApplyURI(mongodbURI).SetServerAPIOptions(serverAPI)
// Create a new client and connect to the server
client, err := mongo.Connect(ctx, opts)
if err != nil {
fmt.Printf("Error setting up mongodb connection: %+v\n", err)
os.Exit(1)
}
_instance.Client = client
} else {
// New MongoDB driver
mongodbURI := fmt.Sprintf("mongodb://%s:%s@%s", username, password, host)
if replicaset != "" {
mongodbURI = fmt.Sprintf("%s/?replicaSet=%s", mongodbURI, replicaset)
}
client, err := mongo.Connect(ctx, options.Client().ApplyURI(mongodbURI).SetAuth(options.Credential{
AuthMechanism: authentication,
AuthSource: databaseCredentials,
Username: username,
Password: password,
}))
if err != nil {
fmt.Printf("Error setting up mongodb connection: %+v\n", err)
os.Exit(1)
}
_instance.Client = client
}
_instance.Client = client
})
return _instance.Client
return _instance
}
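The `New()` constructor above guards initialization with `sync.Once`. A minimal self-contained sketch of the same singleton pattern (hypothetical `Conn` type standing in for `*mongo.Client`):

```go
package main

import (
	"fmt"
	"sync"
)

// Conn is a stand-in for the real *mongo.Client.
type Conn struct{ uri string }

var (
	once     sync.Once
	instance *Conn
)

// New returns the same *Conn on every call, no matter how many
// goroutines race on the first invocation; only the first URI wins.
func New(uri string) *Conn {
	once.Do(func() {
		instance = &Conn{uri: uri}
	})
	return instance
}

func main() {
	a := New("mongodb://first")
	b := New("mongodb://second") // ignored: already initialized
	fmt.Println(a == b, a.uri)   // true mongodb://first
}
```

Note that, as in the diff above, any arguments passed after the first call are silently ignored, which is why the environment variables are read before `_init_ctx.Do` runs.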

View File

@@ -0,0 +1,126 @@
package encryption
import (
"bytes"
"crypto"
"crypto/aes"
"crypto/cipher"
"crypto/md5"
"crypto/rand"
"crypto/rsa"
"crypto/sha256"
"encoding/base64"
"errors"
"hash"
)
func DecryptWithPrivateKey(ciphertext string, privateKey *rsa.PrivateKey) ([]byte, error) {
cipheredValue, err := base64.StdEncoding.DecodeString(ciphertext)
if err != nil {
return nil, err
}
out, err := rsa.DecryptPKCS1v15(nil, privateKey, cipheredValue)
return out, err
}
func SignWithPrivateKey(data []byte, privateKey *rsa.PrivateKey) ([]byte, error) {
hashed := sha256.Sum256(data)
signature, err := rsa.SignPKCS1v15(nil, privateKey, crypto.SHA256, hashed[:])
return signature, err
}
func AesEncrypt(content []byte, password string) ([]byte, error) {
salt := make([]byte, 8)
_, err := rand.Read(salt)
if err != nil {
return nil, err
}
key, iv, err := DefaultEvpKDF([]byte(password), salt)
if err != nil {
return nil, err
}
block, err := aes.NewCipher(key)
if err != nil {
return nil, err
}
mode := cipher.NewCBCEncrypter(block, iv)
cipherBytes := PKCS5Padding(content, aes.BlockSize)
mode.CryptBlocks(cipherBytes, cipherBytes)
cipherText := make([]byte, 16+len(cipherBytes))
copy(cipherText[:8], []byte("Salted__"))
copy(cipherText[8:16], salt)
copy(cipherText[16:], cipherBytes)
return cipherText, nil
}
func AesDecrypt(cipherText []byte, password string) ([]byte, error) {
if len(cipherText) < aes.BlockSize || string(cipherText[:8]) != "Salted__" {
return nil, errors.New("invalid crypto js aes encryption")
}
salt := cipherText[8:16]
cipherBytes := cipherText[16:]
key, iv, err := DefaultEvpKDF([]byte(password), salt)
if err != nil {
return nil, err
}
block, err := aes.NewCipher(key)
if err != nil {
return nil, err
}
mode := cipher.NewCBCDecrypter(block, iv)
mode.CryptBlocks(cipherBytes, cipherBytes)
result := PKCS5UnPadding(cipherBytes)
return result, nil
}
func EvpKDF(password []byte, salt []byte, keySize int, iterations int, hashAlgorithm string) ([]byte, error) {
var block []byte
var hasher hash.Hash
derivedKeyBytes := make([]byte, 0)
switch hashAlgorithm {
case "md5":
hasher = md5.New()
default:
return []byte{}, errors.New("hash algorithm not implemented")
}
for len(derivedKeyBytes) < keySize*4 {
if len(block) > 0 {
hasher.Write(block)
}
hasher.Write(password)
hasher.Write(salt)
block = hasher.Sum([]byte{})
hasher.Reset()
for i := 1; i < iterations; i++ {
hasher.Write(block)
block = hasher.Sum([]byte{})
hasher.Reset()
}
derivedKeyBytes = append(derivedKeyBytes, block...)
}
return derivedKeyBytes[:keySize*4], nil
}
func DefaultEvpKDF(password []byte, salt []byte) (key []byte, iv []byte, err error) {
keySize := 256 / 32
ivSize := 128 / 32
derivedKeyBytes, err := EvpKDF(password, salt, keySize+ivSize, 1, "md5")
if err != nil {
return []byte{}, []byte{}, err
}
return derivedKeyBytes[:keySize*4], derivedKeyBytes[keySize*4:], nil
}
func PKCS5UnPadding(src []byte) []byte {
length := len(src)
if length == 0 {
return src
}
unpadding := int(src[length-1])
if unpadding > length {
return src
}
return src[:(length - unpadding)]
}
func PKCS5Padding(src []byte, blockSize int) []byte {
padding := blockSize - len(src)%blockSize
padtext := bytes.Repeat([]byte{byte(padding)}, padding)
return append(src, padtext...)
}
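`AesEncrypt` emits the OpenSSL/CryptoJS layout `"Salted__" + 8-byte salt + ciphertext`, with key and IV derived by `EvpKDF`. A minimal sketch of that derivation loop (MD5, one iteration, as `DefaultEvpKDF` uses):

```go
package main

import (
	"crypto/md5"
	"fmt"
)

// evpKDF derives keyLen+ivLen bytes from password and salt the way
// OpenSSL's EVP_BytesToKey does with MD5 and a single iteration:
// each round hashes (previous block || password || salt).
func evpKDF(password, salt []byte, keyLen, ivLen int) (key, iv []byte) {
	var block []byte
	var derived []byte
	for len(derived) < keyLen+ivLen {
		h := md5.New()
		h.Write(block)
		h.Write(password)
		h.Write(salt)
		block = h.Sum(nil)
		derived = append(derived, block...)
	}
	return derived[:keyLen], derived[keyLen : keyLen+ivLen]
}

func main() {
	salt := []byte("12345678") // in AesEncrypt this comes from crypto/rand
	key, iv := evpKDF([]byte("secret"), salt, 32, 16)
	fmt.Println(len(key), len(iv)) // 32 16
}
```

The derivation is deterministic for a given password and salt, which is what lets a CryptoJS client recompute the same AES-256-CBC key and IV from the `Salted__` header.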

View File

@@ -12,7 +12,6 @@ import (
// The logging library being used everywhere.
var Log = Logging{
Logger: "logrus",
Level: "debug",
}
// -----------------
@@ -45,19 +44,44 @@ func ConfigureGoLogging(configDirectory string, timezone *time.Location) {
// This is logrus
// -> github.com/sirupsen/logrus
func ConfigureLogrus(timezone *time.Location) {
// Log as JSON instead of the default ASCII formatter.
logrus.SetFormatter(LocalTimeZoneFormatter{
Timezone: timezone,
Formatter: &logrus.JSONFormatter{},
}) // Use local timezone for providing datetime in logs!
func ConfigureLogrus(level string, output string, timezone *time.Location) {
if output == "json" {
// Log as JSON instead of the default ASCII formatter.
logrus.SetFormatter(LocalTimeZoneFormatter{
Timezone: timezone,
Formatter: &logrus.JSONFormatter{},
})
} else if output == "text" {
// Log as text with colors.
formatter := logrus.TextFormatter{
ForceColors: true,
FullTimestamp: true,
}
logrus.SetFormatter(LocalTimeZoneFormatter{
Timezone: timezone,
Formatter: &formatter,
})
}
// Use local timezone for providing datetime in logs!
// Output to stdout instead of the default stderr
// Can be any io.Writer, see below for File example
logrus.SetOutput(os.Stdout)
// Only log the warning severity or above.
logrus.SetLevel(logrus.InfoLevel)
logLevel := logrus.InfoLevel
if level == "error" {
logLevel = logrus.ErrorLevel
} else if level == "debug" {
logLevel = logrus.DebugLevel
} else if level == "fatal" {
logLevel = logrus.FatalLevel
} else if level == "warning" {
logLevel = logrus.WarnLevel
}
logrus.SetLevel(logLevel)
}
type LocalTimeZoneFormatter struct {
@@ -72,15 +96,14 @@ func (u LocalTimeZoneFormatter) Format(e *logrus.Entry) ([]byte, error) {
type Logging struct {
Logger string
Level string
}
func (self *Logging) Init(configDirectory string, timezone *time.Location) {
func (self *Logging) Init(level string, logoutput string, configDirectory string, timezone *time.Location) {
switch self.Logger {
case "go-logging":
ConfigureGoLogging(configDirectory, timezone)
case "logrus":
ConfigureLogrus(timezone)
ConfigureLogrus(level, logoutput, timezone)
default:
}
}
@@ -95,6 +118,16 @@ func (self *Logging) Info(sentence string) {
}
}
func (self *Logging) Infof(format string, args ...interface{}) {
switch self.Logger {
case "go-logging":
gologging.Infof(format, args...)
case "logrus":
logrus.Infof(format, args...)
default:
}
}
func (self *Logging) Warning(sentence string) {
switch self.Logger {
case "go-logging":
@@ -115,6 +148,16 @@ func (self *Logging) Debug(sentence string) {
}
}
func (self *Logging) Debugf(format string, args ...interface{}) {
switch self.Logger {
case "go-logging":
gologging.Debugf(format, args...)
case "logrus":
logrus.Debugf(format, args...)
default:
}
}
func (self *Logging) Error(sentence string) {
switch self.Logger {
case "go-logging":

View File

@@ -0,0 +1,6 @@
package models
type AudioDataPartial struct {
Timestamp int64 `json:"timestamp" bson:"timestamp"`
Data []int16 `json:"data" bson:"data"`
}

View File

@@ -2,11 +2,9 @@ package models
import (
"context"
"sync"
"sync/atomic"
"github.com/kerberos-io/joy4/av/pubsub"
"github.com/kerberos-io/joy4/cgo/ffmpeg"
"github.com/kerberos-io/agent/machinery/src/packets"
"github.com/tevino/abool"
)
@@ -17,25 +15,27 @@ type Communication struct {
CancelContext *context.CancelFunc
PackageCounter *atomic.Value
LastPacketTimer *atomic.Value
PackageCounterSub *atomic.Value
LastPacketTimerSub *atomic.Value
CloudTimestamp *atomic.Value
HandleBootstrap chan string
HandleStream chan string
HandleSubStream chan string
HandleMotion chan MotionDataPartial
HandleAudio chan AudioDataPartial
HandleUpload chan string
HandleHeartBeat chan string
HandleLiveSD chan int64
HandleLiveHDKeepalive chan string
HandleLiveHDHandshake chan SDPPayload
HandleLiveHDHandshake chan RequestHDStreamPayload
HandleLiveHDPeers chan string
HandleONVIF chan OnvifAction
IsConfiguring *abool.AtomicBool
Queue *pubsub.Queue
SubQueue *pubsub.Queue
DecoderMutex *sync.Mutex
SubDecoderMutex *sync.Mutex
Decoder *ffmpeg.VideoDecoder
SubDecoder *ffmpeg.VideoDecoder
Queue *packets.Queue
SubQueue *packets.Queue
Image string
CameraConnected bool
MainStreamConnected bool
SubStreamConnected bool
HasBackChannel bool
}

View File

@@ -12,36 +12,43 @@ type Configuration struct {
// Config is the highlevel struct which contains all the configuration of
// your Kerberos Open Source instance.
type Config struct {
Type string `json:"type"`
Key string `json:"key"`
Name string `json:"name"`
FriendlyName string `json:"friendly_name"`
Time string `json:"time" bson:"time"`
Offline string `json:"offline"`
AutoClean string `json:"auto_clean"`
RemoveAfterUpload string `json:"remove_after_upload"`
MaxDirectorySize int64 `json:"max_directory_size"`
Timezone string `json:"timezone,omitempty" bson:"timezone,omitempty"`
Capture Capture `json:"capture"`
Timetable []*Timetable `json:"timetable"`
Region *Region `json:"region"`
Cloud string `json:"cloud" bson:"cloud"`
S3 *S3 `json:"s3,omitempty" bson:"s3,omitempty"`
KStorage *KStorage `json:"kstorage,omitempty" bson:"kstorage,omitempty"`
Dropbox *Dropbox `json:"dropbox,omitempty" bson:"dropbox,omitempty"`
MQTTURI string `json:"mqtturi" bson:"mqtturi,omitempty"`
MQTTUsername string `json:"mqtt_username" bson:"mqtt_username"`
MQTTPassword string `json:"mqtt_password" bson:"mqtt_password"`
STUNURI string `json:"stunuri" bson:"stunuri"`
TURNURI string `json:"turnuri" bson:"turnuri"`
TURNUsername string `json:"turn_username" bson:"turn_username"`
TURNPassword string `json:"turn_password" bson:"turn_password"`
HeartbeatURI string `json:"heartbeaturi" bson:"heartbeaturi"` /*obsolete*/
HubURI string `json:"hub_uri" bson:"hub_uri"`
HubKey string `json:"hub_key" bson:"hub_key"`
HubPrivateKey string `json:"hub_private_key" bson:"hub_private_key"`
HubSite string `json:"hub_site" bson:"hub_site"`
ConditionURI string `json:"condition_uri" bson:"condition_uri"`
Type string `json:"type"`
Key string `json:"key"`
Name string `json:"name"`
FriendlyName string `json:"friendly_name"`
Time string `json:"time" bson:"time"`
Offline string `json:"offline"`
AutoClean string `json:"auto_clean"`
RemoveAfterUpload string `json:"remove_after_upload"`
MaxDirectorySize int64 `json:"max_directory_size"`
Timezone string `json:"timezone"`
Capture Capture `json:"capture"`
Timetable []*Timetable `json:"timetable"`
Region *Region `json:"region"`
Cloud string `json:"cloud" bson:"cloud"`
S3 *S3 `json:"s3,omitempty" bson:"s3,omitempty"`
KStorage *KStorage `json:"kstorage,omitempty" bson:"kstorage,omitempty"`
KStorageSecondary *KStorage `json:"kstorage_secondary,omitempty" bson:"kstorage_secondary,omitempty"`
Dropbox *Dropbox `json:"dropbox,omitempty" bson:"dropbox,omitempty"`
MQTTURI string `json:"mqtturi" bson:"mqtturi,omitempty"`
MQTTUsername string `json:"mqtt_username" bson:"mqtt_username"`
MQTTPassword string `json:"mqtt_password" bson:"mqtt_password"`
STUNURI string `json:"stunuri" bson:"stunuri"`
ForceTurn string `json:"turn_force" bson:"turn_force"`
TURNURI string `json:"turnuri" bson:"turnuri"`
TURNUsername string `json:"turn_username" bson:"turn_username"`
TURNPassword string `json:"turn_password" bson:"turn_password"`
HeartbeatURI string `json:"heartbeaturi" bson:"heartbeaturi"` /*obsolete*/
HubEncryption string `json:"hub_encryption" bson:"hub_encryption"`
HubURI string `json:"hub_uri" bson:"hub_uri"`
HubKey string `json:"hub_key" bson:"hub_key"`
HubPrivateKey string `json:"hub_private_key" bson:"hub_private_key"`
HubSite string `json:"hub_site" bson:"hub_site"`
ConditionURI string `json:"condition_uri" bson:"condition_uri"`
Encryption *Encryption `json:"encryption,omitempty" bson:"encryption,omitempty"`
Signing *Signing `json:"signing,omitempty" bson:"signing,omitempty"`
RealtimeProcessing string `json:"realtimeprocessing,omitempty" bson:"realtimeprocessing,omitempty"`
RealtimeProcessingTopic string `json:"realtimeprocessing_topic" bson:"realtimeprocessing_topic"`
}
// Capture defines which camera type (Id) you are using (IP, USB or Raspberry Pi camera),
@@ -55,9 +62,11 @@ type Capture struct {
Snapshots string `json:"snapshots,omitempty"`
Motion string `json:"motion,omitempty"`
Liveview string `json:"liveview,omitempty"`
LiveviewChunking string `json:"liveview_chunking,omitempty" bson:"liveview_chunking,omitempty"`
Continuous string `json:"continuous,omitempty"`
PostRecording int64 `json:"postrecording"`
PreRecording int64 `json:"prerecording"`
GopSize int `json:"gopsize,omitempty" bson:"gopsize,omitempty"` // GOP size in seconds, used for pre-recording
MaxLengthRecording int64 `json:"maxlengthrecording"`
TranscodingWebRTC string `json:"transcodingwebrtc"`
TranscodingResolution int64 `json:"transcodingresolution"`
@@ -70,13 +79,28 @@ type Capture struct {
// IPCamera configuration, such as the RTSP url of the IPCamera and the FPS.
// Also includes ONVIF integration
type IPCamera struct {
RTSP string `json:"rtsp"`
SubRTSP string `json:"sub_rtsp"`
FPS string `json:"fps"`
ONVIF string `json:"onvif,omitempty" bson:"onvif"`
ONVIFXAddr string `json:"onvif_xaddr,omitempty" bson:"onvif_xaddr"`
ONVIFUsername string `json:"onvif_username,omitempty" bson:"onvif_username"`
ONVIFPassword string `json:"onvif_password,omitempty" bson:"onvif_password"`
RTSP string `json:"rtsp"`
Width int `json:"width"`
Height int `json:"height"`
FPS string `json:"fps"`
SubRTSP string `json:"sub_rtsp"`
SubWidth int `json:"sub_width"`
SubHeight int `json:"sub_height"`
BaseWidth int `json:"base_width"`
BaseHeight int `json:"base_height"`
SubFPS string `json:"sub_fps"`
ONVIF string `json:"onvif,omitempty" bson:"onvif"`
ONVIFXAddr string `json:"onvif_xaddr" bson:"onvif_xaddr"`
ONVIFUsername string `json:"onvif_username" bson:"onvif_username"`
ONVIFPassword string `json:"onvif_password" bson:"onvif_password"`
SPSNALUs [][]byte `json:"sps_nalus,omitempty" bson:"sps_nalus,omitempty"`
PPSNALUs [][]byte `json:"pps_nalus,omitempty" bson:"pps_nalus,omitempty"`
VPSNALUs [][]byte `json:"vps_nalus,omitempty" bson:"vps_nalus,omitempty"`
SampleRate int `json:"sample_rate,omitempty" bson:"sample_rate,omitempty"`
Channels int `json:"channels,omitempty" bson:"channels,omitempty"`
}
// USBCamera configuration, such as the device path (/dev/video*)
@@ -148,6 +172,8 @@ type KStorage struct {
SecretAccessKey string `json:"secret_access_key,omitempty" bson:"secret_access_key,omitempty"`
Provider string `json:"provider,omitempty" bson:"provider,omitempty"`
Directory string `json:"directory,omitempty" bson:"directory,omitempty"`
MaxRetries int `json:"max_retries,omitempty" bson:"max_retries,omitempty"`
Timeout int `json:"timeout,omitempty" bson:"timeout,omitempty"`
}
// Dropbox integration
@@ -155,3 +181,18 @@ type Dropbox struct {
AccessToken string `json:"access_token,omitempty" bson:"access_token,omitempty"`
Directory string `json:"directory,omitempty" bson:"directory,omitempty"`
}
// Encryption
type Encryption struct {
Enabled string `json:"enabled" bson:"enabled"`
Recordings string `json:"recordings" bson:"recordings"`
Fingerprint string `json:"fingerprint" bson:"fingerprint"`
PrivateKey string `json:"private_key" bson:"private_key"`
SymmetricKey string `json:"symmetric_key" bson:"symmetric_key"`
}
// Signing
type Signing struct {
Enabled string `json:"enabled" bson:"enabled"`
PrivateKey string `json:"private_key" bson:"private_key"`
}

View File

@@ -0,0 +1,201 @@
package models
import (
"crypto/rsa"
"crypto/x509"
"encoding/base64"
"encoding/json"
"encoding/pem"
"io"
"strings"
"time"
"github.com/gofrs/uuid"
"github.com/kerberos-io/agent/machinery/src/encryption"
"github.com/kerberos-io/agent/machinery/src/log"
)
func PackageMQTTMessage(configuration *Configuration, msg Message) ([]byte, error) {
// Create a Version 4 UUID.
u2, err := uuid.NewV4()
if err != nil {
log.Log.Error("failed to generate UUID: " + err.Error())
}
// We'll generate a unique id, and encrypt / decrypt it using the private key if available.
msg.Mid = u2.String()
msg.DeviceId = msg.Payload.DeviceId
msg.Timestamp = time.Now().Unix()
// Configuration
config := configuration.Config
// Next to hiding the message, we can also encrypt it using your own private key.
// Which is not stored in a remote environment (hence you are the only one owning it).
msg.Encrypted = false
if config.Encryption != nil && config.Encryption.Enabled == "true" {
msg.Encrypted = true
}
msg.PublicKey = ""
msg.Fingerprint = ""
if msg.Encrypted {
pload := msg.Payload
// Marshal the payload to JSON
data, err := json.Marshal(pload)
if err != nil {
log.Log.Error("models.mqtt.PackageMQTTMessage(): failed to marshal payload: " + err.Error())
}
// Encrypt the value
privateKey := configuration.Config.Encryption.PrivateKey
r := strings.NewReader(privateKey)
pemBytes, _ := io.ReadAll(r)
block, _ := pem.Decode(pemBytes)
if block == nil {
log.Log.Error("models.mqtt.PackageMQTTMessage(): error decoding PEM block containing private key")
} else {
// Parse private key
b := block.Bytes
key, err := x509.ParsePKCS8PrivateKey(b)
if err != nil {
log.Log.Error("models.mqtt.PackageMQTTMessage(): error parsing private key: " + err.Error())
}
// Convert the key to *rsa.PrivateKey
rsaKey, _ := key.(*rsa.PrivateKey)
// Encrypt the payload with the configured symmetric key
if config.Encryption != nil && config.Encryption.SymmetricKey != "" {
k := config.Encryption.SymmetricKey
encryptedValue, err := encryption.AesEncrypt(data, k)
if err == nil {
data := base64.StdEncoding.EncodeToString(encryptedValue)
// Sign the encrypted value
signature, err := encryption.SignWithPrivateKey([]byte(data), rsaKey)
if err == nil {
base64Signature := base64.StdEncoding.EncodeToString(signature)
msg.Payload.EncryptedValue = data
msg.Payload.Signature = base64Signature
msg.Payload.Value = make(map[string]interface{})
}
}
}
}
}
// We'll hide the message (by default in latest version)
// We will encrypt using the Kerberos Hub private key if set.
msg.Hidden = false
if config.HubEncryption == "true" && config.HubPrivateKey != "" {
msg.Hidden = true
}
if msg.Hidden {
pload := msg.Payload
// Marshal the payload to JSON
data, err := json.Marshal(pload)
if err != nil {
msg.Hidden = false
} else {
k := config.HubPrivateKey
encryptedValue, err := encryption.AesEncrypt(data, k)
if err == nil {
data := base64.StdEncoding.EncodeToString(encryptedValue)
msg.Payload.HiddenValue = data
msg.Payload.EncryptedValue = ""
msg.Payload.Signature = ""
msg.Payload.Value = make(map[string]interface{})
}
}
}
payload, err := json.Marshal(msg)
return payload, err
}
// The message structure which is used to send over
// and receive messages from the MQTT broker
type Message struct {
Mid string `json:"mid"`
DeviceId string `json:"device_id"`
Timestamp int64 `json:"timestamp"`
Encrypted bool `json:"encrypted"`
Hidden bool `json:"hidden"`
PublicKey string `json:"public_key"`
Fingerprint string `json:"fingerprint"`
Payload Payload `json:"payload"`
}
// The payload structure which is used to send over
// and receive messages from the MQTT broker
type Payload struct {
Version string `json:"version"` // Version of the message, e.g. "1.0"
Action string `json:"action"`
DeviceId string `json:"device_id"`
Signature string `json:"signature"`
EncryptedValue string `json:"encrypted_value"`
HiddenValue string `json:"hidden_value"`
Value map[string]interface{} `json:"value"`
}
// We received an audio input
type AudioPayload struct {
Timestamp int64 `json:"timestamp"` // timestamp of the recording request.
Data []int16 `json:"data"`
}
// We received a recording request, we'll send it to the motion handler.
type RecordPayload struct {
Timestamp int64 `json:"timestamp"` // timestamp of the recording request.
}
// We received a preset position request, we'll request it through onvif and send it back.
type PTZPositionPayload struct {
Timestamp int64 `json:"timestamp"` // timestamp of the preset request.
}
// We received a request config request, we'll fetch the current config and send it back.
type RequestConfigPayload struct {
Timestamp int64 `json:"timestamp"` // timestamp of the preset request.
}
// We received a update config request, we'll update the current config and send a confirmation back.
type UpdateConfigPayload struct {
Timestamp int64 `json:"timestamp"` // timestamp of the preset request.
Config Config `json:"config"`
}
// We received a request SD stream request
type RequestSDStreamPayload struct {
Timestamp int64 `json:"timestamp"` // timestamp
}
// We received a request HD stream request
type RequestHDStreamPayload struct {
Timestamp int64 `json:"timestamp"` // timestamp
HubKey string `json:"hub_key"` // hub key
SessionID string `json:"session_id"` // session id
SessionDescription string `json:"session_description"` // session description
}
// We received a receive HD candidates request
type ReceiveHDCandidatesPayload struct {
Timestamp int64 `json:"timestamp"` // timestamp
SessionID string `json:"session_id"` // session id
Candidate string `json:"candidate"` // candidate
}
type NavigatePTZPayload struct {
Timestamp int64 `json:"timestamp"` // timestamp
DeviceId string `json:"device_id"` // device id
Action string `json:"action"` // action
}
type TriggerRelay struct {
Timestamp int64 `json:"timestamp"` // timestamp
DeviceId string `json:"device_id"` // device id
Token string `json:"token"` // token
}

View File

@@ -15,4 +15,10 @@ type OnvifActionPTZ struct {
X float64 `json:"x" bson:"x"`
Y float64 `json:"y" bson:"y"`
Z float64 `json:"z" bson:"z"`
Preset string `json:"preset" bson:"preset"`
}
type OnvifActionPreset struct {
Name string `json:"name" bson:"name"`
Token string `json:"token" bson:"token"`
}

View File

@@ -29,3 +29,8 @@ type OnvifZoom struct {
OnvifCredentials OnvifCredentials `json:"onvif_credentials,omitempty" bson:"onvif_credentials"`
Zoom float64 `json:"zoom,omitempty" bson:"zoom"`
}
type OnvifPreset struct {
OnvifCredentials OnvifCredentials `json:"onvif_credentials,omitempty" bson:"onvif_credentials"`
Preset string `json:"preset,omitempty" bson:"preset"`
}

View File

@@ -1,8 +1,9 @@
package models
type MotionDataPartial struct {
Timestamp int64 `json:"timestamp" bson:"timestamp"`
NumberOfChanges int `json:"numberOfChanges" bson:"numberOfChanges"`
Timestamp int64 `json:"timestamp" bson:"timestamp"`
NumberOfChanges int `json:"numberOfChanges" bson:"numberOfChanges"`
Rectangle MotionRectangle `json:"rectangle" bson:"rectangle"`
}
type MotionDataFull struct {
@@ -14,3 +15,10 @@ type MotionDataFull struct {
NumberOfChanges int `json:"numberOfChanges" bson:"numberOfChanges"`
Token int `json:"token" bson:"token"`
}
type MotionRectangle struct {
X int `json:"x" bson:"x"`
Y int `json:"y" bson:"y"`
Width int `json:"width" bson:"width"`
Height int `json:"height" bson:"height"`
}

View File

@@ -0,0 +1,15 @@
package models
import "time"
// The OutputMessage contains the relevant information
// to specify the type of triggers we want to execute.
type OutputMessage struct {
Name string
Outputs []string
Trigger string
Timestamp time.Time
File string
CameraId string
SiteId string
}

File diff suppressed because it is too large.

View File

@@ -0,0 +1,59 @@
package outputs
import (
"github.com/kerberos-io/agent/machinery/src/log"
"github.com/kerberos-io/agent/machinery/src/models"
)
type Output interface {
// Triggers the integration
Trigger(message *models.OutputMessage) error
}
func Execute(message *models.OutputMessage) (err error) {
for _, output := range message.Outputs {
switch output {
case "slack":
slack := &SlackOutput{}
err = slack.Trigger(message)
if err == nil {
log.Log.Debug("outputs.main.Execute(slack): message was processed by output.")
} else {
log.Log.Error("outputs.main.Execute(slack): " + err.Error())
}
case "webhook":
webhook := &WebhookOutput{}
err = webhook.Trigger(message)
if err == nil {
log.Log.Debug("outputs.main.Execute(webhook): message was processed by output.")
} else {
log.Log.Error("outputs.main.Execute(webhook): " + err.Error())
}
case "onvif_relay":
onvif := &OnvifRelayOutput{}
err = onvif.Trigger(message)
if err == nil {
log.Log.Debug("outputs.main.Execute(onvif): message was processed by output.")
} else {
log.Log.Error("outputs.main.Execute(onvif): " + err.Error())
}
case "script":
script := &ScriptOutput{}
err = script.Trigger(message)
if err == nil {
log.Log.Debug("outputs.main.Execute(script): message was processed by output.")
} else {
log.Log.Error("outputs.main.Execute(script): " + err.Error())
}
}
}
return err
}

View File

@@ -0,0 +1,12 @@
package outputs
import "github.com/kerberos-io/agent/machinery/src/models"
type OnvifRelayOutput struct {
Output
}
func (o *OnvifRelayOutput) Trigger(message *models.OutputMessage) (err error) {
err = nil
return err
}

View File

@@ -0,0 +1,12 @@
package outputs
import "github.com/kerberos-io/agent/machinery/src/models"
type ScriptOutput struct {
Output
}
func (scr *ScriptOutput) Trigger(message *models.OutputMessage) (err error) {
err = nil
return err
}

View File

@@ -0,0 +1,12 @@
package outputs
import "github.com/kerberos-io/agent/machinery/src/models"
type SlackOutput struct {
Output
}
func (s *SlackOutput) Trigger(message *models.OutputMessage) (err error) {
err = nil
return err
}

View File

@@ -0,0 +1,12 @@
package outputs
import "github.com/kerberos-io/agent/machinery/src/models"
type WebhookOutput struct {
Output
}
func (w *WebhookOutput) Trigger(message *models.OutputMessage) (err error) {
err = nil
return err
}

View File

@@ -0,0 +1,69 @@
package packets
type Buf struct {
Head, Tail BufPos
pkts []Packet
Size int
Count int
}
func NewBuf() *Buf {
return &Buf{
pkts: make([]Packet, 64),
}
}
func (self *Buf) Pop() Packet {
if self.Count == 0 {
panic("packets.Buf: Pop() when count == 0")
}
i := int(self.Head) & (len(self.pkts) - 1)
pkt := self.pkts[i]
self.pkts[i] = Packet{}
self.Size -= len(pkt.Data)
self.Head++
self.Count--
return pkt
}
func (self *Buf) grow() {
newpkts := make([]Packet, len(self.pkts)*2)
for i := self.Head; i.LT(self.Tail); i++ {
newpkts[int(i)&(len(newpkts)-1)] = self.pkts[int(i)&(len(self.pkts)-1)]
}
self.pkts = newpkts
}
func (self *Buf) Push(pkt Packet) {
if self.Count == len(self.pkts) {
self.grow()
}
self.pkts[int(self.Tail)&(len(self.pkts)-1)] = pkt
self.Tail++
self.Count++
self.Size += len(pkt.Data)
}
func (self *Buf) Get(pos BufPos) Packet {
return self.pkts[int(pos)&(len(self.pkts)-1)]
}
func (self *Buf) IsValidPos(pos BufPos) bool {
return pos.GE(self.Head) && pos.LT(self.Tail)
}
type BufPos int
func (self BufPos) LT(pos BufPos) bool {
return self-pos < 0
}
func (self BufPos) GE(pos BufPos) bool {
return self-pos >= 0
}
func (self BufPos) GT(pos BufPos) bool {
return self-pos > 0
}
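`BufPos` compares positions by signed subtraction, so the ordering stays correct even if the counter wraps around; the index math `int(pos) & (len(pkts)-1)` likewise assumes a power-of-two capacity. A small self-contained sketch of the comparison trick:

```go
package main

import "fmt"

// BufPos mirrors the ring-buffer position type above: positions grow
// forever and are compared via signed subtraction, which is wraparound-safe
// in Go's defined two's-complement overflow semantics.
type BufPos int

func (self BufPos) LT(pos BufPos) bool { return self-pos < 0 }
func (self BufPos) GE(pos BufPos) bool { return self-pos >= 0 }

func main() {
	const maxInt = int(^uint(0) >> 1)
	a := BufPos(maxInt) // just before wraparound
	b := a + 3          // wraps to a negative value
	// Direct comparison (b > a) would be wrong here, but the
	// subtraction-based ordering still holds.
	fmt.Println(b.GE(a), a.LT(b)) // true true
}
```

This is the same trick used by TCP sequence-number comparison: as long as two positions are within half the integer range of each other, subtraction gives the correct order.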

View File

@@ -0,0 +1,23 @@
package packets
import (
"time"
"github.com/pion/rtp"
)
// Packet represents an RTP Packet
type Packet struct {
Packet *rtp.Packet
IsAudio bool // packet is audio
IsVideo bool // packet is video
IsKeyFrame bool // video packet is key frame
Idx int8 // stream index in container format
Codec string // codec name
CompositionTime int64 // packet presentation time minus decode time for H264 B-Frame
Time int64 // packet decode time
TimeLegacy time.Duration
CurrentTime int64 // current time in milliseconds (UNIX timestamp)
Data []byte // packet data
Gopsize int // size of the GOP
}

View File

@@ -0,0 +1,229 @@
// Package packets implements a publisher-subscriber model used in multi-channel streaming.
package packets
import (
"io"
"sync"
)
// time
// ----------------->
//
// V-A-V-V-A-V-V-A-V-V
// | |
// 0 5 10
// head tail
// oldest latest
//
// One publisher and multiple subscribers thread-safe packet buffer queue.
type Queue struct {
buf *Buf
head, tail int
lock *sync.RWMutex
cond *sync.Cond
curgopcount, maxgopcount int
streams []Stream
videoidx int
closed bool
}
func NewQueue() *Queue {
q := &Queue{}
q.buf = NewBuf()
q.maxgopcount = 2
q.lock = &sync.RWMutex{}
q.cond = sync.NewCond(q.lock.RLocker())
q.videoidx = -1
return q
}
func (self *Queue) SetMaxGopCount(n int) {
self.lock.Lock()
self.maxgopcount = n
self.lock.Unlock()
return
}
func (self *Queue) GetMaxGopCount() int {
self.lock.RLock()
n := self.maxgopcount
self.lock.RUnlock()
return n
}
func (self *Queue) WriteHeader(streams []Stream) error {
self.lock.Lock()
self.streams = streams
for i, stream := range streams {
if stream.IsVideo {
self.videoidx = i
}
}
self.cond.Broadcast()
self.lock.Unlock()
return nil
}
func (self *Queue) WriteTrailer() error {
return nil
}
// After Close() called, all QueueCursor's ReadPacket will return io.EOF.
func (self *Queue) Close() (err error) {
self.lock.Lock()
self.closed = true
self.cond.Broadcast()
// Drain the buffer so packet data can be garbage collected;
// blocked QueueCursor ReadPacket calls will return io.EOF.
for self.buf.Count > 0 {
self.buf.Pop()
}
self.lock.Unlock()
return
}
func (self *Queue) GetSize() int {
self.lock.RLock()
n := self.buf.Count
self.lock.RUnlock()
return n
}
// Put packet into buffer, old packets will be discarded.
func (self *Queue) WritePacket(pkt Packet) (err error) {
self.lock.Lock()
self.buf.Push(pkt)
if pkt.Idx == int8(self.videoidx) && pkt.IsKeyFrame {
self.curgopcount++
}
for self.curgopcount >= self.maxgopcount && self.buf.Count > 1 {
pkt := self.buf.Pop()
if pkt.Idx == int8(self.videoidx) && pkt.IsKeyFrame {
self.curgopcount--
}
if self.curgopcount < self.maxgopcount {
break
}
}
//println("shrink", self.curgopcount, self.maxgopcount, self.buf.Head, self.buf.Tail, "count", self.buf.Count, "size", self.buf.Size)
self.cond.Broadcast()
self.lock.Unlock()
return
}
type QueueCursor struct {
que *Queue
pos BufPos
gotpos bool
init func(buf *Buf, videoidx int) BufPos
}
func (self *Queue) newCursor() *QueueCursor {
return &QueueCursor{
que: self,
}
}
// Create cursor position at latest packet.
func (self *Queue) Latest() *QueueCursor {
cursor := self.newCursor()
cursor.init = func(buf *Buf, videoidx int) BufPos {
return buf.Tail
}
return cursor
}
// Create cursor position at oldest buffered packet.
func (self *Queue) Oldest() *QueueCursor {
cursor := self.newCursor()
cursor.init = func(buf *Buf, videoidx int) BufPos {
return buf.Head
}
return cursor
}
// Create cursor position at specific time in buffered packets.
func (self *Queue) DelayedTime(dur int64) *QueueCursor {
cursor := self.newCursor()
cursor.init = func(buf *Buf, videoidx int) BufPos {
i := buf.Tail - 1
if buf.IsValidPos(i) {
end := buf.Get(i)
for buf.IsValidPos(i) {
if end.Time-buf.Get(i).Time > dur {
break
}
i--
}
}
return i
}
return cursor
}
// Create cursor position at specific delayed GOP count in buffered packets.
func (self *Queue) DelayedGopCount(n int) *QueueCursor {
cursor := self.newCursor()
cursor.init = func(buf *Buf, videoidx int) BufPos {
i := buf.Tail - 1
if videoidx != -1 {
for gop := 0; buf.IsValidPos(i) && gop < n; i-- {
pkt := buf.Get(i)
if pkt.Idx == int8(self.videoidx) && pkt.IsKeyFrame {
gop++
}
}
}
return i
}
return cursor
}
func (self *QueueCursor) Streams() (streams []Stream, err error) {
self.que.cond.L.Lock()
for self.que.streams == nil && !self.que.closed {
self.que.cond.Wait()
}
if self.que.streams != nil {
streams = self.que.streams
} else {
err = io.EOF
}
self.que.cond.L.Unlock()
return
}
// ReadPacket will not consume packets in Queue, it's just a cursor.
func (self *QueueCursor) ReadPacket() (pkt Packet, err error) {
self.que.cond.L.Lock()
buf := self.que.buf
if !self.gotpos {
self.pos = self.init(buf, self.que.videoidx)
self.gotpos = true
}
for {
if self.pos.LT(buf.Head) {
self.pos = buf.Head
} else if self.pos.GT(buf.Tail) {
self.pos = buf.Tail
}
if buf.IsValidPos(self.pos) {
pkt = buf.Get(self.pos)
self.pos++
break
}
if self.que.closed {
err = io.EOF
break
}
self.que.cond.Wait()
}
self.que.cond.L.Unlock()
return
}
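`QueueCursor.ReadPacket` parks readers on a `sync.Cond` built from the queue's `RLocker` and relies on the writer's `Broadcast` to wake them. A stripped-down, self-contained sketch of that wait/broadcast pattern (hypothetical `miniQueue` carrying ints instead of packets):

```go
package main

import (
	"fmt"
	"sync"
)

// miniQueue mimics the Queue/QueueCursor relationship: readers sleep on
// a condition variable bound to the read lock until the writer broadcasts.
type miniQueue struct {
	mu     sync.RWMutex
	cond   *sync.Cond
	items  []int
	closed bool
}

func newMiniQueue() *miniQueue {
	q := &miniQueue{}
	q.cond = sync.NewCond(q.mu.RLocker())
	return q
}

func (q *miniQueue) push(v int) {
	q.mu.Lock()
	q.items = append(q.items, v)
	q.cond.Broadcast() // wake every waiting reader, like WritePacket
	q.mu.Unlock()
}

func (q *miniQueue) close() {
	q.mu.Lock()
	q.closed = true
	q.cond.Broadcast() // wake readers so they can observe closed
	q.mu.Unlock()
}

// read blocks until the item at pos exists; ok is false once the
// queue is closed and pos is past the end (the io.EOF case above).
func (q *miniQueue) read(pos int) (v int, ok bool) {
	q.cond.L.Lock()
	defer q.cond.L.Unlock()
	for pos >= len(q.items) && !q.closed {
		q.cond.Wait()
	}
	if pos < len(q.items) {
		return q.items[pos], true
	}
	return 0, false
}

func main() {
	q := newMiniQueue()
	done := make(chan bool)
	go func() {
		v, _ := q.read(0) // blocks until push
		fmt.Println("got", v)
		done <- true
	}()
	q.push(42)
	<-done
	q.close()
	_, ok := q.read(1)
	fmt.Println("ok:", ok)
}
```

As in the real queue, reads never consume: each cursor keeps its own position, and `Close` plus `Broadcast` is what turns a blocked read into an EOF.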

View File

@@ -0,0 +1,54 @@
package packets
type Stream struct {
// The ID of the stream.
Index int `json:"index" bson:"index"`
// The name of the stream.
Name string
// The URL of the stream.
URL string
// Is the stream a video stream.
IsVideo bool
// Is the stream an audio stream.
IsAudio bool
// The width of the stream.
Width int
// The height of the stream.
Height int
// Num is the numerator of the framerate.
Num int
// Denum is the denominator of the framerate.
Denum int
// FPS is the framerate of the stream.
FPS float64
// For H264, this is the SPS.
SPS []byte
// For H264, this is the PPS.
PPS []byte
// For H265, this is the VPS.
VPS []byte
// IsBackChannel is true if this stream is a back channel.
IsBackChannel bool
// SampleRate is the sample rate of the audio stream.
SampleRate int
// Channels is the number of audio channels.
Channels int
// GopSize is the size of the GOP (Group of Pictures).
GopSize int
}
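Per the field comments above, `FPS` is the framerate derived from the rational pair `Num`/`Denum`. A minimal sketch of that relation (the `fps` helper is illustrative, not a function from the repo):

```go
package main

import "fmt"

// fps derives the Stream's FPS field from its rational framerate
// (Num over Denum), guarding against a zero denominator.
func fps(num, denum int) float64 {
	if denum == 0 {
		return 0
	}
	return float64(num) / float64(denum)
}

func main() {
	fmt.Println(fps(30000, 1001)) // NTSC: ~29.97 fps
}
```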


@@ -0,0 +1,60 @@
package packets
import (
"time"
)
/*
pop push
seg seg seg
|--------| |---------| |---|
20ms 40ms 5ms
----------------- time -------------------->
headtm tailtm
*/
type tlSeg struct {
tm, dur time.Duration
}
type Timeline struct {
segs []tlSeg
headtm time.Duration
}
func (self *Timeline) Push(tm time.Duration, dur time.Duration) {
if len(self.segs) > 0 {
tail := self.segs[len(self.segs)-1]
diff := tm - (tail.tm + tail.dur)
if diff < 0 {
tm -= diff
}
}
self.segs = append(self.segs, tlSeg{tm, dur})
}
func (self *Timeline) Pop(dur time.Duration) (tm time.Duration) {
if len(self.segs) == 0 {
return self.headtm
}
tm = self.segs[0].tm
for dur > 0 && len(self.segs) > 0 {
seg := &self.segs[0]
sub := dur
if seg.dur < sub {
sub = seg.dur
}
seg.dur -= sub
dur -= sub
seg.tm += sub
self.headtm += sub
if seg.dur == 0 {
copy(self.segs[0:], self.segs[1:])
self.segs = self.segs[:len(self.segs)-1]
}
}
return
}


@@ -1,252 +0,0 @@
package http
import (
"github.com/gin-gonic/gin"
"github.com/kerberos-io/agent/machinery/src/models"
"github.com/kerberos-io/agent/machinery/src/onvif"
)
// Login godoc
// @Router /api/login [post]
// @ID login
// @Tags authentication
// @Summary Get Authorization token.
// @Description Get Authorization token.
// @Param credentials body models.Authentication true "Credentials"
// @Success 200 {object} models.Authorization
func Login() {}
// LoginToOnvif godoc
// @Router /api/camera/onvif/login [post]
// @ID camera-onvif-login
// @Tags camera
// @Param config body models.OnvifCredentials true "OnvifCredentials"
// @Summary Try to login into ONVIF supported camera.
// @Description Try to login into ONVIF supported camera.
// @Success 200 {object} models.APIResponse
func LoginToOnvif(c *gin.Context) {
var onvifCredentials models.OnvifCredentials
err := c.BindJSON(&onvifCredentials)
if err == nil && onvifCredentials.ONVIFXAddr != "" {
configuration := &models.Configuration{
Config: models.Config{
Capture: models.Capture{
IPCamera: models.IPCamera{
ONVIFXAddr: onvifCredentials.ONVIFXAddr,
ONVIFUsername: onvifCredentials.ONVIFUsername,
ONVIFPassword: onvifCredentials.ONVIFPassword,
},
},
},
}
cameraConfiguration := configuration.Config.Capture.IPCamera
device, err := onvif.ConnectToOnvifDevice(&cameraConfiguration)
if err == nil {
c.JSON(200, gin.H{
"device": device,
})
} else {
c.JSON(400, gin.H{
"data": "Something went wrong: " + err.Error(),
})
}
} else {
c.JSON(400, gin.H{
"data": "Something went wrong: " + err.Error(),
})
}
}
// GetOnvifCapabilities godoc
// @Router /api/camera/onvif/capabilities [post]
// @ID camera-onvif-capabilities
// @Tags camera
// @Param config body models.OnvifCredentials true "OnvifCredentials"
// @Summary Will return the ONVIF capabilities for the specific camera.
// @Description Will return the ONVIF capabilities for the specific camera.
// @Success 200 {object} models.APIResponse
func GetOnvifCapabilities(c *gin.Context) {
var onvifCredentials models.OnvifCredentials
err := c.BindJSON(&onvifCredentials)
if err == nil && onvifCredentials.ONVIFXAddr != "" {
configuration := &models.Configuration{
Config: models.Config{
Capture: models.Capture{
IPCamera: models.IPCamera{
ONVIFXAddr: onvifCredentials.ONVIFXAddr,
ONVIFUsername: onvifCredentials.ONVIFUsername,
ONVIFPassword: onvifCredentials.ONVIFPassword,
},
},
},
}
cameraConfiguration := configuration.Config.Capture.IPCamera
device, err := onvif.ConnectToOnvifDevice(&cameraConfiguration)
if err == nil {
c.JSON(200, gin.H{
"capabilities": onvif.GetCapabilitiesFromDevice(device),
})
} else {
c.JSON(400, gin.H{
"data": "Something went wrong: " + err.Error(),
})
}
} else {
c.JSON(400, gin.H{
"data": "Something went wrong: " + err.Error(),
})
}
}
// DoOnvifPanTilt godoc
// @Router /api/camera/onvif/pantilt [post]
// @ID camera-onvif-pantilt
// @Tags camera
// @Param panTilt body models.OnvifPanTilt true "OnvifPanTilt"
// @Summary Panning or/and tilting the camera.
// @Description Panning or/and tilting the camera using a direction (x,y).
// @Success 200 {object} models.APIResponse
func DoOnvifPanTilt(c *gin.Context) {
var onvifPanTilt models.OnvifPanTilt
err := c.BindJSON(&onvifPanTilt)
if err == nil && onvifPanTilt.OnvifCredentials.ONVIFXAddr != "" {
configuration := &models.Configuration{
Config: models.Config{
Capture: models.Capture{
IPCamera: models.IPCamera{
ONVIFXAddr: onvifPanTilt.OnvifCredentials.ONVIFXAddr,
ONVIFUsername: onvifPanTilt.OnvifCredentials.ONVIFUsername,
ONVIFPassword: onvifPanTilt.OnvifCredentials.ONVIFPassword,
},
},
},
}
cameraConfiguration := configuration.Config.Capture.IPCamera
device, err := onvif.ConnectToOnvifDevice(&cameraConfiguration)
if err == nil {
// Get token from the first profile
token, err := onvif.GetTokenFromProfile(device, 0)
if err == nil {
// Get the configurations from the device
ptzConfigurations, err := onvif.GetPTZConfigurationsFromDevice(device)
if err == nil {
pan := onvifPanTilt.Pan
tilt := onvifPanTilt.Tilt
err := onvif.ContinuousPanTilt(device, ptzConfigurations, token, pan, tilt)
if err == nil {
c.JSON(200, models.APIResponse{
Message: "Successfully pan/tilted the camera",
})
} else {
c.JSON(400, models.APIResponse{
Message: "Something went wrong: " + err.Error(),
})
}
} else {
c.JSON(400, models.APIResponse{
Message: "Something went wrong: " + err.Error(),
})
}
} else {
c.JSON(400, models.APIResponse{
Message: "Something went wrong: " + err.Error(),
})
}
} else {
c.JSON(400, models.APIResponse{
Message: "Something went wrong: " + err.Error(),
})
}
} else {
c.JSON(400, models.APIResponse{
Message: "Something went wrong: " + err.Error(),
})
}
}
// DoOnvifZoom godoc
// @Router /api/camera/onvif/zoom [post]
// @ID camera-onvif-zoom
// @Tags camera
// @Param zoom body models.OnvifZoom true "OnvifZoom"
// @Summary Zooming in or out the camera.
// @Description Zooming in or out the camera.
// @Success 200 {object} models.APIResponse
func DoOnvifZoom(c *gin.Context) {
var onvifZoom models.OnvifZoom
err := c.BindJSON(&onvifZoom)
if err == nil && onvifZoom.OnvifCredentials.ONVIFXAddr != "" {
configuration := &models.Configuration{
Config: models.Config{
Capture: models.Capture{
IPCamera: models.IPCamera{
ONVIFXAddr: onvifZoom.OnvifCredentials.ONVIFXAddr,
ONVIFUsername: onvifZoom.OnvifCredentials.ONVIFUsername,
ONVIFPassword: onvifZoom.OnvifCredentials.ONVIFPassword,
},
},
},
}
cameraConfiguration := configuration.Config.Capture.IPCamera
device, err := onvif.ConnectToOnvifDevice(&cameraConfiguration)
if err == nil {
// Get token from the first profile
token, err := onvif.GetTokenFromProfile(device, 0)
if err == nil {
// Get the PTZ configurations from the device
ptzConfigurations, err := onvif.GetPTZConfigurationsFromDevice(device)
if err == nil {
zoom := onvifZoom.Zoom
err := onvif.ContinuousZoom(device, ptzConfigurations, token, zoom)
if err == nil {
c.JSON(200, models.APIResponse{
Message: "Successfully zoomed the camera",
})
} else {
c.JSON(400, models.APIResponse{
Message: "Something went wrong: " + err.Error(),
})
}
} else {
c.JSON(400, models.APIResponse{
Message: "Something went wrong: " + err.Error(),
})
}
} else {
c.JSON(400, models.APIResponse{
Message: "Something went wrong: " + err.Error(),
})
}
} else {
c.JSON(400, models.APIResponse{
Message: "Something went wrong: " + err.Error(),
})
}
} else {
c.JSON(400, models.APIResponse{
Message: "Something went wrong: " + err.Error(),
})
}
}


@@ -1,240 +0,0 @@
package http
import (
"image"
"time"
jwt "github.com/appleboy/gin-jwt/v2"
"github.com/gin-gonic/gin"
"github.com/kerberos-io/agent/machinery/src/capture"
"github.com/kerberos-io/agent/machinery/src/onvif"
"github.com/kerberos-io/agent/machinery/src/routers/websocket"
"github.com/kerberos-io/agent/machinery/src/cloud"
"github.com/kerberos-io/agent/machinery/src/components"
"github.com/kerberos-io/agent/machinery/src/log"
"github.com/kerberos-io/agent/machinery/src/models"
"github.com/kerberos-io/agent/machinery/src/utils"
)
func AddRoutes(r *gin.Engine, authMiddleware *jwt.GinJWTMiddleware, configDirectory string, configuration *models.Configuration, communication *models.Communication) *gin.RouterGroup {
r.GET("/ws", func(c *gin.Context) {
websocket.WebsocketHandler(c, communication)
})
// This is legacy and should be removed in the future! Now everything
// lives under the /api prefix.
r.GET("/config", func(c *gin.Context) {
c.JSON(200, gin.H{
"config": configuration.Config,
"custom": configuration.CustomConfig,
"global": configuration.GlobalConfig,
"snapshot": communication.Image,
})
})
// This is legacy and should be removed in the future! Now everything
// lives under the /api prefix.
r.POST("/config", func(c *gin.Context) {
var config models.Config
err := c.BindJSON(&config)
if err == nil {
err := components.SaveConfig(configDirectory, config, configuration, communication)
if err == nil {
c.JSON(200, gin.H{
"data": "☄ Reconfiguring",
})
} else {
c.JSON(400, gin.H{
"data": "Something went wrong: " + err.Error(),
})
}
} else {
c.JSON(400, gin.H{
"data": "Something went wrong: " + err.Error(),
})
}
})
api := r.Group("/api")
{
api.POST("/login", authMiddleware.LoginHandler)
api.GET("/dashboard", func(c *gin.Context) {
// Check if camera is online.
cameraIsOnline := communication.CameraConnected
// If an agent is properly setup with Kerberos Hub, we will send
// a ping to Kerberos Hub every 15 seconds. On receiving a positive response
// it will update the CloudTimestamp value.
cloudIsOnline := false
if communication.CloudTimestamp != nil && communication.CloudTimestamp.Load() != nil {
timestamp := communication.CloudTimestamp.Load().(int64)
if timestamp > 0 {
cloudIsOnline = true
}
}
// The total number of recordings stored in the directory.
recordingDirectory := configDirectory + "/data/recordings"
numberOfRecordings := utils.NumberOfMP4sInDirectory(recordingDirectory)
// All days stored in this agent.
days := []string{}
latestEvents := []models.Media{}
files, err := utils.ReadDirectory(recordingDirectory)
if err == nil {
events := utils.GetSortedDirectory(files)
// Get All days
days = utils.GetDays(events, recordingDirectory, configuration)
// Get all latest events
var eventFilter models.EventFilter
eventFilter.NumberOfElements = 5
latestEvents = utils.GetMediaFormatted(events, recordingDirectory, configuration, eventFilter) // will get 5 latest recordings.
}
c.JSON(200, gin.H{
"offlineMode": configuration.Config.Offline,
"cameraOnline": cameraIsOnline,
"cloudOnline": cloudIsOnline,
"numberOfRecordings": numberOfRecordings,
"days": days,
"latestEvents": latestEvents,
})
})
api.POST("/latest-events", func(c *gin.Context) {
var eventFilter models.EventFilter
err := c.BindJSON(&eventFilter)
if err == nil {
// Default to 10 if no limit is set.
if eventFilter.NumberOfElements == 0 {
eventFilter.NumberOfElements = 10
}
recordingDirectory := configDirectory + "/data/recordings"
files, err := utils.ReadDirectory(recordingDirectory)
if err == nil {
events := utils.GetSortedDirectory(files)
// We will get all recordings from the directory (as defined by the filter).
fileObjects := utils.GetMediaFormatted(events, recordingDirectory, configuration, eventFilter)
c.JSON(200, gin.H{
"events": fileObjects,
})
} else {
c.JSON(400, gin.H{
"data": "Something went wrong: " + err.Error(),
})
}
} else {
c.JSON(400, gin.H{
"data": "Something went wrong: " + err.Error(),
})
}
})
api.GET("/days", func(c *gin.Context) {
recordingDirectory := configDirectory + "/data/recordings"
files, err := utils.ReadDirectory(recordingDirectory)
if err == nil {
events := utils.GetSortedDirectory(files)
days := utils.GetDays(events, recordingDirectory, configuration)
c.JSON(200, gin.H{
"events": days,
})
} else {
c.JSON(400, gin.H{
"data": "Something went wrong: " + err.Error(),
})
}
})
api.GET("/config", func(c *gin.Context) {
c.JSON(200, gin.H{
"config": configuration.Config,
"custom": configuration.CustomConfig,
"global": configuration.GlobalConfig,
"snapshot": communication.Image,
})
})
api.POST("/config", func(c *gin.Context) {
var config models.Config
err := c.BindJSON(&config)
if err == nil {
err := components.SaveConfig(configDirectory, config, configuration, communication)
if err == nil {
c.JSON(200, gin.H{
"data": "☄ Reconfiguring",
})
} else {
c.JSON(200, gin.H{
"data": "☄ Reconfiguring",
})
}
} else {
c.JSON(400, gin.H{
"data": "Something went wrong: " + err.Error(),
})
}
})
api.GET("/restart", func(c *gin.Context) {
communication.HandleBootstrap <- "restart"
c.JSON(200, gin.H{
"restarted": true,
})
})
api.GET("/stop", func(c *gin.Context) {
communication.HandleBootstrap <- "stop"
c.JSON(200, gin.H{
"stopped": true,
})
})
api.POST("/onvif/verify", func(c *gin.Context) {
onvif.VerifyOnvifConnection(c)
})
api.POST("/hub/verify", func(c *gin.Context) {
cloud.VerifyHub(c)
})
api.POST("/persistence/verify", func(c *gin.Context) {
cloud.VerifyPersistence(c, configDirectory)
})
// Streaming handler
api.GET("/stream", func(c *gin.Context) {
// TODO add a token validation!
imageFunction := func() (image.Image, error) {
// We will only send an image once per second.
time.Sleep(time.Second * 1)
log.Log.Info("AddRoutes (/stream): reading from MJPEG stream")
img, err := components.GetImageFromFilePath(configDirectory)
return img, err
}
h := components.StartMotionJPEG(imageFunction, 80)
h.ServeHTTP(c.Writer, c.Request)
})
// Camera-specific methods. These don't require any authorization;
// they are available to anyone, but the agent must be able to reach
// the camera.
api.POST("/camera/onvif/login", LoginToOnvif)
api.POST("/camera/onvif/capabilities", GetOnvifCapabilities)
api.POST("/camera/onvif/pantilt", DoOnvifPanTilt)
api.POST("/camera/onvif/zoom", DoOnvifZoom)
api.POST("/camera/verify/:streamType", capture.VerifyCamera)
// Secured endpoints.
api.Use(authMiddleware.MiddlewareFunc())
{
}
}
return api
}


@@ -1,7 +1,9 @@
package http
import (
"io"
"os"
"strconv"
jwt "github.com/appleboy/gin-jwt/v2"
"github.com/gin-contrib/pprof"
@@ -12,6 +14,8 @@ import (
"log"
_ "github.com/kerberos-io/agent/machinery/docs"
"github.com/kerberos-io/agent/machinery/src/capture"
"github.com/kerberos-io/agent/machinery/src/encryption"
"github.com/kerberos-io/agent/machinery/src/models"
swaggerFiles "github.com/swaggo/files"
ginSwagger "github.com/swaggo/gin-swagger"
@@ -35,12 +39,15 @@ import (
// @in header
// @name Authorization
func StartServer(configDirectory string, configuration *models.Configuration, communication *models.Communication) {
func StartServer(configDirectory string, configuration *models.Configuration, communication *models.Communication, captureDevice *capture.Capture) {
// Set release mode
gin.SetMode(gin.ReleaseMode)
// Initialize REST API
r := gin.Default()
// Profileerggerg
// Profiler
pprof.Register(r)
// Setup CORS
@@ -57,7 +64,7 @@ func StartServer(configDirectory string, configuration *models.Configuration, co
}
// Add all routes
AddRoutes(r, authMiddleware, configDirectory, configuration, communication)
AddRoutes(r, authMiddleware, configDirectory, configuration, communication, captureDevice)
// Update environment variables
environmentVariables := configDirectory + "/www/env.js"
@@ -77,7 +84,7 @@ func StartServer(configDirectory string, configuration *models.Configuration, co
r.Use(static.Serve("/settings", static.LocalFile(configDirectory+"/www", true)))
r.Use(static.Serve("/login", static.LocalFile(configDirectory+"/www", true)))
r.Handle("GET", "/file/*filepath", func(c *gin.Context) {
Files(c, configDirectory)
Files(c, configDirectory, configuration)
})
// Run the api on port
@@ -87,8 +94,51 @@ func StartServer(configDirectory string, configuration *models.Configuration, co
}
}
func Files(c *gin.Context, configDirectory string) {
c.Header("Access-Control-Allow-Origin", "*")
c.Header("Content-Type", "video/mp4")
c.File(configDirectory + "/data/recordings" + c.Param("filepath"))
func Files(c *gin.Context, configDirectory string, configuration *models.Configuration) {
// Get File
filePath := configDirectory + "/data/recordings" + c.Param("filepath")
_, err := os.Open(filePath)
if err != nil {
c.JSON(404, gin.H{"error": "File not found"})
return
}
contents, err := os.ReadFile(filePath)
if err == nil {
// Get symmetric key
symmetricKey := configuration.Config.Encryption.SymmetricKey
encryptedRecordings := configuration.Config.Encryption.Recordings
// Decrypt file
if encryptedRecordings == "true" && symmetricKey != "" {
// Read file
if err != nil {
c.JSON(404, gin.H{"error": "File not found"})
return
}
// Decrypt file
contents, err = encryption.AesDecrypt(contents, symmetricKey)
if err != nil {
c.JSON(404, gin.H{"error": "File not found"})
return
}
}
// Get fileSize from contents
fileSize := len(contents)
// Send file to gin
c.Header("Access-Control-Allow-Origin", "*")
c.Header("Content-Disposition", "attachment; filename="+filePath)
c.Header("Content-Type", "video/mp4")
c.Header("Content-Length", strconv.Itoa(fileSize))
// Send contents to gin
io.WriteString(c.Writer, string(contents))
} else {
c.JSON(404, gin.H{"error": "File not found"})
return
}
}
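The new `Files` handler above reads the whole recording, optionally decrypts it via `encryption.AesDecrypt` with the configured symmetric key, and serves the plaintext with an explicit `Content-Length`. The agent's exact cipher construction isn't shown in this diff; as an illustrative stand-in only (an assumption, not the repo's scheme), here is a symmetric AES-GCM round trip in Go:

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
	"io"
)

// aesGCMEncrypt/aesGCMDecrypt are hypothetical stand-ins for the agent's
// encryption package; the repo's actual mode and key handling may differ.
func aesGCMEncrypt(plain, key []byte) ([]byte, error) {
	block, err := aes.NewCipher(key) // key must be 16/24/32 bytes
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
		return nil, err
	}
	// Prepend the nonce so the decryptor can recover it.
	return gcm.Seal(nonce, nonce, plain, nil), nil
}

func aesGCMDecrypt(enc, key []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	if len(enc) < gcm.NonceSize() {
		return nil, fmt.Errorf("ciphertext too short")
	}
	nonce, ct := enc[:gcm.NonceSize()], enc[gcm.NonceSize():]
	return gcm.Open(nil, nonce, ct, nil)
}

func main() {
	key := []byte("0123456789abcdef0123456789abcdef") // 32 bytes -> AES-256
	enc, _ := aesGCMEncrypt([]byte("mp4 bytes"), key)
	dec, _ := aesGCMDecrypt(enc, key)
	fmt.Println(string(dec))
}
```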


@@ -0,0 +1,590 @@
package http
import (
"github.com/gin-gonic/gin"
"github.com/kerberos-io/agent/machinery/src/log"
"github.com/kerberos-io/agent/machinery/src/models"
"github.com/kerberos-io/agent/machinery/src/onvif"
)
// Login godoc
// @Router /api/login [post]
// @ID login
// @Tags authentication
// @Summary Get Authorization token.
// @Description Get Authorization token.
// @Param credentials body models.Authentication true "Credentials"
// @Success 200 {object} models.Authorization
func Login() {}
// LoginToOnvif godoc
// @Router /api/camera/onvif/login [post]
// @ID camera-onvif-login
// @Tags onvif
// @Param config body models.OnvifCredentials true "OnvifCredentials"
// @Summary Try to login into ONVIF supported camera.
// @Description Try to login into ONVIF supported camera.
// @Success 200 {object} models.APIResponse
func LoginToOnvif(c *gin.Context) {
var onvifCredentials models.OnvifCredentials
err := c.BindJSON(&onvifCredentials)
if err == nil && onvifCredentials.ONVIFXAddr != "" {
configuration := &models.Configuration{
Config: models.Config{
Capture: models.Capture{
IPCamera: models.IPCamera{
ONVIFXAddr: onvifCredentials.ONVIFXAddr,
ONVIFUsername: onvifCredentials.ONVIFUsername,
ONVIFPassword: onvifCredentials.ONVIFPassword,
},
},
},
}
cameraConfiguration := configuration.Config.Capture.IPCamera
device, capabilities, err := onvif.ConnectToOnvifDevice(&cameraConfiguration)
if err == nil {
// Get token from the first profile
token, err := onvif.GetTokenFromProfile(device, 0)
if err == nil {
c.JSON(200, gin.H{
"device": device,
"capabilities": capabilities,
"token": token,
})
} else {
c.JSON(400, gin.H{
"data": "Something went wrong: " + err.Error(),
})
}
} else {
c.JSON(400, gin.H{
"data": "Something went wrong: " + err.Error(),
})
}
} else {
c.JSON(400, gin.H{
"data": "Something went wrong: " + err.Error(),
})
}
}
// GetOnvifCapabilities godoc
// @Router /api/camera/onvif/capabilities [post]
// @ID camera-onvif-capabilities
// @Tags onvif
// @Param config body models.OnvifCredentials true "OnvifCredentials"
// @Summary Will return the ONVIF capabilities for the specific camera.
// @Description Will return the ONVIF capabilities for the specific camera.
// @Success 200 {object} models.APIResponse
func GetOnvifCapabilities(c *gin.Context) {
var onvifCredentials models.OnvifCredentials
err := c.BindJSON(&onvifCredentials)
if err == nil && onvifCredentials.ONVIFXAddr != "" {
configuration := &models.Configuration{
Config: models.Config{
Capture: models.Capture{
IPCamera: models.IPCamera{
ONVIFXAddr: onvifCredentials.ONVIFXAddr,
ONVIFUsername: onvifCredentials.ONVIFUsername,
ONVIFPassword: onvifCredentials.ONVIFPassword,
},
},
},
}
cameraConfiguration := configuration.Config.Capture.IPCamera
_, capabilities, err := onvif.ConnectToOnvifDevice(&cameraConfiguration)
if err == nil {
c.JSON(200, gin.H{
"capabilities": capabilities,
})
} else {
c.JSON(400, gin.H{
"data": "Something went wrong: " + err.Error(),
})
}
} else {
c.JSON(400, gin.H{
"data": "Something went wrong: " + err.Error(),
})
}
}
// DoOnvifPanTilt godoc
// @Router /api/camera/onvif/pantilt [post]
// @ID camera-onvif-pantilt
// @Tags onvif
// @Param panTilt body models.OnvifPanTilt true "OnvifPanTilt"
// @Summary Panning or/and tilting the camera.
// @Description Panning or/and tilting the camera using a direction (x,y).
// @Success 200 {object} models.APIResponse
func DoOnvifPanTilt(c *gin.Context) {
var onvifPanTilt models.OnvifPanTilt
err := c.BindJSON(&onvifPanTilt)
if err == nil && onvifPanTilt.OnvifCredentials.ONVIFXAddr != "" {
configuration := &models.Configuration{
Config: models.Config{
Capture: models.Capture{
IPCamera: models.IPCamera{
ONVIFXAddr: onvifPanTilt.OnvifCredentials.ONVIFXAddr,
ONVIFUsername: onvifPanTilt.OnvifCredentials.ONVIFUsername,
ONVIFPassword: onvifPanTilt.OnvifCredentials.ONVIFPassword,
},
},
},
}
cameraConfiguration := configuration.Config.Capture.IPCamera
device, _, err := onvif.ConnectToOnvifDevice(&cameraConfiguration)
if err == nil {
// Get token from the first profile
token, err := onvif.GetTokenFromProfile(device, 0)
if err == nil {
// Get the configurations from the device
ptzConfigurations, err := onvif.GetPTZConfigurationsFromDevice(device)
if err == nil {
pan := onvifPanTilt.Pan
tilt := onvifPanTilt.Tilt
err := onvif.ContinuousPanTilt(device, ptzConfigurations, token, pan, tilt)
if err == nil {
c.JSON(200, models.APIResponse{
Message: "Successfully pan/tilted the camera",
})
} else {
c.JSON(400, models.APIResponse{
Message: "Something went wrong: " + err.Error(),
})
}
} else {
c.JSON(400, models.APIResponse{
Message: "Something went wrong: " + err.Error(),
})
}
} else {
c.JSON(400, models.APIResponse{
Message: "Something went wrong: " + err.Error(),
})
}
} else {
c.JSON(400, models.APIResponse{
Message: "Something went wrong: " + err.Error(),
})
}
} else {
c.JSON(400, models.APIResponse{
Message: "Something went wrong: " + err.Error(),
})
}
}
// DoOnvifZoom godoc
// @Router /api/camera/onvif/zoom [post]
// @ID camera-onvif-zoom
// @Tags onvif
// @Param zoom body models.OnvifZoom true "OnvifZoom"
// @Summary Zooming in or out the camera.
// @Description Zooming in or out the camera.
// @Success 200 {object} models.APIResponse
func DoOnvifZoom(c *gin.Context) {
var onvifZoom models.OnvifZoom
err := c.BindJSON(&onvifZoom)
if err == nil && onvifZoom.OnvifCredentials.ONVIFXAddr != "" {
configuration := &models.Configuration{
Config: models.Config{
Capture: models.Capture{
IPCamera: models.IPCamera{
ONVIFXAddr: onvifZoom.OnvifCredentials.ONVIFXAddr,
ONVIFUsername: onvifZoom.OnvifCredentials.ONVIFUsername,
ONVIFPassword: onvifZoom.OnvifCredentials.ONVIFPassword,
},
},
},
}
cameraConfiguration := configuration.Config.Capture.IPCamera
device, _, err := onvif.ConnectToOnvifDevice(&cameraConfiguration)
if err == nil {
// Get token from the first profile
token, err := onvif.GetTokenFromProfile(device, 0)
if err == nil {
// Get the PTZ configurations from the device
ptzConfigurations, err := onvif.GetPTZConfigurationsFromDevice(device)
if err == nil {
zoom := onvifZoom.Zoom
err := onvif.ContinuousZoom(device, ptzConfigurations, token, zoom)
if err == nil {
c.JSON(200, models.APIResponse{
Message: "Successfully zoomed the camera",
})
} else {
c.JSON(400, models.APIResponse{
Message: "Something went wrong: " + err.Error(),
})
}
} else {
c.JSON(400, models.APIResponse{
Message: "Something went wrong: " + err.Error(),
})
}
} else {
c.JSON(400, models.APIResponse{
Message: "Something went wrong: " + err.Error(),
})
}
} else {
c.JSON(400, models.APIResponse{
Message: "Something went wrong: " + err.Error(),
})
}
} else {
c.JSON(400, models.APIResponse{
Message: "Something went wrong: " + err.Error(),
})
}
}
// GetOnvifPresets godoc
// @Router /api/camera/onvif/presets [post]
// @ID camera-onvif-presets
// @Tags onvif
// @Param config body models.OnvifCredentials true "OnvifCredentials"
// @Summary Will return the ONVIF presets for the specific camera.
// @Description Will return the ONVIF presets for the specific camera.
// @Success 200 {object} models.APIResponse
func GetOnvifPresets(c *gin.Context) {
var onvifCredentials models.OnvifCredentials
err := c.BindJSON(&onvifCredentials)
if err == nil && onvifCredentials.ONVIFXAddr != "" {
configuration := &models.Configuration{
Config: models.Config{
Capture: models.Capture{
IPCamera: models.IPCamera{
ONVIFXAddr: onvifCredentials.ONVIFXAddr,
ONVIFUsername: onvifCredentials.ONVIFUsername,
ONVIFPassword: onvifCredentials.ONVIFPassword,
},
},
},
}
cameraConfiguration := configuration.Config.Capture.IPCamera
device, _, err := onvif.ConnectToOnvifDevice(&cameraConfiguration)
if err == nil {
presets, err := onvif.GetPresetsFromDevice(device)
if err == nil {
c.JSON(200, gin.H{
"presets": presets,
})
} else {
c.JSON(400, gin.H{
"data": "Something went wrong: " + err.Error(),
})
}
} else {
c.JSON(400, gin.H{
"data": "Something went wrong: " + err.Error(),
})
}
} else {
c.JSON(400, gin.H{
"data": "Something went wrong: " + err.Error(),
})
}
}
// GoToOnvifPreset godoc
// @Router /api/camera/onvif/gotopreset [post]
// @ID camera-onvif-gotopreset
// @Tags onvif
// @Param config body models.OnvifPreset true "OnvifPreset"
// @Summary Will activate the desired ONVIF preset.
// @Description Will activate the desired ONVIF preset.
// @Success 200 {object} models.APIResponse
func GoToOnvifPreset(c *gin.Context) {
var onvifPreset models.OnvifPreset
err := c.BindJSON(&onvifPreset)
if err == nil && onvifPreset.OnvifCredentials.ONVIFXAddr != "" {
configuration := &models.Configuration{
Config: models.Config{
Capture: models.Capture{
IPCamera: models.IPCamera{
ONVIFXAddr: onvifPreset.OnvifCredentials.ONVIFXAddr,
ONVIFUsername: onvifPreset.OnvifCredentials.ONVIFUsername,
ONVIFPassword: onvifPreset.OnvifCredentials.ONVIFPassword,
},
},
},
}
cameraConfiguration := configuration.Config.Capture.IPCamera
device, _, err := onvif.ConnectToOnvifDevice(&cameraConfiguration)
if err == nil {
err := onvif.GoToPresetFromDevice(device, onvifPreset.Preset)
if err == nil {
c.JSON(200, gin.H{
"data": "Camera preset activated: " + onvifPreset.Preset,
})
} else {
c.JSON(400, gin.H{
"data": "Something went wrong: " + err.Error(),
})
}
} else {
c.JSON(400, gin.H{
"data": "Something went wrong: " + err.Error(),
})
}
} else {
c.JSON(400, gin.H{
"data": "Something went wrong: " + err.Error(),
})
}
}
// DoGetDigitalInputs godoc
// @Router /api/camera/onvif/inputs [post]
// @ID get-digital-inputs
// @Security Bearer
// @securityDefinitions.apikey Bearer
// @in header
// @name Authorization
// @Tags onvif
// @Param config body models.OnvifCredentials true "OnvifCredentials"
// @Summary Will get the digital inputs from the ONVIF device.
// @Description Will get the digital inputs from the ONVIF device.
// @Success 200 {object} models.APIResponse
func DoGetDigitalInputs(c *gin.Context) {
var onvifCredentials models.OnvifCredentials
err := c.BindJSON(&onvifCredentials)
if err == nil && onvifCredentials.ONVIFXAddr != "" {
configuration := &models.Configuration{
Config: models.Config{
Capture: models.Capture{
IPCamera: models.IPCamera{
ONVIFXAddr: onvifCredentials.ONVIFXAddr,
ONVIFUsername: onvifCredentials.ONVIFUsername,
ONVIFPassword: onvifCredentials.ONVIFPassword,
},
},
},
}
cameraConfiguration := configuration.Config.Capture.IPCamera
device, _, err := onvif.ConnectToOnvifDevice(&cameraConfiguration)
onvifInputs, _ := onvif.GetDigitalInputs(device)
if err == nil {
// Get the digital inputs and outputs from the device
inputOutputs, err := onvif.GetInputOutputs()
if err == nil {
if err == nil {
// Get the digital outputs from the device
var inputs []onvif.ONVIFEvents
for _, event := range inputOutputs {
if event.Type == "input" {
inputs = append(inputs, event)
}
}
// Iterate over inputs from onvif and compare
for _, input := range onvifInputs.DigitalInputs {
find := false
for _, event := range inputs {
key := string(input.Token)
if key == event.Key {
find = true
}
}
if !find {
key := string(input.Token)
inputs = append(inputs, onvif.ONVIFEvents{
Key: key,
Type: "input",
})
}
}
c.JSON(200, gin.H{
"data": inputs,
})
} else {
c.JSON(400, gin.H{
"data": "Something went wrong: " + err.Error(),
})
}
} else {
c.JSON(400, gin.H{
"data": "Something went wrong: " + err.Error(),
})
}
} else {
c.JSON(400, gin.H{
"data": "Something went wrong: " + err.Error(),
})
}
} else {
c.JSON(400, gin.H{
"data": "Something went wrong: " + err.Error(),
})
}
}
// DoGetRelayOutputs godoc
// @Router /api/camera/onvif/outputs [post]
// @ID get-relay-outputs
// @Security Bearer
// @securityDefinitions.apikey Bearer
// @in header
// @name Authorization
// @Tags onvif
// @Param config body models.OnvifCredentials true "OnvifCredentials"
// @Summary Will get the relay outputs from the ONVIF device.
// @Description Will get the relay outputs from the ONVIF device.
// @Success 200 {object} models.APIResponse
func DoGetRelayOutputs(c *gin.Context) {
var onvifCredentials models.OnvifCredentials
err := c.BindJSON(&onvifCredentials)
if err == nil && onvifCredentials.ONVIFXAddr != "" {
configuration := &models.Configuration{
Config: models.Config{
Capture: models.Capture{
IPCamera: models.IPCamera{
ONVIFXAddr: onvifCredentials.ONVIFXAddr,
ONVIFUsername: onvifCredentials.ONVIFUsername,
ONVIFPassword: onvifCredentials.ONVIFPassword,
},
},
},
}
cameraConfiguration := configuration.Config.Capture.IPCamera
_, _, err := onvif.ConnectToOnvifDevice(&cameraConfiguration)
if err == nil {
// Get the digital inputs and outputs from the device
inputOutputs, err := onvif.GetInputOutputs()
if err == nil {
if err == nil {
// Get the digital outputs from the device
var outputs []onvif.ONVIFEvents
for _, event := range inputOutputs {
if event.Type == "output" {
outputs = append(outputs, event)
}
}
c.JSON(200, gin.H{
"data": outputs,
})
} else {
c.JSON(400, gin.H{
"data": "Something went wrong: " + err.Error(),
})
}
} else {
c.JSON(400, gin.H{
"data": "Something went wrong: " + err.Error(),
})
}
} else {
c.JSON(400, gin.H{
"data": "Something went wrong: " + err.Error(),
})
}
} else {
c.JSON(400, gin.H{
"data": "Something went wrong: " + err.Error(),
})
}
}
// DoTriggerRelayOutput godoc
// @Router /api/camera/onvif/outputs/{output} [post]
// @ID trigger-relay-output
// @Security Bearer
// @securityDefinitions.apikey Bearer
// @in header
// @name Authorization
// @Tags onvif
// @Param config body models.OnvifCredentials true "OnvifCredentials"
// @Param output path string true "Output"
// @Summary Will trigger the relay output from the ONVIF device.
// @Description Will trigger the relay output from the ONVIF device.
// @Success 200 {object} models.APIResponse
func DoTriggerRelayOutput(c *gin.Context) {
var onvifCredentials models.OnvifCredentials
err := c.BindJSON(&onvifCredentials)
// Get the output from the url
output := c.Param("output")
if err == nil && onvifCredentials.ONVIFXAddr != "" && output != "" {
configuration := &models.Configuration{
Config: models.Config{
Capture: models.Capture{
IPCamera: models.IPCamera{
ONVIFXAddr: onvifCredentials.ONVIFXAddr,
ONVIFUsername: onvifCredentials.ONVIFUsername,
ONVIFPassword: onvifCredentials.ONVIFPassword,
},
},
},
}
cameraConfiguration := configuration.Config.Capture.IPCamera
device, _, err := onvif.ConnectToOnvifDevice(&cameraConfiguration)
if err == nil {
err := onvif.TriggerRelayOutput(device, output)
if err == nil {
msg := "relay output triggered: " + output
log.Log.Info("routers.http.methods.DoTriggerRelayOutput(): " + msg)
c.JSON(200, gin.H{
"data": msg,
})
} else {
msg := "something went wrong: " + err.Error()
log.Log.Error("routers.http.methods.DoTriggerRelayOutput(): " + msg)
c.JSON(400, gin.H{
"data": msg,
})
}
} else {
msg := "something went wrong: " + err.Error()
log.Log.Error("routers.http.methods.DoTriggerRelayOutput(): " + msg)
c.JSON(400, gin.H{
"data": msg,
})
}
} else {
msg := "something went wrong: " + err.Error()
log.Log.Error("routers.http.methods.DoTriggerRelayOutput(): " + msg)
c.JSON(400, gin.H{
"data": msg,
})
}
}


@@ -0,0 +1,116 @@
package http
import (
jwt "github.com/appleboy/gin-jwt/v2"
"github.com/gin-gonic/gin"
"github.com/kerberos-io/agent/machinery/src/capture"
"github.com/kerberos-io/agent/machinery/src/components"
"github.com/kerberos-io/agent/machinery/src/onvif"
"github.com/kerberos-io/agent/machinery/src/routers/websocket"
"github.com/kerberos-io/agent/machinery/src/cloud"
"github.com/kerberos-io/agent/machinery/src/models"
)
func AddRoutes(r *gin.Engine, authMiddleware *jwt.GinJWTMiddleware, configDirectory string, configuration *models.Configuration, communication *models.Communication, captureDevice *capture.Capture) *gin.RouterGroup {
r.GET("/ws", func(c *gin.Context) {
websocket.WebsocketHandler(c, configuration, communication, captureDevice)
})
// This is legacy and should be removed in the future! Now everything
// lives under the /api prefix.
r.GET("/config", func(c *gin.Context) {
components.GetConfig(c, captureDevice, configuration, communication)
})
// This is legacy and should be removed in the future! Now everything
// lives under the /api prefix.
r.POST("/config", func(c *gin.Context) {
components.UpdateConfig(c, configDirectory, configuration, communication)
})
api := r.Group("/api")
{
api.POST("/login", authMiddleware.LoginHandler)
api.GET("/dashboard", func(c *gin.Context) {
components.GetDashboard(c, configDirectory, configuration, communication)
})
api.POST("/latest-events", func(c *gin.Context) {
components.GetLatestEvents(c, configDirectory, configuration, communication)
})
api.GET("/days", func(c *gin.Context) {
components.GetDays(c, configDirectory, configuration, communication)
})
api.GET("/config", func(c *gin.Context) {
components.GetConfig(c, captureDevice, configuration, communication)
})
api.POST("/config", func(c *gin.Context) {
components.UpdateConfig(c, configDirectory, configuration, communication)
})
// Will verify the hub settings.
api.POST("/hub/verify", func(c *gin.Context) {
cloud.VerifyHub(c)
})
// Will verify the persistence settings.
api.POST("/persistence/verify", func(c *gin.Context) {
cloud.VerifyPersistence(c, configDirectory)
})
// Will verify the secondary persistence settings.
api.POST("/persistence/secondary/verify", func(c *gin.Context) {
cloud.VerifySecondaryPersistence(c, configDirectory)
})
// Camera-specific methods. These don't require any authorization:
// they are available to anyone, but require the agent to be able
// to reach the camera.
api.POST("/camera/restart", func(c *gin.Context) {
components.RestartAgent(c, communication)
})
api.POST("/camera/stop", func(c *gin.Context) {
components.StopAgent(c, communication)
})
api.POST("/camera/record", func(c *gin.Context) {
components.MakeRecording(c, communication)
})
api.GET("/camera/snapshot/jpeg", func(c *gin.Context) {
components.GetSnapshotRaw(c, captureDevice, configuration, communication)
})
api.GET("/camera/snapshot/base64", func(c *gin.Context) {
components.GetSnapshotBase64(c, captureDevice, configuration, communication)
})
// ONVIF-specific methods. These don't require any authorization.
// Will verify the current onvif settings.
api.POST("/camera/onvif/verify", onvif.VerifyOnvifConnection)
api.POST("/camera/onvif/login", LoginToOnvif)
api.POST("/camera/onvif/capabilities", GetOnvifCapabilities)
api.POST("/camera/onvif/presets", GetOnvifPresets)
api.POST("/camera/onvif/gotopreset", GoToOnvifPreset)
api.POST("/camera/onvif/pantilt", DoOnvifPanTilt)
api.POST("/camera/onvif/zoom", DoOnvifZoom)
api.POST("/camera/onvif/inputs", DoGetDigitalInputs)
api.POST("/camera/onvif/outputs", DoGetRelayOutputs)
api.POST("/camera/onvif/outputs/:output", DoTriggerRelayOutput)
api.POST("/camera/verify/:streamType", capture.VerifyCamera)
// Secured endpoints.
api.Use(authMiddleware.MiddlewareFunc())
{
}
}
return api
}


@@ -1,10 +1,11 @@
package routers
import (
"github.com/kerberos-io/agent/machinery/src/capture"
"github.com/kerberos-io/agent/machinery/src/models"
"github.com/kerberos-io/agent/machinery/src/routers/http"
)
func StartWebserver(configDirectory string, configuration *models.Configuration, communication *models.Communication) {
http.StartServer(configDirectory, configuration, communication)
func StartWebserver(configDirectory string, configuration *models.Configuration, communication *models.Communication, captureDevice *capture.Capture) {
http.StartServer(configDirectory, configuration, communication, captureDevice)
}


@@ -1,38 +1,27 @@
package mqtt
import (
"crypto/rsa"
"crypto/x509"
"encoding/base64"
"encoding/json"
"encoding/pem"
"fmt"
"io/ioutil"
"math/rand"
"strconv"
"strings"
"time"
mqtt "github.com/eclipse/paho.mqtt.golang"
configService "github.com/kerberos-io/agent/machinery/src/config"
"github.com/kerberos-io/agent/machinery/src/encryption"
"github.com/kerberos-io/agent/machinery/src/log"
"github.com/kerberos-io/agent/machinery/src/models"
"github.com/kerberos-io/agent/machinery/src/onvif"
"github.com/kerberos-io/agent/machinery/src/webrtc"
)
// The message structure which is used to send messages to
// and receive messages from the MQTT broker
type Message struct {
Mid string `json:"mid"`
Timestamp int64 `json:"timestamp"`
Encrypted bool `json:"encrypted"`
PublicKey string `json:"public_key"`
Fingerprint string `json:"fingerprint"`
Payload Payload `json:"payload"`
}
// The payload structure which is used to send messages to
// and receive messages from the MQTT broker
type Payload struct {
Action string `json:"action"`
DeviceId string `json:"device_id"`
Value map[string]interface{} `json:"value"`
}
// We'll cache the MQTT settings to know if we need to reinitialize the MQTT client connection.
// If we update the configuration but no new MQTT settings are provided, we don't need to restart it.
var PREV_MQTTURI string
@@ -54,51 +43,18 @@ func HasMQTTClientModified(configuration *models.Configuration) bool {
return false
}
func PackageMQTTMessage(msg Message) ([]byte, error) {
// We'll generate a unique id, and encrypt it using the private key.
msg.Mid = "0123456789+1"
msg.Timestamp = time.Now().Unix()
msg.Encrypted = false
msg.PublicKey = ""
msg.Fingerprint = ""
payload, err := json.Marshal(msg)
return payload, err
}
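`PackageMQTTMessage` above only stamps the envelope and marshals it to JSON. A self-contained sketch of that step, with the id and timestamp passed in so the output is reproducible (the struct is trimmed to the fields shown above):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// message mirrors part of the envelope above; the payload is kept
// as a free-form map for this sketch.
type message struct {
	Mid       string                 `json:"mid"`
	Timestamp int64                  `json:"timestamp"`
	Encrypted bool                   `json:"encrypted"`
	Payload   map[string]interface{} `json:"payload"`
}

// packageMessage stamps the envelope and marshals it, like
// PackageMQTTMessage, but takes the id and timestamp as arguments.
func packageMessage(msg message, mid string, ts int64) ([]byte, error) {
	msg.Mid = mid
	msg.Timestamp = ts
	msg.Encrypted = false
	return json.Marshal(msg)
}

func main() {
	payload, err := packageMessage(message{Payload: map[string]interface{}{"action": "record"}}, "0123456789+1", 1700000000)
	fmt.Println(string(payload), err)
}
```

The resulting bytes are what gets published to `kerberos/hub/{hubkey}`.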
// Configuring MQTT to subscribe for various bi-directional messaging
// Listen and reply (a generic method to share and retrieve information)
//
// !!! NEW METHOD TO COMMUNICATE: create only a single subscription for all communication,
// and an additional publish topic to send messages back
//
// - [SUBSCRIPTION] kerberos/agent/{hubkey} (hub -> agent)
// - [PUBLISH] kerberos/hub/{hubkey} (agent -> hub)
//
// !!! LEGACY METHODS BELOW, WE SHOULD LEVERAGE THE ABOVE METHOD!
//
// [SUBSCRIPTIONS]
//
// SD Streaming (Base64 JPEGs)
// - kerberos/{hubkey}/device/{devicekey}/request-live: use for polling of SD live streaming (as long as the user requests the stream, we'll send JPEGs over).
//
// HD Streaming (WebRTC)
// - kerberos/register: use for receiving HD live streaming requests.
// - candidate/cloud: remote ICE candidates are shared over this line.
// - kerberos/webrtc/keepalivehub/{devicekey}: use for polling of HD streaming (as long as the user requests the stream, we'll send it over).
// - kerberos/webrtc/peers/{devicekey}: we'll keep track of the number of peers (we can have more than 1 concurrent listeners).
//
// ONVIF capabilities
// - kerberos/onvif/{devicekey}: endpoint to execute ONVIF commands such as (PTZ, Zoom, IO, etc)
//
// [PUBLISH]
// Next to subscribing to various topics, we'll also publish messages to various topics, find a list of available Publish methods.
//
// - kerberos/webrtc/packets/{devicekey}: use for forwarding WebRTC (RTP Packets) over MQTT -> Complex firewall.
// - kerberos/webrtc/keepalive/{devicekey}: use for keeping alive forwarded WebRTC stream
// - {devicekey}/{sessionid}/answer: once a WebRTC request is received through (kerberos/register), we'll draft an answer and send it back to the remote WebRTC client.
// - kerberos/{hubkey}/device/{devicekey}/motion: a motion signal
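The new single-subscription scheme documented above reduces to two topic templates, one per direction; a minimal sketch:

```go
package main

import "fmt"

// agentTopic is the single subscription (hub -> agent).
func agentTopic(hubKey string) string {
	return fmt.Sprintf("kerberos/agent/%s", hubKey)
}

// hubTopic is where the agent publishes back (agent -> hub).
func hubTopic(hubKey string) string {
	return fmt.Sprintf("kerberos/hub/%s", hubKey)
}

func main() {
	fmt.Println(agentTopic("myhub")) // kerberos/agent/myhub
	fmt.Println(hubTopic("myhub"))   // kerberos/hub/myhub
}
```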
func ConfigureMQTT(configuration *models.Configuration, communication *models.Communication) mqtt.Client {
func ConfigureMQTT(configDirectory string, configuration *models.Configuration, communication *models.Communication) mqtt.Client {
config := configuration.Config
@@ -110,7 +66,7 @@ func ConfigureMQTT(configuration *models.Configuration, communication *models.Co
PREV_AgentKey = configuration.Config.Key
if config.Offline == "true" {
log.Log.Info("ConfigureMQTT: not starting as running in Offline mode.")
log.Log.Info("routers.mqtt.main.ConfigureMQTT(): not starting as running in Offline mode.")
} else {
opts := mqtt.NewClientOptions()
@@ -119,7 +75,7 @@ func ConfigureMQTT(configuration *models.Configuration, communication *models.Co
// and share and receive messages to/from.
mqttURL := config.MQTTURI
opts.AddBroker(mqttURL)
log.Log.Info("ConfigureMQTT: Set broker uri " + mqttURL)
log.Log.Debug("routers.mqtt.main.ConfigureMQTT(): Set broker uri " + mqttURL)
// Our MQTT broker can have username/password credentials
// to protect it from the outside.
@@ -128,8 +84,8 @@ func ConfigureMQTT(configuration *models.Configuration, communication *models.Co
if mqtt_username != "" || mqtt_password != "" {
opts.SetUsername(mqtt_username)
opts.SetPassword(mqtt_password)
log.Log.Info("ConfigureMQTT: Set username " + mqtt_username)
log.Log.Info("ConfigureMQTT: Set password " + mqtt_password)
log.Log.Debug("routers.mqtt.main.ConfigureMQTT(): Set username " + mqtt_username)
log.Log.Debug("routers.mqtt.main.ConfigureMQTT(): Set password " + mqtt_password)
}
// Some extra options to make sure the connection behaves
@@ -165,40 +121,21 @@ func ConfigureMQTT(configuration *models.Configuration, communication *models.Co
}
opts.SetClientID(mqttClientID)
log.Log.Info("ConfigureMQTT: Set ClientID " + mqttClientID)
log.Log.Info("routers.mqtt.main.ConfigureMQTT(): Set ClientID " + mqttClientID)
rand.Seed(time.Now().UnixNano())
webrtc.CandidateArrays = make(map[string](chan string))
opts.OnConnect = func(c mqtt.Client) {
// We managed to connect to the MQTT broker, hurray!
log.Log.Info("ConfigureMQTT: " + mqttClientID + " connected to " + mqttURL)
log.Log.Info("routers.mqtt.main.ConfigureMQTT(): " + mqttClientID + " connected to " + mqttURL)
// Create a subscription for listen and reply
MQTTListenerHandler(c, hubKey, configuration, communication)
// Create a subscription to know whether to send out a livestream or not.
MQTTListenerHandleLiveSD(c, hubKey, configuration, communication)
// Create a subscription for the WEBRTC livestream.
MQTTListenerHandleLiveHDHandshake(c, hubKey, configuration, communication)
// Create a subscription for keeping alive the WEBRTC livestream.
MQTTListenerHandleLiveHDKeepalive(c, hubKey, configuration, communication)
// Create a subscription to listen to the number of WEBRTC peers.
MQTTListenerHandleLiveHDPeers(c, hubKey, configuration, communication)
// Create a subscription to listen for WEBRTC candidates.
MQTTListenerHandleLiveHDCandidates(c, hubKey, configuration, communication)
// Create a subscription to listen for ONVIF actions: e.g. PTZ, Zoom, etc.
MQTTListenerHandleONVIF(c, hubKey, configuration, communication)
MQTTListenerHandler(c, hubKey, configDirectory, configuration, communication)
}
}
mqc := mqtt.NewClient(opts)
if token := mqc.Connect(); token.WaitTimeout(3 * time.Second) {
if token.Error() != nil {
log.Log.Error("ConfigureMQTT: unable to establish mqtt broker connection, error was: " + token.Error().Error())
log.Log.Error("routers.mqtt.main.ConfigureMQTT(): unable to establish mqtt broker connection, error was: " + token.Error().Error())
}
}
return mqc
@@ -207,12 +144,12 @@ func ConfigureMQTT(configuration *models.Configuration, communication *models.Co
return nil
}
func MQTTListenerHandler(mqttClient mqtt.Client, hubKey string, configuration *models.Configuration, communication *models.Communication) {
func MQTTListenerHandler(mqttClient mqtt.Client, hubKey string, configDirectory string, configuration *models.Configuration, communication *models.Communication) {
if hubKey == "" {
log.Log.Info("MQTTListenerHandler: no hub key provided, not subscribing to kerberos/hub/{hubkey}")
log.Log.Info("routers.mqtt.main.MQTTListenerHandler(): no hub key provided, not subscribing to kerberos/hub/{hubkey}")
} else {
topicOnvif := fmt.Sprintf("kerberos/agent/%s", hubKey)
mqttClient.Subscribe(topicOnvif, 1, func(c mqtt.Client, msg mqtt.Message) {
agentListener := fmt.Sprintf("kerberos/agent/%s", hubKey)
mqttClient.Subscribe(agentListener, 1, func(c mqtt.Client, msg mqtt.Message) {
// Decode the message, we are expecting following format.
// {
@@ -223,50 +160,131 @@ func MQTTListenerHandler(mqttClient mqtt.Client, hubKey string, configuration *m
// payload: Payload, "a json object which might be encrypted"
// }
var message Message
var message models.Message
json.Unmarshal(msg.Payload(), &message)
if message.Mid != "" && message.Timestamp != 0 {
// Messages might be encrypted, if so we'll
// need to decrypt them.
var payload Payload
if message.Encrypted {
// We'll find out the key we use to decrypt the message.
// TODO -> still needs to be implemented.
// Use the fingerprint to act accordingly.
// We will receive all messages from our hub, so we'll need to filter to the relevant device.
if message.Mid != "" && message.Timestamp != 0 && message.DeviceId == configuration.Config.Key {
var payload models.Payload
// Messages might be hidden; if so, we'll need to decrypt them using the Kerberos Hub private key.
if message.Hidden && configuration.Config.HubEncryption == "true" {
hiddenValue := message.Payload.HiddenValue
if len(hiddenValue) > 0 {
privateKey := configuration.Config.HubPrivateKey
if privateKey != "" {
data, err := base64.StdEncoding.DecodeString(hiddenValue)
if err != nil {
return
}
visibleValue, err := encryption.AesDecrypt(data, privateKey)
if err != nil {
log.Log.Error("routers.mqtt.main.MQTTListenerHandler(): error decrypting message: " + err.Error())
return
}
json.Unmarshal(visibleValue, &payload)
message.Payload = payload
} else {
log.Log.Error("routers.mqtt.main.MQTTListenerHandler(): error decrypting message, no private key provided.")
}
}
}
// Messages might be end-to-end encrypted; if so, we'll need to decrypt them
// using our own keys.
if message.Encrypted && configuration.Config.Encryption != nil && configuration.Config.Encryption.Enabled == "true" {
encryptedValue := message.Payload.EncryptedValue
if len(encryptedValue) > 0 {
symmetricKey := configuration.Config.Encryption.SymmetricKey
privateKey := configuration.Config.Encryption.PrivateKey
r := strings.NewReader(privateKey)
pemBytes, _ := ioutil.ReadAll(r)
block, _ := pem.Decode(pemBytes)
if block == nil {
log.Log.Error("routers.mqtt.main.MQTTListenerHandler(): error decoding PEM block containing private key")
return
} else {
// Parse private key
b := block.Bytes
key, err := x509.ParsePKCS8PrivateKey(b)
if err != nil {
log.Log.Error("routers.mqtt.main.MQTTListenerHandler(): error parsing private key: " + err.Error())
return
} else {
// Convert key to *rsa.PrivateKey
rsaKey, _ := key.(*rsa.PrivateKey)
// Get encrypted key from message, delimited by :::
encryptedKey := strings.Split(encryptedValue, ":::")[0] // encrypted with RSA
encryptedValue := strings.Split(encryptedValue, ":::")[1] // encrypted with AES
// Convert encrypted value to []byte
decryptedKey, err := encryption.DecryptWithPrivateKey(encryptedKey, rsaKey)
if decryptedKey != nil {
if string(decryptedKey) == symmetricKey {
// Decrypt value with decryptedKey
data, err := base64.StdEncoding.DecodeString(encryptedValue)
if err != nil {
return
}
decryptedValue, err := encryption.AesDecrypt(data, string(decryptedKey))
if err != nil {
log.Log.Error("routers.mqtt.main.MQTTListenerHandler(): error decrypting message: " + err.Error())
return
}
json.Unmarshal(decryptedValue, &payload)
} else {
log.Log.Error("routers.mqtt.main.MQTTListenerHandler(): error decrypting message, asymmetric keys do not match.")
return
}
} else if err != nil {
log.Log.Error("routers.mqtt.main.MQTTListenerHandler(): error decrypting message: " + err.Error())
return
}
}
}
}
} else {
payload = message.Payload
}
// We will receive all messages from our hub, so we'll need to filter to the relevant device.
if payload.DeviceId != configuration.Config.Key {
// Not relevant for this device, so we'll ignore it.
} else {
// We'll find out which message we received, and act accordingly.
switch payload.Action {
case "record":
HandleRecording(mqttClient, hubKey, payload, configuration, communication)
case "get-ptz-position":
HandleGetPTZPosition(mqttClient, hubKey, payload, configuration, communication)
case "update-ptz-position":
HandleUpdatePTZPosition(mqttClient, hubKey, payload, configuration, communication)
}
// We'll find out which message we received, and act accordingly.
log.Log.Info("routers.mqtt.main.MQTTListenerHandler(): received message with action: " + payload.Action)
switch payload.Action {
case "record":
go HandleRecording(mqttClient, hubKey, payload, configuration, communication)
case "get-audio-backchannel":
go HandleAudio(mqttClient, hubKey, payload, configuration, communication)
case "get-ptz-position":
go HandleGetPTZPosition(mqttClient, hubKey, payload, configuration, communication)
case "update-ptz-position":
go HandleUpdatePTZPosition(mqttClient, hubKey, payload, configuration, communication)
case "navigate-ptz":
go HandleNavigatePTZ(mqttClient, hubKey, payload, configuration, communication)
case "request-config":
go HandleRequestConfig(mqttClient, hubKey, payload, configuration, communication)
case "update-config":
go HandleUpdateConfig(mqttClient, hubKey, payload, configDirectory, configuration, communication)
case "request-sd-stream":
go HandleRequestSDStream(mqttClient, hubKey, payload, configuration, communication)
case "request-hd-stream":
go HandleRequestHDStream(mqttClient, hubKey, payload, configuration, communication)
case "receive-hd-candidates":
go HandleReceiveHDCandidates(mqttClient, hubKey, payload, configuration, communication)
case "trigger-relay":
go HandleTriggerRelay(mqttClient, hubKey, payload, configuration, communication)
}
}
})
}
}
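The filter-then-switch above can be read as a dispatch table: drop messages addressed to other devices, then route on the action name. A sketch that returns the chosen handler name so the routing is observable (the handler names mirror the cases above; the map itself is illustrative, not the agent's actual mechanism):

```go
package main

import "fmt"

type payload struct {
	Action   string
	DeviceId string
}

// dispatch mirrors the device filter and action switch in
// MQTTListenerHandler. It returns the handler name, or "" when
// the message is dropped.
func dispatch(deviceKey string, p payload) string {
	if p.DeviceId != deviceKey {
		return "" // not for this device, ignore
	}
	handlers := map[string]string{
		"record":              "HandleRecording",
		"get-ptz-position":    "HandleGetPTZPosition",
		"update-ptz-position": "HandleUpdatePTZPosition",
		"request-config":      "HandleRequestConfig",
		"trigger-relay":       "HandleTriggerRelay",
	}
	return handlers[p.Action]
}

func main() {
	fmt.Println(dispatch("cam1", payload{Action: "record", DeviceId: "cam1"})) // HandleRecording
	fmt.Println(dispatch("cam1", payload{Action: "record", DeviceId: "cam2"})) // "" (dropped)
}
```

Note that the real handlers run as goroutines (`go HandleRecording(...)`), so a slow handler can't stall the single MQTT subscription.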
// We received a recording request, we'll send it to the motion handler.
type RecordPayload struct {
Timestamp int64 `json:"timestamp"` // timestamp of the recording request.
}
func HandleRecording(mqttClient mqtt.Client, hubKey string, payload Payload, configuration *models.Configuration, communication *models.Communication) {
func HandleRecording(mqttClient mqtt.Client, hubKey string, payload models.Payload, configuration *models.Configuration, communication *models.Communication) {
value := payload.Value
// Convert map[string]interface{} to RecordPayload
jsonData, _ := json.Marshal(value)
var recordPayload RecordPayload
var recordPayload models.RecordPayload
json.Unmarshal(jsonData, &recordPayload)
if recordPayload.Timestamp != 0 {
@@ -277,29 +295,41 @@ func HandleRecording(mqttClient mqtt.Client, hubKey string, payload Payload, con
}
}
// We received a preset position request, we'll request it through onvif and send it back.
type PTZPositionPayload struct {
Timestamp int64 `json:"timestamp"` // timestamp of the preset request.
func HandleAudio(mqttClient mqtt.Client, hubKey string, payload models.Payload, configuration *models.Configuration, communication *models.Communication) {
value := payload.Value
// Convert map[string]interface{} to AudioPayload
jsonData, _ := json.Marshal(value)
var audioPayload models.AudioPayload
json.Unmarshal(jsonData, &audioPayload)
if audioPayload.Timestamp != 0 {
audioDataPartial := models.AudioDataPartial{
Timestamp: audioPayload.Timestamp,
Data: audioPayload.Data,
}
communication.HandleAudio <- audioDataPartial
}
}
func HandleGetPTZPosition(mqttClient mqtt.Client, hubKey string, payload Payload, configuration *models.Configuration, communication *models.Communication) {
func HandleGetPTZPosition(mqttClient mqtt.Client, hubKey string, payload models.Payload, configuration *models.Configuration, communication *models.Communication) {
value := payload.Value
// Convert map[string]interface{} to PTZPositionPayload
jsonData, _ := json.Marshal(value)
var positionPayload PTZPositionPayload
var positionPayload models.PTZPositionPayload
json.Unmarshal(jsonData, &positionPayload)
if positionPayload.Timestamp != 0 {
// Get Position from device
pos, err := onvif.GetPositionFromDevice(*configuration)
if err != nil {
log.Log.Error("HandlePTZPosition: error getting position from device: " + err.Error())
log.Log.Error("routers.mqtt.main.HandlePTZPosition(): error getting position from device: " + err.Error())
} else {
// Needs to be wrapped!
posString := fmt.Sprintf("%f,%f,%f", pos.PanTilt.X, pos.PanTilt.Y, pos.Zoom.X)
message := Message{
Payload: Payload{
message := models.Message{
Payload: models.Payload{
Action: "ptz-position",
DeviceId: configuration.Config.Key,
Value: map[string]interface{}{
@@ -308,17 +338,17 @@ func HandleGetPTZPosition(mqttClient mqtt.Client, hubKey string, payload Payload
},
},
}
payload, err := PackageMQTTMessage(message)
payload, err := models.PackageMQTTMessage(configuration, message)
if err == nil {
mqttClient.Publish("kerberos/hub/"+hubKey, 0, false, payload)
mqttClient.Publish("kerberos/hub/"+hubKey, 2, false, payload)
} else {
log.Log.Info("HandlePTZPosition: something went wrong while sending position to hub: " + string(payload))
log.Log.Info("routers.mqtt.main.HandlePTZPosition(): something went wrong while sending position to hub: " + string(payload))
}
}
}
}
func HandleUpdatePTZPosition(mqttClient mqtt.Client, hubKey string, payload Payload, configuration *models.Configuration, communication *models.Communication) {
func HandleUpdatePTZPosition(mqttClient mqtt.Client, hubKey string, payload models.Payload, configuration *models.Configuration, communication *models.Communication) {
value := payload.Value
// Convert map[string]interface{} to PTZPositionPayload
@@ -329,9 +359,206 @@ func HandleUpdatePTZPosition(mqttClient mqtt.Client, hubKey string, payload Payl
if onvifAction.Action != "" {
if communication.CameraConnected {
communication.HandleONVIF <- onvifAction
log.Log.Info("MQTTListenerHandleONVIF: Received an action - " + onvifAction.Action)
log.Log.Info("routers.mqtt.main.MQTTListenerHandleONVIF(): Received an action - " + onvifAction.Action)
} else {
log.Log.Info("MQTTListenerHandleONVIF: received action, but camera is not connected.")
log.Log.Info("routers.mqtt.main.MQTTListenerHandleONVIF(): received action, but camera is not connected.")
}
}
}
func HandleRequestConfig(mqttClient mqtt.Client, hubKey string, payload models.Payload, configuration *models.Configuration, communication *models.Communication) {
value := payload.Value
// Convert map[string]interface{} to RequestConfigPayload
jsonData, _ := json.Marshal(value)
var configPayload models.RequestConfigPayload
json.Unmarshal(jsonData, &configPayload)
if configPayload.Timestamp != 0 {
// Get Config from the device
key := configuration.Config.Key
name := configuration.Config.Name
if configuration.Config.FriendlyName != "" {
name = configuration.Config.FriendlyName
}
if key != "" && name != "" {
// Copy the config, as we don't want to share the encryption part.
deepCopy := configuration.Config
var configMap map[string]interface{}
inrec, _ := json.Marshal(deepCopy)
json.Unmarshal(inrec, &configMap)
// Unset encryption part.
delete(configMap, "encryption")
message := models.Message{
Payload: models.Payload{
Action: "receive-config",
DeviceId: configuration.Config.Key,
Value: configMap,
},
}
payload, err := models.PackageMQTTMessage(configuration, message)
if err == nil {
mqttClient.Publish("kerberos/hub/"+hubKey, 2, false, payload)
} else {
log.Log.Info("routers.mqtt.main.HandleRequestConfig(): something went wrong while sending config to hub: " + string(payload))
}
} else {
log.Log.Info("routers.mqtt.main.HandleRequestConfig(): no config available")
}
log.Log.Info("routers.mqtt.main.HandleRequestConfig(): Received a request for the config")
}
}
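`HandleRequestConfig` scrubs the encryption section by round-tripping the config through a generic map before publishing. That trick in isolation, with a trimmed, hypothetical config struct:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// config is a trimmed, hypothetical stand-in for models.Config.
type config struct {
	Key        string `json:"key"`
	Name       string `json:"name"`
	Encryption string `json:"encryption,omitempty"`
}

// scrubConfig marshals the config, reloads it as a generic map and
// deletes the sensitive section, like the deepCopy /
// delete(configMap, "encryption") step above.
func scrubConfig(c config) (map[string]interface{}, error) {
	raw, err := json.Marshal(c)
	if err != nil {
		return nil, err
	}
	var m map[string]interface{}
	if err := json.Unmarshal(raw, &m); err != nil {
		return nil, err
	}
	delete(m, "encryption")
	return m, nil
}

func main() {
	m, err := scrubConfig(config{Key: "cam1", Name: "front door", Encryption: "secret"})
	fmt.Println(m, err)
}
```

The map (not the struct) is what ends up in the `receive-config` message, so the hub never sees the key material.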
func HandleUpdateConfig(mqttClient mqtt.Client, hubKey string, payload models.Payload, configDirectory string, configuration *models.Configuration, communication *models.Communication) {
value := payload.Value
// Convert map[string]interface{} to UpdateConfigPayload
jsonData, _ := json.Marshal(value)
var configPayload models.UpdateConfigPayload
json.Unmarshal(jsonData, &configPayload)
if configPayload.Timestamp != 0 {
config := configPayload.Config
// Make sure to remove Encryption part, as we don't want to save it.
config.Encryption = configuration.Config.Encryption
err := configService.SaveConfig(configDirectory, config, configuration, communication)
if err == nil {
log.Log.Info("routers.mqtt.main.HandleUpdateConfig(): Config updated")
message := models.Message{
Payload: models.Payload{
Action: "acknowledge-update-config",
DeviceId: configuration.Config.Key,
},
}
payload, err := models.PackageMQTTMessage(configuration, message)
if err == nil {
mqttClient.Publish("kerberos/hub/"+hubKey, 2, false, payload)
} else {
log.Log.Info("routers.mqtt.main.HandleUpdateConfig(): something went wrong while sending acknowledge config to hub: " + string(payload))
}
} else {
log.Log.Info("routers.mqtt.main.HandleUpdateConfig(): Config update failed")
}
}
}
func HandleRequestSDStream(mqttClient mqtt.Client, hubKey string, payload models.Payload, configuration *models.Configuration, communication *models.Communication) {
value := payload.Value
// Convert map[string]interface{} to RequestSDStreamPayload
jsonData, _ := json.Marshal(value)
var requestSDStreamPayload models.RequestSDStreamPayload
json.Unmarshal(jsonData, &requestSDStreamPayload)
if requestSDStreamPayload.Timestamp != 0 {
if communication.CameraConnected {
select {
case communication.HandleLiveSD <- time.Now().Unix():
default:
}
log.Log.Info("routers.mqtt.main.HandleRequestSDStream(): received request to livestream.")
} else {
log.Log.Info("routers.mqtt.main.HandleRequestSDStream(): received request to livestream, but camera is not connected.")
}
}
}
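The `select` with a `default` case in `HandleRequestSDStream` is a non-blocking send: when nothing is draining `communication.HandleLiveSD`, the request is simply dropped instead of blocking the MQTT callback. The pattern in isolation:

```go
package main

import "fmt"

// trySend performs a non-blocking send, as the select/default above
// does with communication.HandleLiveSD. It reports whether the value
// was accepted.
func trySend(ch chan int64, v int64) bool {
	select {
	case ch <- v:
		return true
	default:
		return false
	}
}

func main() {
	ch := make(chan int64, 1)
	fmt.Println(trySend(ch, 1)) // true: buffer had room
	fmt.Println(trySend(ch, 2)) // false: buffer full, request dropped
}
```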
func HandleRequestHDStream(mqttClient mqtt.Client, hubKey string, payload models.Payload, configuration *models.Configuration, communication *models.Communication) {
value := payload.Value
// Convert map[string]interface{} to RequestHDStreamPayload
jsonData, _ := json.Marshal(value)
var requestHDStreamPayload models.RequestHDStreamPayload
json.Unmarshal(jsonData, &requestHDStreamPayload)
if requestHDStreamPayload.Timestamp != 0 {
if communication.CameraConnected {
// Set the Hub key, so we can send back the answer.
requestHDStreamPayload.HubKey = hubKey
select {
case communication.HandleLiveHDHandshake <- requestHDStreamPayload:
default:
}
log.Log.Info("routers.mqtt.main.HandleRequestHDStream(): received request to setup webrtc.")
} else {
log.Log.Info("routers.mqtt.main.HandleRequestHDStream(): received request to setup webrtc, but camera is not connected.")
}
}
}
func HandleReceiveHDCandidates(mqttClient mqtt.Client, hubKey string, payload models.Payload, configuration *models.Configuration, communication *models.Communication) {
value := payload.Value
// Convert map[string]interface{} to ReceiveHDCandidatesPayload
jsonData, _ := json.Marshal(value)
var receiveHDCandidatesPayload models.ReceiveHDCandidatesPayload
json.Unmarshal(jsonData, &receiveHDCandidatesPayload)
if receiveHDCandidatesPayload.Timestamp != 0 {
if communication.CameraConnected {
// Register candidate channel
key := configuration.Config.Key + "/" + receiveHDCandidatesPayload.SessionID
go webrtc.RegisterCandidates(key, receiveHDCandidatesPayload)
} else {
log.Log.Info("routers.mqtt.main.HandleReceiveHDCandidates(): received candidate, but camera is not connected.")
}
}
}
func HandleNavigatePTZ(mqttClient mqtt.Client, hubKey string, payload models.Payload, configuration *models.Configuration, communication *models.Communication) {
value := payload.Value
jsonData, _ := json.Marshal(value)
var navigatePTZPayload models.NavigatePTZPayload
json.Unmarshal(jsonData, &navigatePTZPayload)
if navigatePTZPayload.Timestamp != 0 {
if communication.CameraConnected {
action := navigatePTZPayload.Action
var onvifAction models.OnvifAction
json.Unmarshal([]byte(action), &onvifAction)
communication.HandleONVIF <- onvifAction
log.Log.Info("routers.mqtt.main.HandleNavigatePTZ(): Received an action - " + onvifAction.Action)
} else {
log.Log.Info("routers.mqtt.main.HandleNavigatePTZ(): received action, but camera is not connected.")
}
}
}
func HandleTriggerRelay(mqttClient mqtt.Client, hubKey string, payload models.Payload, configuration *models.Configuration, communication *models.Communication) {
value := payload.Value
jsonData, _ := json.Marshal(value)
var triggerRelayPayload models.TriggerRelay
json.Unmarshal(jsonData, &triggerRelayPayload)
if triggerRelayPayload.Timestamp != 0 {
if communication.CameraConnected {
// Get token (name of relay)
token := triggerRelayPayload.Token
// Connect to Onvif device
cameraConfiguration := configuration.Config.Capture.IPCamera
device, _, err := onvif.ConnectToOnvifDevice(&cameraConfiguration)
if err == nil {
// Trigger relay output
err := onvif.TriggerRelayOutput(device, token)
if err != nil {
log.Log.Error("routers.mqtt.main.HandleTriggerRelay(): error triggering relay: " + err.Error())
} else {
log.Log.Info("routers.mqtt.main.HandleTriggerRelay(): trigger (" + token + ") relay output.")
}
} else {
log.Log.Error("routers.mqtt.main.HandleTriggerRelay(): error connecting to device: " + err.Error())
}
} else {
log.Log.Info("routers.mqtt.main.HandleTriggerRelay(): received trigger, but camera is not connected.")
}
}
}
@@ -339,127 +566,10 @@ func HandleUpdatePTZPosition(mqttClient mqtt.Client, hubKey string, payload Payl
func DisconnectMQTT(mqttClient mqtt.Client, config *models.Config) {
if mqttClient != nil {
// Cleanup all subscriptions
// New methods
mqttClient.Unsubscribe("kerberos/agent/" + PREV_HubKey)
// Legacy methods
mqttClient.Unsubscribe("kerberos/" + PREV_HubKey + "/device/" + PREV_AgentKey + "/request-live")
mqttClient.Unsubscribe(PREV_AgentKey + "/register")
mqttClient.Unsubscribe("kerberos/webrtc/keepalivehub/" + PREV_AgentKey)
mqttClient.Unsubscribe("kerberos/webrtc/peers/" + PREV_AgentKey)
mqttClient.Unsubscribe("candidate/cloud")
mqttClient.Unsubscribe("kerberos/onvif/" + PREV_AgentKey)
mqttClient.Disconnect(1000)
mqttClient = nil
log.Log.Info("DisconnectMQTT: MQTT client disconnected.")
log.Log.Info("routers.mqtt.main.DisconnectMQTT(): MQTT client disconnected.")
}
}
// #################################################################################################
// Below you'll find legacy methods; as of now we use a single subscription, which scales better
func MQTTListenerHandleLiveSD(mqttClient mqtt.Client, hubKey string, configuration *models.Configuration, communication *models.Communication) {
config := configuration.Config
topicRequest := "kerberos/" + hubKey + "/device/" + config.Key + "/request-live"
mqttClient.Subscribe(topicRequest, 0, func(c mqtt.Client, msg mqtt.Message) {
if communication.CameraConnected {
select {
case communication.HandleLiveSD <- time.Now().Unix():
default:
}
log.Log.Info("MQTTListenerHandleLiveSD: received request to livestream.")
} else {
log.Log.Info("MQTTListenerHandleLiveSD: received request to livestream, but camera is not connected.")
}
msg.Ack()
})
}
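The `select` with a `default` case in the handler above is the standard Go idiom for a non-blocking channel send: if the receiver isn't ready, the event is dropped rather than blocking the MQTT callback. A minimal self-contained sketch of the pattern:

```go
package main

import "fmt"

// trySend attempts a non-blocking send: if the buffer is full and no
// receiver is ready, the value is dropped instead of blocking the caller.
func trySend(ch chan int64, v int64) bool {
	select {
	case ch <- v:
		return true
	default:
		return false
	}
}

func main() {
	ch := make(chan int64, 1)
	fmt.Println(trySend(ch, 1)) // buffer has room: true
	fmt.Println(trySend(ch, 2)) // buffer full, value dropped: false
}
```

Dropping events this way is a deliberate trade-off: a slow or absent consumer cannot stall the subscription callback.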
func MQTTListenerHandleLiveHDHandshake(mqttClient mqtt.Client, hubKey string, configuration *models.Configuration, communication *models.Communication) {
config := configuration.Config
topicRequestWebRtc := config.Key + "/register"
mqttClient.Subscribe(topicRequestWebRtc, 0, func(c mqtt.Client, msg mqtt.Message) {
if communication.CameraConnected {
var sdp models.SDPPayload
json.Unmarshal(msg.Payload(), &sdp)
select {
case communication.HandleLiveHDHandshake <- sdp:
default:
}
log.Log.Info("MQTTListenerHandleLiveHDHandshake: received request to setup webrtc.")
} else {
log.Log.Info("MQTTListenerHandleLiveHDHandshake: received request to setup webrtc, but camera is not connected.")
}
msg.Ack()
})
}
func MQTTListenerHandleLiveHDKeepalive(mqttClient mqtt.Client, hubKey string, configuration *models.Configuration, communication *models.Communication) {
config := configuration.Config
topicKeepAlive := fmt.Sprintf("kerberos/webrtc/keepalivehub/%s", config.Key)
mqttClient.Subscribe(topicKeepAlive, 0, func(c mqtt.Client, msg mqtt.Message) {
if communication.CameraConnected {
alive := string(msg.Payload())
communication.HandleLiveHDKeepalive <- alive
log.Log.Info("MQTTListenerHandleLiveHDKeepalive: Received keepalive: " + alive)
} else {
log.Log.Info("MQTTListenerHandleLiveHDKeepalive: received keepalive, but camera is not connected.")
}
})
}
func MQTTListenerHandleLiveHDPeers(mqttClient mqtt.Client, hubKey string, configuration *models.Configuration, communication *models.Communication) {
config := configuration.Config
topicPeers := fmt.Sprintf("kerberos/webrtc/peers/%s", config.Key)
mqttClient.Subscribe(topicPeers, 0, func(c mqtt.Client, msg mqtt.Message) {
if communication.CameraConnected {
peerCount := string(msg.Payload())
communication.HandleLiveHDPeers <- peerCount
log.Log.Info("MQTTListenerHandleLiveHDPeers: Number of peers listening: " + peerCount)
} else {
log.Log.Info("MQTTListenerHandleLiveHDPeers: received peer count, but camera is not connected.")
}
})
}
func MQTTListenerHandleLiveHDCandidates(mqttClient mqtt.Client, hubKey string, configuration *models.Configuration, communication *models.Communication) {
config := configuration.Config
topicCandidates := "candidate/cloud"
mqttClient.Subscribe(topicCandidates, 0, func(c mqtt.Client, msg mqtt.Message) {
if communication.CameraConnected {
var candidate models.Candidate
json.Unmarshal(msg.Payload(), &candidate)
if candidate.CloudKey == config.Key {
key := candidate.CloudKey + "/" + candidate.Cuuid
candidatesExists := false
var channel chan string
for !candidatesExists {
webrtc.CandidatesMutex.Lock()
channel, candidatesExists = webrtc.CandidateArrays[key]
webrtc.CandidatesMutex.Unlock()
}
log.Log.Info("MQTTListenerHandleLiveHDCandidates: " + string(msg.Payload()))
channel <- string(msg.Payload())
}
} else {
log.Log.Info("MQTTListenerHandleLiveHDCandidates: received candidate, but camera is not connected.")
}
})
}
func MQTTListenerHandleONVIF(mqttClient mqtt.Client, hubKey string, configuration *models.Configuration, communication *models.Communication) {
config := configuration.Config
topicOnvif := fmt.Sprintf("kerberos/onvif/%s", config.Key)
mqttClient.Subscribe(topicOnvif, 0, func(c mqtt.Client, msg mqtt.Message) {
if communication.CameraConnected {
var onvifAction models.OnvifAction
json.Unmarshal(msg.Payload(), &onvifAction)
communication.HandleONVIF <- onvifAction
log.Log.Info("MQTTListenerHandleONVIF: Received an action - " + onvifAction.Action)
} else {
log.Log.Info("MQTTListenerHandleONVIF: received action, but camera is not connected.")
}
})
}


@@ -3,15 +3,17 @@ package websocket
import (
"context"
"encoding/base64"
"image"
"net/http"
"sync"
"github.com/gin-gonic/gin"
"github.com/gorilla/websocket"
"github.com/kerberos-io/agent/machinery/src/computervision"
"github.com/kerberos-io/agent/machinery/src/capture"
"github.com/kerberos-io/agent/machinery/src/log"
"github.com/kerberos-io/agent/machinery/src/models"
"github.com/kerberos-io/joy4/cgo/ffmpeg"
"github.com/kerberos-io/agent/machinery/src/packets"
"github.com/kerberos-io/agent/machinery/src/utils"
)
type Message struct {
@@ -47,7 +49,7 @@ var upgrader = websocket.Upgrader{
},
}
func WebsocketHandler(c *gin.Context, communication *models.Communication) {
func WebsocketHandler(c *gin.Context, configuration *models.Configuration, communication *models.Communication, captureDevice *capture.Capture) {
w := c.Writer
r := c.Request
conn, err := upgrader.Upgrade(w, r, nil)
@@ -58,12 +60,17 @@ func WebsocketHandler(c *gin.Context, communication *models.Communication) {
var message Message
err = conn.ReadJSON(&message)
if err != nil {
log.Log.Error("routers.websocket.main.WebsocketHandler(): " + err.Error())
return
}
clientID := message.ClientID
if sockets[clientID] == nil {
connection := new(Connection)
connection.Socket = conn
sockets[clientID] = connection
sockets[clientID].Cancels = make(map[string]context.CancelFunc)
log.Log.Info("routers.websocket.main.WebsocketHandler(): " + clientID + ": connected.")
}
// Continuously read messages
@@ -85,14 +92,14 @@ func WebsocketHandler(c *gin.Context, communication *models.Communication) {
if exists {
sockets[clientID].Cancels["stream-sd"]()
} else {
log.Log.Error("Streaming sd does not exists for " + clientID)
log.Log.Error("routers.websocket.main.WebsocketHandler(): streaming sd does not exists for " + clientID)
}
case "stream-sd":
if communication.CameraConnected {
_, exists := sockets[clientID].Cancels["stream-sd"]
if exists {
log.Log.Info("Already streaming sd for " + clientID)
log.Log.Debug("routers.websocket.main.WebsocketHandler(): already streaming sd for " + clientID)
} else {
startStream := Message{
ClientID: clientID,
@@ -105,7 +112,7 @@ func WebsocketHandler(c *gin.Context, communication *models.Communication) {
ctx, cancel := context.WithCancel(context.Background())
sockets[clientID].Cancels["stream-sd"] = cancel
go ForwardSDStream(ctx, clientID, sockets[clientID], communication)
go ForwardSDStream(ctx, clientID, sockets[clientID], configuration, communication, captureDevice)
}
}
}
@@ -119,37 +126,49 @@ func WebsocketHandler(c *gin.Context, communication *models.Communication) {
_, exists := sockets[clientID]
if exists {
delete(sockets, clientID)
log.Log.Info("WebsocketHandler: " + clientID + ": terminated and disconnected websocket connection.")
log.Log.Info("routers.websocket.main.WebsocketHandler(): " + clientID + ": terminated and disconnected websocket connection.")
}
}
}
func ForwardSDStream(ctx context.Context, clientID string, connection *Connection, communication *models.Communication) {
func ForwardSDStream(ctx context.Context, clientID string, connection *Connection, configuration *models.Configuration, communication *models.Communication, captureDevice *capture.Capture) {
queue := communication.Queue
cursor := queue.Latest()
decoder := communication.Decoder
decoderMutex := communication.DecoderMutex
var queue *packets.Queue
var cursor *packets.QueueCursor
// Allocate ffmpeg.VideoFrame
frame := ffmpeg.AllocVideoFrame()
// We'll pick the right client and decoder.
rtspClient := captureDevice.RTSPSubClient
if rtspClient != nil {
queue = communication.SubQueue
cursor = queue.Latest()
} else {
rtspClient = captureDevice.RTSPClient
queue = communication.Queue
cursor = queue.Latest()
}
logreader:
for {
var encodedImage string
if queue != nil && cursor != nil && decoder != nil {
if queue != nil && cursor != nil && rtspClient != nil {
pkt, err := cursor.ReadPacket()
if err == nil {
if !pkt.IsKeyFrame {
continue
}
img, err := computervision.GetRawImage(frame, pkt, decoder, decoderMutex)
var img image.YCbCr
img, err = (*rtspClient).DecodePacket(pkt)
if err == nil {
bytes, _ := computervision.ImageToBytes(&img.Image)
config := configuration.Config
// Resize the image to the base width and height
imageResized, _ := utils.ResizeImage(&img, uint(config.Capture.IPCamera.BaseWidth), uint(config.Capture.IPCamera.BaseHeight))
bytes, _ := utils.ImageToBytes(imageResized)
encodedImage = base64.StdEncoding.EncodeToString(bytes)
} else {
continue
}
} else {
log.Log.Error("ForwardSDStream:" + err.Error())
log.Log.Error("routers.websocket.main.ForwardSDStream():" + err.Error())
break logreader
}
}
@@ -163,7 +182,7 @@ logreader:
}
err := connection.WriteJson(startStrean)
if err != nil {
log.Log.Error("ForwardSDStream:" + err.Error())
log.Log.Error("routers.websocket.main.ForwardSDStream():" + err.Error())
break logreader
}
select {
@@ -173,16 +192,14 @@ logreader:
}
}
frame.Free()
// Close socket for streaming
_, exists := connection.Cancels["stream-sd"]
if exists {
delete(connection.Cancels, "stream-sd")
} else {
log.Log.Error("Streaming sd does not exists for " + clientID)
log.Log.Error("routers.websocket.main.ForwardSDStream(): streaming sd does not exists for " + clientID)
}
// Send stop streaming message
log.Log.Info("ForwardSDStream: stop sending streaming over websocket")
log.Log.Info("routers.websocket.main.ForwardSDStream(): stop sending streaming over websocket")
}


@@ -1,247 +0,0 @@
package rtsp
import (
"fmt"
"image"
"image/jpeg"
"log"
"os"
"strconv"
"time"
"github.com/bluenviron/gortsplib/v3"
"github.com/bluenviron/gortsplib/v3/pkg/base"
"github.com/bluenviron/gortsplib/v3/pkg/formats"
"github.com/bluenviron/gortsplib/v3/pkg/formats/rtph265"
"github.com/bluenviron/gortsplib/v3/pkg/url"
"github.com/bluenviron/mediacommon/pkg/codecs/h264"
"github.com/pion/rtp"
)
func CreateClient() {
c := &gortsplib.Client{
OnRequest: func(req *base.Request) {
//log.Log.Info(logger.Debug, "c->s %v", req)
},
OnResponse: func(res *base.Response) {
//s.Log(logger.Debug, "s->c %v", res)
},
OnTransportSwitch: func(err error) {
//s.Log(logger.Warn, err.Error())
},
OnPacketLost: func(err error) {
//s.Log(logger.Warn, err.Error())
},
OnDecodeError: func(err error) {
//s.Log(logger.Warn, err.Error())
},
}
u, err := url.Parse("rtsp://admin:admin@192.168.1.111") //"rtsp://seing:bud-edPTQc@109.159.199.103:554/rtsp/defaultPrimary?mtu=1440&streamType=m") //
if err != nil {
panic(err)
}
err = c.Start(u.Scheme, u.Host)
if err != nil {
//return err
}
defer c.Close()
medias, baseURL, _, err := c.Describe(u)
if err != nil {
//return err
}
fmt.Println(medias)
// find the H265 media and format
var forma *formats.H265
medi := medias.FindFormat(&forma)
if medi == nil {
panic("media not found")
}
// setup RTP/H265 -> H265 access unit decoder
rtpDec := forma.CreateDecoder()
// setup H264 -> MPEG-TS muxer
//pegtsMuxer, err := newMPEGTSMuxer(forma.SPS, forma.PPS)
if err != nil {
panic(err)
}
// setup H264 -> raw frames decoder
/*h264RawDec, err := newH264Decoder()
if err != nil {
panic(err)
}
defer h264RawDec.close()
// if SPS and PPS are present into the SDP, send them to the decoder
if forma.SPS != nil {
h264RawDec.decode(forma.SPS)
}
if forma.PPS != nil {
h264RawDec.decode(forma.PPS)
}*/
readErr := make(chan error)
go func() {
readErr <- func() error {
// Get codecs
for _, medi := range medias {
for _, forma := range medi.Formats {
fmt.Println(forma)
}
}
err = c.SetupAll(medias, baseURL)
if err != nil {
return err
}
for _, medi := range medias {
for _, forma := range medi.Formats {
c.OnPacketRTP(medi, forma, func(pkt *rtp.Packet) {
au, pts, err := rtpDec.Decode(pkt)
if err != nil {
if err != rtph265.ErrNonStartingPacketAndNoPrevious && err != rtph265.ErrMorePacketsNeeded {
log.Printf("ERR: %v", err)
}
return
}
for _, nalu := range au {
log.Printf("received NALU with PTS %v and size %d\n", pts, len(nalu))
}
/*// extract access unit from RTP packets
// DecodeUntilMarker is necessary for the DTS extractor to work
if pkt.PayloadType == 96 {
au, pts, err := rtpDec.DecodeUntilMarker(pkt)
if err != nil {
if err != rtph264.ErrNonStartingPacketAndNoPrevious && err != rtph264.ErrMorePacketsNeeded {
log.Printf("ERR: %v", err)
}
return
}
// encode the access unit into MPEG-TS
mpegtsMuxer.encode(au, pts)
for _, nalu := range au {
// convert NALUs into RGBA frames
img, err := h264RawDec.decode(nalu)
if err != nil {
panic(err)
}
// wait for a frame
if img == nil {
continue
}
// convert frame to JPEG and save to file
err = saveToFile(img)
if err != nil {
panic(err)
}
}
}*/
})
}
}
_, err = c.Play(nil)
if err != nil {
return err
}
return c.Wait()
}()
}()
for {
select {
case err := <-readErr:
fmt.Println(err)
}
}
}
func saveToFile(img image.Image) error {
// create file
fname := strconv.FormatInt(time.Now().UnixNano()/int64(time.Millisecond), 10) + ".jpg"
f, err := os.Create(fname)
if err != nil {
panic(err)
}
defer f.Close()
log.Println("saving", fname)
// convert to jpeg
return jpeg.Encode(f, img, &jpeg.Options{
Quality: 60,
})
}
// extract SPS and PPS without decoding RTP packets
func rtpH264ExtractSPSPPS(pkt *rtp.Packet) ([]byte, []byte) {
if len(pkt.Payload) < 1 {
return nil, nil
}
typ := h264.NALUType(pkt.Payload[0] & 0x1F)
switch typ {
case h264.NALUTypeSPS:
return pkt.Payload, nil
case h264.NALUTypePPS:
return nil, pkt.Payload
case h264.NALUTypeSTAPA:
payload := pkt.Payload[1:]
var sps []byte
var pps []byte
for len(payload) > 0 {
if len(payload) < 2 {
break
}
size := uint16(payload[0])<<8 | uint16(payload[1])
payload = payload[2:]
if size == 0 {
break
}
if int(size) > len(payload) {
return nil, nil
}
nalu := payload[:size]
payload = payload[size:]
typ = h264.NALUType(nalu[0] & 0x1F)
switch typ {
case h264.NALUTypeSPS:
sps = nalu
case h264.NALUTypePPS:
pps = nalu
}
}
return sps, pps
default:
return nil, nil
}
}
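The STAP-A branch above walks a payload of size-prefixed NALUs: a 1-byte aggregation header followed by repeated (2-byte big-endian length, NALU) pairs. A self-contained worked example of that layout, with the NALU type constants inlined rather than taken from mediacommon:

```go
package main

import "fmt"

const (
	naluTypeSPS   = 7
	naluTypePPS   = 8
	naluTypeSTAPA = 24
)

// extractFromSTAPA walks a STAP-A payload: a 1-byte aggregation header
// followed by repeated (2-byte big-endian size, NALU) pairs, and returns
// any SPS and PPS NALUs it finds.
func extractFromSTAPA(payload []byte) (sps, pps []byte) {
	if len(payload) < 1 || payload[0]&0x1F != naluTypeSTAPA {
		return nil, nil
	}
	payload = payload[1:]
	for len(payload) >= 2 {
		size := int(payload[0])<<8 | int(payload[1])
		payload = payload[2:]
		if size == 0 || size > len(payload) {
			break
		}
		nalu := payload[:size]
		payload = payload[size:]
		switch nalu[0] & 0x1F {
		case naluTypeSPS:
			sps = nalu
		case naluTypePPS:
			pps = nalu
		}
	}
	return sps, pps
}

func main() {
	// STAP-A header (0x78: type 24 plus NRI bits), then a 2-byte SPS
	// (0x67 0x42) and a 2-byte PPS (0x68 0xCE), each length-prefixed.
	pkt := []byte{0x78, 0x00, 0x02, 0x67, 0x42, 0x00, 0x02, 0x68, 0xCE}
	sps, pps := extractFromSTAPA(pkt)
	fmt.Printf("sps=% X pps=% X\n", sps, pps)
}
```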


@@ -1,140 +0,0 @@
package rtsp
import (
"fmt"
"image"
"unsafe"
)
// #cgo pkg-config: libavcodec libavutil libswscale
// #include <libavcodec/avcodec.h>
// #include <libavutil/imgutils.h>
// #include <libswscale/swscale.h>
import "C"
func frameData(frame *C.AVFrame) **C.uint8_t {
return (**C.uint8_t)(unsafe.Pointer(&frame.data[0]))
}
func frameLineSize(frame *C.AVFrame) *C.int {
return (*C.int)(unsafe.Pointer(&frame.linesize[0]))
}
// h264Decoder is a wrapper around ffmpeg's H264 decoder.
type h264Decoder struct {
codecCtx *C.AVCodecContext
srcFrame *C.AVFrame
swsCtx *C.struct_SwsContext
dstFrame *C.AVFrame
dstFramePtr []uint8
}
// newH264Decoder allocates a new h264Decoder.
func newH264Decoder() (*h264Decoder, error) {
codec := C.avcodec_find_decoder(C.AV_CODEC_ID_H264)
if codec == nil {
return nil, fmt.Errorf("avcodec_find_decoder() failed")
}
codecCtx := C.avcodec_alloc_context3(codec)
if codecCtx == nil {
return nil, fmt.Errorf("avcodec_alloc_context3() failed")
}
res := C.avcodec_open2(codecCtx, codec, nil)
if res < 0 {
C.avcodec_close(codecCtx)
return nil, fmt.Errorf("avcodec_open2() failed")
}
srcFrame := C.av_frame_alloc()
if srcFrame == nil {
C.avcodec_close(codecCtx)
return nil, fmt.Errorf("av_frame_alloc() failed")
}
return &h264Decoder{
codecCtx: codecCtx,
srcFrame: srcFrame,
}, nil
}
// close closes the decoder.
func (d *h264Decoder) close() {
if d.dstFrame != nil {
C.av_frame_free(&d.dstFrame)
}
if d.swsCtx != nil {
C.sws_freeContext(d.swsCtx)
}
C.av_frame_free(&d.srcFrame)
C.avcodec_close(d.codecCtx)
}
func (d *h264Decoder) decode(nalu []byte) (image.Image, error) {
nalu = append([]uint8{0x00, 0x00, 0x00, 0x01}, []uint8(nalu)...)
// send frame to decoder
var avPacket C.AVPacket
avPacket.data = (*C.uint8_t)(C.CBytes(nalu))
defer C.free(unsafe.Pointer(avPacket.data))
avPacket.size = C.int(len(nalu))
res := C.avcodec_send_packet(d.codecCtx, &avPacket)
if res < 0 {
return nil, nil
}
// receive frame if available
res = C.avcodec_receive_frame(d.codecCtx, d.srcFrame)
if res < 0 {
return nil, nil
}
// if frame size has changed, allocate needed objects
if d.dstFrame == nil || d.dstFrame.width != d.srcFrame.width || d.dstFrame.height != d.srcFrame.height {
if d.dstFrame != nil {
C.av_frame_free(&d.dstFrame)
}
if d.swsCtx != nil {
C.sws_freeContext(d.swsCtx)
}
d.dstFrame = C.av_frame_alloc()
d.dstFrame.format = C.AV_PIX_FMT_RGBA
d.dstFrame.width = d.srcFrame.width
d.dstFrame.height = d.srcFrame.height
d.dstFrame.color_range = C.AVCOL_RANGE_JPEG
res = C.av_frame_get_buffer(d.dstFrame, 1)
if res < 0 {
return nil, fmt.Errorf("av_frame_get_buffer() err")
}
d.swsCtx = C.sws_getContext(d.srcFrame.width, d.srcFrame.height, C.AV_PIX_FMT_YUV420P,
d.dstFrame.width, d.dstFrame.height, (int32)(d.dstFrame.format), C.SWS_BILINEAR, nil, nil, nil)
if d.swsCtx == nil {
return nil, fmt.Errorf("sws_getContext() err")
}
dstFrameSize := C.av_image_get_buffer_size((int32)(d.dstFrame.format), d.dstFrame.width, d.dstFrame.height, 1)
d.dstFramePtr = (*[1 << 30]uint8)(unsafe.Pointer(d.dstFrame.data[0]))[:dstFrameSize:dstFrameSize]
}
// convert frame from YUV420 to RGB
res = C.sws_scale(d.swsCtx, frameData(d.srcFrame), frameLineSize(d.srcFrame),
0, d.srcFrame.height, frameData(d.dstFrame), frameLineSize(d.dstFrame))
if res < 0 {
return nil, fmt.Errorf("sws_scale() err")
}
// embed frame into an image.Image
return &image.RGBA{
Pix: d.dstFramePtr,
Stride: 4 * (int)(d.dstFrame.width),
Rect: image.Rectangle{
Max: image.Point{(int)(d.dstFrame.width), (int)(d.dstFrame.height)},
},
}, nil
}


@@ -1,15 +0,0 @@
package rtsp
// mp4Muxer allows saving an H264 stream into an MP4 file.
type mp4Muxer struct {
sps []byte
pps []byte
}
// newMp4Muxer allocates a mp4Muxer.
func newMp4Muxer(sps []byte, pps []byte) (*mp4Muxer, error) {
return &mp4Muxer{
sps: sps,
pps: pps,
}, nil
}


@@ -1,173 +0,0 @@
package rtsp
import (
"bufio"
"context"
"log"
"os"
"time"
"github.com/asticode/go-astits"
"github.com/bluenviron/mediacommon/pkg/codecs/h264"
)
// mpegtsMuxer allows saving an H264 stream into an MPEG-TS file.
type mpegtsMuxer struct {
sps []byte
pps []byte
f *os.File
b *bufio.Writer
mux *astits.Muxer
dtsExtractor *h264.DTSExtractor
firstIDRReceived bool
startDTS time.Duration
}
// newMPEGTSMuxer allocates a mpegtsMuxer.
func newMPEGTSMuxer(sps []byte, pps []byte) (*mpegtsMuxer, error) {
f, err := os.Create("mystream.ts")
if err != nil {
return nil, err
}
b := bufio.NewWriter(f)
mux := astits.NewMuxer(context.Background(), b)
mux.AddElementaryStream(astits.PMTElementaryStream{
ElementaryPID: 256,
StreamType: astits.StreamTypeH264Video,
})
mux.SetPCRPID(256)
return &mpegtsMuxer{
sps: sps,
pps: pps,
f: f,
b: b,
mux: mux,
}, nil
}
// close closes all the mpegtsMuxer resources.
func (e *mpegtsMuxer) close() {
e.b.Flush()
e.f.Close()
}
// encode encodes a H264 access unit into MPEG-TS.
func (e *mpegtsMuxer) encode(au [][]byte, pts time.Duration) error {
// prepend an AUD. This is required by some players
filteredNALUs := [][]byte{
{byte(h264.NALUTypeAccessUnitDelimiter), 240},
}
nonIDRPresent := false
idrPresent := false
for _, nalu := range au {
typ := h264.NALUType(nalu[0] & 0x1F)
switch typ {
case h264.NALUTypeSPS:
e.sps = append([]byte(nil), nalu...)
continue
case h264.NALUTypePPS:
e.pps = append([]byte(nil), nalu...)
continue
case h264.NALUTypeAccessUnitDelimiter:
continue
case h264.NALUTypeIDR:
idrPresent = true
case h264.NALUTypeNonIDR:
nonIDRPresent = true
}
filteredNALUs = append(filteredNALUs, nalu)
}
au = filteredNALUs
if !nonIDRPresent && !idrPresent {
return nil
}
// add SPS and PPS before every group that contains an IDR
if idrPresent {
au = append([][]byte{e.sps, e.pps}, au...)
}
var dts time.Duration
if !e.firstIDRReceived {
// skip samples silently until we find one with an IDR
if !idrPresent {
return nil
}
e.firstIDRReceived = true
e.dtsExtractor = h264.NewDTSExtractor()
var err error
dts, err = e.dtsExtractor.Extract(au, pts)
if err != nil {
return err
}
e.startDTS = dts
dts = 0
pts -= e.startDTS
} else {
var err error
dts, err = e.dtsExtractor.Extract(au, pts)
if err != nil {
return err
}
dts -= e.startDTS
pts -= e.startDTS
}
oh := &astits.PESOptionalHeader{
MarkerBits: 2,
}
if dts == pts {
oh.PTSDTSIndicator = astits.PTSDTSIndicatorOnlyPTS
oh.PTS = &astits.ClockReference{Base: int64(pts.Seconds() * 90000)}
} else {
oh.PTSDTSIndicator = astits.PTSDTSIndicatorBothPresent
oh.DTS = &astits.ClockReference{Base: int64(dts.Seconds() * 90000)}
oh.PTS = &astits.ClockReference{Base: int64(pts.Seconds() * 90000)}
}
// encode into Annex-B
annexb, err := h264.AnnexBMarshal(au)
if err != nil {
return err
}
// write TS packet
_, err = e.mux.WriteData(&astits.MuxerData{
PID: 256,
AdaptationField: &astits.PacketAdaptationField{
RandomAccessIndicator: idrPresent,
},
PES: &astits.PESData{
Header: &astits.PESHeader{
OptionalHeader: oh,
StreamID: 224, // video
},
Data: annexb,
},
})
if err != nil {
return err
}
log.Println("wrote TS packet")
return nil
}


@@ -1,9 +1,12 @@
package utils
import (
"bufio"
"bytes"
"errors"
"fmt"
"image"
"image/jpeg"
"io/ioutil"
"math/rand"
"os"
@@ -15,10 +18,18 @@ import (
"strings"
"time"
"github.com/kerberos-io/agent/machinery/src/encryption"
"github.com/kerberos-io/agent/machinery/src/log"
"github.com/kerberos-io/agent/machinery/src/models"
"github.com/nfnt/resize"
)
// VERSION is the agent version. It defaults to "0.0.0" for local dev builds
// and is overridden at build time via:
// go build -ldflags "-X github.com/kerberos-io/agent/machinery/src/utils.VERSION=v1.2.3"
var VERSION = "0.0.0"
const letterBytes = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
// MaxUint8 - maximum value which can be held in an uint8
@@ -330,3 +341,96 @@ func PrintConfiguration(configuration *models.Configuration) {
}
log.Log.Info("Printing our configuration (config.json): " + configurationVariables)
}
func Decrypt(directoryOrFile string, symmetricKey []byte) {
// Check if file or directory
fileInfo, err := os.Stat(directoryOrFile)
if err != nil {
log.Log.Fatal(err.Error())
return
}
var files []string
if fileInfo.IsDir() {
// Create decrypted directory
err = os.MkdirAll(directoryOrFile+"/decrypted", 0755)
if err != nil {
log.Log.Fatal(err.Error())
return
}
dir, err := os.ReadDir(directoryOrFile)
if err != nil {
log.Log.Fatal(err.Error())
return
}
for _, file := range dir {
// Check if file is not a directory
if !file.IsDir() {
// Check if an mp4 file
if strings.HasSuffix(file.Name(), ".mp4") {
files = append(files, directoryOrFile+"/"+file.Name())
}
}
}
} else {
files = append(files, directoryOrFile)
}
// We'll loop over all files and decrypt them one by one.
for _, file := range files {
// Read file
content, err := os.ReadFile(file)
if err != nil {
log.Log.Fatal(err.Error())
return
}
// Decrypt using AES key
decrypted, err := encryption.AesDecrypt(content, string(symmetricKey))
if err != nil {
log.Log.Fatal("Something went wrong while decrypting: " + err.Error())
return
}
// Write decrypted content to file with appended .decrypted
// Get filename split by / and get last element.
fileParts := strings.Split(file, "/")
fileName := fileParts[len(fileParts)-1]
pathToFile := strings.Join(fileParts[:len(fileParts)-1], "/")
err = os.WriteFile(pathToFile+"/decrypted/"+fileName, []byte(decrypted), 0644)
if err != nil {
log.Log.Fatal(err.Error())
return
}
}
}
func ImageToBytes(img *image.Image) ([]byte, error) {
buffer := new(bytes.Buffer)
w := bufio.NewWriter(buffer)
err := jpeg.Encode(w, *img, &jpeg.Options{Quality: 35})
log.Log.Debug("ImageToBytes() - buffer size: " + strconv.Itoa(buffer.Len()))
return buffer.Bytes(), err
}
func ResizeImage(img image.Image, newWidth uint, newHeight uint) (*image.Image, error) {
if img == nil {
return nil, errors.New("image is nil")
}
// Resize using Lanczos resampling; pass 0 for one of the
// dimensions to preserve the aspect ratio
m := resize.Resize(newWidth, newHeight, img, resize.Lanczos3)
return &m, nil
}
func ResizeHeightWithAspectRatio(newWidth int, width int, height int) (int, int) {
if newWidth <= 0 || width <= 0 || height <= 0 {
return width, height
}
// Calculate the new height based on the aspect ratio
newHeight := (newWidth * height) / width
// Return the new dimensions
return newWidth, newHeight
}
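`ResizeHeightWithAspectRatio` above scales the height by `newWidth/width` using integer arithmetic. A worked example of the same computation:

```go
package main

import "fmt"

// resizeKeepAspect mirrors the arithmetic of ResizeHeightWithAspectRatio:
// scale height by newWidth/width using integer math, returning the input
// dimensions unchanged when any value is non-positive.
func resizeKeepAspect(newWidth, width, height int) (int, int) {
	if newWidth <= 0 || width <= 0 || height <= 0 {
		return width, height
	}
	return newWidth, (newWidth * height) / width
}

func main() {
	// Downscale 1920x1080 to a 640-wide thumbnail: 640 * 1080 / 1920 = 360.
	w, h := resizeKeepAspect(640, 1920, 1080)
	fmt.Println(w, h) // 640 360
}
```

Note that integer division truncates, so odd source dimensions can produce a height off by one from the exact ratio.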

machinery/src/video/mp4.go (new file, 1379 lines): diff suppressed because it is too large.


@@ -0,0 +1,176 @@
package video
import (
"fmt"
"os"
"testing"
mp4ff "github.com/Eyevinn/mp4ff/mp4"
"github.com/kerberos-io/agent/machinery/src/models"
)
// TestMP4Duration creates an MP4 file simulating a 6-second video recording
// and verifies that the durations in all boxes match the sum of sample durations.
func TestMP4Duration(t *testing.T) {
tmpFile := "/tmp/test_duration.mp4"
defer os.Remove(tmpFile)
// Minimal SPS for H.264 (baseline, 640x480) as a raw NALU without a start code
sps := []byte{0x67, 0x42, 0xc0, 0x1e, 0xd9, 0x00, 0xa0, 0x47, 0xfe, 0xc8}
pps := []byte{0x68, 0xce, 0x38, 0x80}
mp4Video := NewMP4(tmpFile, [][]byte{sps}, [][]byte{pps}, nil, 10)
mp4Video.SetWidth(640)
mp4Video.SetHeight(480)
videoTrack := mp4Video.AddVideoTrack("H264")
// Simulate 6 seconds at 25fps (150 frames, keyframe every 50 frames = 2s)
// PTS in milliseconds (timescale=1000)
frameDuration := uint64(40) // 40ms per frame = 25fps
numFrames := 150
gopSize := 50
// Create a fake Annex B NAL unit (keyframe IDR = type 5, non-keyframe = type 1)
makeFrame := func(isKey bool) []byte {
nalType := byte(0x01) // non-IDR slice
if isKey {
nalType = 0x65 // IDR slice
}
// Start code (4 bytes) + NAL header + some data
frame := []byte{0x00, 0x00, 0x00, 0x01, nalType}
// Add some padding data
for i := 0; i < 100; i++ {
frame = append(frame, byte(i))
}
return frame
}
var expectedDuration uint64
for i := 0; i < numFrames; i++ {
pts := uint64(i) * frameDuration
isKeyframe := i%gopSize == 0
err := mp4Video.AddSampleToTrack(videoTrack, isKeyframe, makeFrame(isKeyframe), pts)
if err != nil {
t.Fatalf("AddSampleToTrack failed at frame %d: %v", i, err)
}
}
expectedDuration = uint64(numFrames) * frameDuration // Should be 6000ms (150 * 40)
// Close with config that has signing key to avoid nil panics
config := &models.Config{
Signing: &models.Signing{
PrivateKey: "",
},
}
mp4Video.Close(config)
// Log what the code computed
t.Logf("VideoTotalDuration: %d ms", mp4Video.VideoTotalDuration)
t.Logf("Expected duration: %d ms", expectedDuration)
t.Logf("Segments: %d", len(mp4Video.SegmentDurations))
var sumSegDur uint64
for i, d := range mp4Video.SegmentDurations {
t.Logf(" Segment %d: duration=%d ms", i, d)
sumSegDur += d
}
t.Logf("Sum of segment durations: %d ms", sumSegDur)
// Now read back the file and inspect the boxes
f, err := os.Open(tmpFile)
if err != nil {
t.Fatalf("Failed to open output file: %v", err)
}
defer f.Close()
fi, err := f.Stat()
if err != nil {
t.Fatalf("Failed to stat output file: %v", err)
}
parsedFile, err := mp4ff.DecodeFile(f)
if err != nil {
t.Fatalf("Failed to decode MP4: %v", err)
}
t.Logf("File size: %d bytes", fi.Size())
// Check moov box
if parsedFile.Moov == nil {
t.Fatal("No moov box found")
}
// Check mvhd duration
mvhd := parsedFile.Moov.Mvhd
t.Logf("mvhd.Duration: %d (timescale=%d) = %.2f seconds", mvhd.Duration, mvhd.Timescale, float64(mvhd.Duration)/float64(mvhd.Timescale))
t.Logf("mvhd.Rate: 0x%08x", mvhd.Rate)
t.Logf("mvhd.Volume: 0x%04x", mvhd.Volume)
// Check each trak
for i, trak := range parsedFile.Moov.Traks {
t.Logf("Track %d:", i)
t.Logf(" tkhd.Duration: %d", trak.Tkhd.Duration)
t.Logf(" mdhd.Duration: %d (timescale=%d) = %.2f seconds", trak.Mdia.Mdhd.Duration, trak.Mdia.Mdhd.Timescale, float64(trak.Mdia.Mdhd.Duration)/float64(trak.Mdia.Mdhd.Timescale))
}
// Check mvex/mehd
if parsedFile.Moov.Mvex != nil && parsedFile.Moov.Mvex.Mehd != nil {
t.Logf("mehd.FragmentDuration: %d", parsedFile.Moov.Mvex.Mehd.FragmentDuration)
}
// Sum up actual sample durations from trun boxes in all segments
var actualTrunDuration uint64
var sampleCount int
for _, seg := range parsedFile.Segments {
for _, frag := range seg.Fragments {
for _, traf := range frag.Moof.Trafs {
// Only count video track (track 1)
if traf.Tfhd.TrackID == 1 {
for _, trun := range traf.Truns {
for _, s := range trun.Samples {
actualTrunDuration += uint64(s.Dur)
sampleCount++
}
}
}
}
}
}
t.Logf("Actual trun sample count: %d", sampleCount)
t.Logf("Actual trun total duration: %d ms", actualTrunDuration)
// Check sidx
if parsedFile.Sidx != nil {
var sidxDuration uint64
for _, ref := range parsedFile.Sidx.SidxRefs {
sidxDuration += uint64(ref.SubSegmentDuration)
}
t.Logf("sidx total duration: %d ms", sidxDuration)
}
// VERIFY: All duration values should be consistent
// The expected duration for 150 frames at 40ms each:
// - The sample-buffering pattern means the LAST sample uses LastVideoSampleDTS as duration
// - So all 150 samples should produce 150 * 40ms = 6000ms total
// But due to the pending sample pattern, the actual trun durations might differ
fmt.Println()
fmt.Println("=== DURATION CONSISTENCY CHECK ===")
fmt.Printf("Expected (150 * 40ms): %d ms\n", expectedDuration)
fmt.Printf("mvhd.Duration: %d ms\n", mvhd.Duration)
fmt.Printf("tkhd.Duration: %d ms\n", parsedFile.Moov.Traks[0].Tkhd.Duration)
fmt.Printf("mdhd.Duration: %d ms\n", parsedFile.Moov.Traks[0].Mdia.Mdhd.Duration)
fmt.Printf("Actual trun durations sum: %d ms\n", actualTrunDuration)
fmt.Printf("VideoTotalDuration: %d ms\n", mp4Video.VideoTotalDuration)
fmt.Printf("Sum of SegmentDurations: %d ms\n", sumSegDur)
fmt.Println()
// The key assertion: header duration must equal trun sum
if mvhd.Duration != actualTrunDuration {
t.Errorf("MISMATCH: mvhd.Duration (%d) != actual trun sum (%d), diff = %d ms",
mvhd.Duration, actualTrunDuration, int64(mvhd.Duration)-int64(actualTrunDuration))
}
if parsedFile.Moov.Traks[0].Mdia.Mdhd.Duration != 0 {
t.Errorf("MISMATCH: mdhd.Duration should be 0 for fragmented MP4, got %d",
parsedFile.Moov.Traks[0].Mdia.Mdhd.Duration)
}
}


@@ -1,35 +1,127 @@
package webrtc
import (
"context"
"encoding/base64"
"encoding/json"
"fmt"
"io"
"strconv"
"sync"
"sync/atomic"
"time"
//"github.com/izern/go-fdkaac/fdkaac"
"github.com/kerberos-io/agent/machinery/src/capture"
"github.com/kerberos-io/agent/machinery/src/log"
"github.com/kerberos-io/agent/machinery/src/models"
"github.com/kerberos-io/joy4/av/pubsub"
"github.com/kerberos-io/agent/machinery/src/packets"
mqtt "github.com/eclipse/paho.mqtt.golang"
av "github.com/kerberos-io/joy4/av"
"github.com/kerberos-io/joy4/cgo/ffmpeg"
h264parser "github.com/kerberos-io/joy4/codec/h264parser"
pionWebRTC "github.com/pion/webrtc/v3"
pionMedia "github.com/pion/webrtc/v3/pkg/media"
"github.com/pion/interceptor"
"github.com/pion/interceptor/pkg/intervalpli"
pionWebRTC "github.com/pion/webrtc/v4"
pionMedia "github.com/pion/webrtc/v4/pkg/media"
)
var (
CandidatesMutex sync.Mutex
CandidateArrays map[string](chan string)
peerConnectionCount int64
peerConnections map[string]*pionWebRTC.PeerConnection
//encoder *ffmpeg.VideoEncoder
const (
// Channel buffer sizes
candidateChannelBuffer = 100
rtcpBufferSize = 1500
// Timeouts and intervals
keepAliveTimeout = 15 * time.Second
defaultTimeout = 10 * time.Second
// Track identifiers
trackStreamID = "kerberos-stream"
)
// ConnectionManager manages WebRTC peer connections in a thread-safe manner
type ConnectionManager struct {
mu sync.RWMutex
candidateChannels map[string]chan string
peerConnections map[string]*peerConnectionWrapper
peerConnectionCount int64
}
// peerConnectionWrapper wraps a peer connection with additional metadata
type peerConnectionWrapper struct {
conn *pionWebRTC.PeerConnection
cancelCtx context.CancelFunc
done chan struct{}
closeOnce sync.Once
}
var globalConnectionManager = NewConnectionManager()
// NewConnectionManager creates a new connection manager
func NewConnectionManager() *ConnectionManager {
return &ConnectionManager{
candidateChannels: make(map[string]chan string),
peerConnections: make(map[string]*peerConnectionWrapper),
}
}
// GetOrCreateCandidateChannel gets or creates a candidate channel for a session
func (cm *ConnectionManager) GetOrCreateCandidateChannel(sessionKey string) chan string {
cm.mu.Lock()
defer cm.mu.Unlock()
if ch, exists := cm.candidateChannels[sessionKey]; exists {
return ch
}
ch := make(chan string, candidateChannelBuffer)
cm.candidateChannels[sessionKey] = ch
return ch
}
// CloseCandidateChannel safely closes and removes a candidate channel
func (cm *ConnectionManager) CloseCandidateChannel(sessionKey string) {
cm.mu.Lock()
defer cm.mu.Unlock()
if ch, exists := cm.candidateChannels[sessionKey]; exists {
close(ch)
delete(cm.candidateChannels, sessionKey)
}
}
// AddPeerConnection adds a peer connection to the manager
func (cm *ConnectionManager) AddPeerConnection(sessionID string, wrapper *peerConnectionWrapper) {
cm.mu.Lock()
defer cm.mu.Unlock()
cm.peerConnections[sessionID] = wrapper
}
// RemovePeerConnection removes a peer connection from the manager
func (cm *ConnectionManager) RemovePeerConnection(sessionID string) {
cm.mu.Lock()
defer cm.mu.Unlock()
if wrapper, exists := cm.peerConnections[sessionID]; exists {
if wrapper.cancelCtx != nil {
wrapper.cancelCtx()
}
delete(cm.peerConnections, sessionID)
}
}
// GetPeerConnectionCount returns the current count of active peer connections
func (cm *ConnectionManager) GetPeerConnectionCount() int64 {
return atomic.LoadInt64(&cm.peerConnectionCount)
}
// IncrementPeerCount atomically increments the peer connection count
func (cm *ConnectionManager) IncrementPeerCount() int64 {
return atomic.AddInt64(&cm.peerConnectionCount, 1)
}
// DecrementPeerCount atomically decrements the peer connection count
func (cm *ConnectionManager) DecrementPeerCount() int64 {
return atomic.AddInt64(&cm.peerConnectionCount, -1)
}
type WebRTC struct {
Name string
StunServers []string
@@ -40,24 +132,6 @@ type WebRTC struct {
PacketsCount chan int
}
// No longer used, is for transcoding, might comeback on this!
/*func init() {
// Encoder is created for once and for all.
var err error
encoder, err = ffmpeg.NewVideoEncoderByCodecType(av.H264)
if err != nil {
return
}
if encoder == nil {
err = fmt.Errorf("Video encoder not found")
return
}
encoder.SetFramerate(30, 1)
encoder.SetPixelFormat(av.I420)
encoder.SetBitrate(1000000) // 1MB
encoder.SetGopSize(30 / 1) // 1s
}*/
func CreateWebRTC(name string, stunServers []string, turnServers []string, turnServersUsername string, turnServersCredential string) *WebRTC {
return &WebRTC{
Name: name,
@@ -65,15 +139,14 @@ func CreateWebRTC(name string, stunServers []string, turnServers []string, turnS
TurnServers: turnServers,
TurnServersUsername: turnServersUsername,
TurnServersCredential: turnServersCredential,
Timer: time.NewTimer(time.Second * 10),
PacketsCount: make(chan int),
Timer: time.NewTimer(defaultTimeout),
}
}
func (w WebRTC) DecodeSessionDescription(data string) ([]byte, error) {
sd, err := base64.StdEncoding.DecodeString(data)
if err != nil {
log.Log.Error("DecodeString error: " + err.Error())
log.Log.Error("webrtc.main.DecodeSessionDescription(): " + err.Error())
return []byte{}, err
}
return sd, nil
@@ -87,28 +160,88 @@ func (w WebRTC) CreateOffer(sd []byte) pionWebRTC.SessionDescription {
return offer
}
func InitializeWebRTCConnection(configuration *models.Configuration, communication *models.Communication, mqttClient mqtt.Client, videoTrack *pionWebRTC.TrackLocalStaticSample, audioTrack *pionWebRTC.TrackLocalStaticSample, handshake models.SDPPayload, candidates chan string) {
func RegisterCandidates(key string, candidate models.ReceiveHDCandidatesPayload) {
ch := globalConnectionManager.GetOrCreateCandidateChannel(key)
log.Log.Info("webrtc.main.RegisterCandidates(): " + candidate.Candidate)
select {
case ch <- candidate.Candidate:
default:
log.Log.Info("webrtc.main.RegisterCandidates(): channel is full, dropping candidate")
}
}
func RegisterDefaultInterceptors(mediaEngine *pionWebRTC.MediaEngine, interceptorRegistry *interceptor.Registry) error {
if err := pionWebRTC.ConfigureNack(mediaEngine, interceptorRegistry); err != nil {
return err
}
if err := pionWebRTC.ConfigureRTCPReports(interceptorRegistry); err != nil {
return err
}
if err := pionWebRTC.ConfigureSimulcastExtensionHeaders(mediaEngine); err != nil {
return err
}
return nil
}
func InitializeWebRTCConnection(configuration *models.Configuration, communication *models.Communication, mqttClient mqtt.Client, videoTrack *pionWebRTC.TrackLocalStaticSample, audioTrack *pionWebRTC.TrackLocalStaticSample, handshake models.RequestHDStreamPayload) {
config := configuration.Config
deviceKey := config.Key
stunServers := []string{config.STUNURI}
turnServers := []string{config.TURNURI}
turnServersUsername := config.TURNUsername
turnServersCredential := config.TURNPassword
// We create a channel which will hold the candidates for this session.
sessionKey := config.Key + "/" + handshake.SessionID
candidateChannel := globalConnectionManager.GetOrCreateCandidateChannel(sessionKey)
// Set variables
hubKey := handshake.HubKey
sessionDescription := handshake.SessionDescription
// Create WebRTC object
w := CreateWebRTC(deviceKey, stunServers, turnServers, turnServersUsername, turnServersCredential)
sd, err := w.DecodeSessionDescription(handshake.Sdp)
sd, err := w.DecodeSessionDescription(sessionDescription)
if err == nil {
mediaEngine := &pionWebRTC.MediaEngine{}
if err := mediaEngine.RegisterDefaultCodecs(); err != nil {
log.Log.Error("InitializeWebRTCConnection: something went wrong registering codecs.")
log.Log.Error("webrtc.main.InitializeWebRTCConnection(): something went wrong registering codecs for media engine: " + err.Error())
}
api := pionWebRTC.NewAPI(pionWebRTC.WithMediaEngine(mediaEngine))
// Create an InterceptorRegistry. This is the user configurable RTP/RTCP Pipeline.
// This provides NACKs, RTCP Reports and other features. If you use `webrtc.NewPeerConnection`
// this is enabled by default. If you are managing the API manually, you MUST create an
// InterceptorRegistry for each PeerConnection.
interceptorRegistry := &interceptor.Registry{}
// Use the default set of Interceptors
if err := pionWebRTC.RegisterDefaultInterceptors(mediaEngine, interceptorRegistry); err != nil {
panic(err)
}
// Register an intervalpli factory.
// This interceptor sends a PLI every 3 seconds. A PLI causes a video keyframe to be generated by the sender.
// This makes our video seekable and more error resilient, but at the cost of lower picture quality and higher bitrates.
// A real world application should process incoming RTCP packets from viewers and forward them to senders.
intervalPliFactory, err := intervalpli.NewReceiverInterceptor()
if err != nil {
panic(err)
}
interceptorRegistry.Add(intervalPliFactory)
api := pionWebRTC.NewAPI(
pionWebRTC.WithMediaEngine(mediaEngine),
pionWebRTC.WithInterceptorRegistry(interceptorRegistry),
)
policy := pionWebRTC.ICETransportPolicyAll
if config.ForceTurn == "true" {
policy = pionWebRTC.ICETransportPolicyRelay
}
peerConnection, err := api.NewPeerConnection(
pionWebRTC.Configuration{
@@ -122,283 +255,512 @@ func InitializeWebRTCConnection(configuration *models.Configuration, communicati
Credential: w.TurnServersCredential,
},
},
//ICETransportPolicy: pionWebRTC.ICETransportPolicyRelay,
ICETransportPolicy: policy,
},
)
if err == nil && peerConnection != nil {
if _, err = peerConnection.AddTrack(videoTrack); err != nil {
panic(err)
// Create context for this connection
ctx, cancel := context.WithCancel(context.Background())
wrapper := &peerConnectionWrapper{
conn: peerConnection,
cancelCtx: cancel,
done: make(chan struct{}),
}
if _, err = peerConnection.AddTrack(audioTrack); err != nil {
panic(err)
var videoSender *pionWebRTC.RTPSender = nil
if videoTrack != nil {
if videoSender, err = peerConnection.AddTrack(videoTrack); err != nil {
log.Log.Error("webrtc.main.InitializeWebRTCConnection(): error adding video track: " + err.Error())
cancel()
return
}
} else {
log.Log.Info("webrtc.main.InitializeWebRTCConnection(): video track is nil, skipping video")
}
if err != nil {
panic(err)
}
peerConnection.OnICEConnectionStateChange(func(connectionState pionWebRTC.ICEConnectionState) {
if connectionState == pionWebRTC.ICEConnectionStateDisconnected {
atomic.AddInt64(&peerConnectionCount, -1)
peerConnections[handshake.Cuuid] = nil
close(candidates)
close(w.PacketsCount)
if err := peerConnection.Close(); err != nil {
panic(err)
// Read incoming RTCP packets
// Before these packets are returned they are processed by interceptors. For things
// like NACK this needs to be called.
if videoSender != nil {
go func() {
defer func() {
log.Log.Info("webrtc.main.InitializeWebRTCConnection(): video RTCP reader stopped")
}()
rtcpBuf := make([]byte, rtcpBufferSize)
for {
select {
case <-ctx.Done():
return
default:
if _, _, rtcpErr := videoSender.Read(rtcpBuf); rtcpErr != nil {
return
}
}
}
} else if connectionState == pionWebRTC.ICEConnectionStateConnected {
atomic.AddInt64(&peerConnectionCount, 1)
} else if connectionState == pionWebRTC.ICEConnectionStateChecking {
for candidate := range candidates {
log.Log.Info("InitializeWebRTCConnection: Received candidate.")
if candidateErr := peerConnection.AddICECandidate(pionWebRTC.ICECandidateInit{Candidate: string(candidate)}); candidateErr != nil {
}()
}
var audioSender *pionWebRTC.RTPSender = nil
if audioTrack != nil {
if audioSender, err = peerConnection.AddTrack(audioTrack); err != nil {
log.Log.Error("webrtc.main.InitializeWebRTCConnection(): error adding audio track: " + err.Error())
cancel()
return
}
} else {
log.Log.Info("webrtc.main.InitializeWebRTCConnection(): audio track is nil, skipping audio")
}
// Read incoming RTCP packets
// Before these packets are returned they are processed by interceptors. For things
// like NACK this needs to be called.
if audioSender != nil {
go func() {
defer func() {
log.Log.Info("webrtc.main.InitializeWebRTCConnection(): audio RTCP reader stopped")
}()
rtcpBuf := make([]byte, rtcpBufferSize)
for {
select {
case <-ctx.Done():
return
default:
if _, _, rtcpErr := audioSender.Read(rtcpBuf); rtcpErr != nil {
return
}
}
}
}()
}
peerConnection.OnConnectionStateChange(func(connectionState pionWebRTC.PeerConnectionState) {
log.Log.Info("webrtc.main.InitializeWebRTCConnection(): connection state changed to: " + connectionState.String())
switch connectionState {
case pionWebRTC.PeerConnectionStateDisconnected, pionWebRTC.PeerConnectionStateClosed:
wrapper.closeOnce.Do(func() {
count := globalConnectionManager.DecrementPeerCount()
log.Log.Info("webrtc.main.InitializeWebRTCConnection(): Peer disconnected. Active peers: " + strconv.FormatInt(count, 10))
// Clean up resources
globalConnectionManager.CloseCandidateChannel(sessionKey)
if err := peerConnection.Close(); err != nil {
log.Log.Error("webrtc.main.InitializeWebRTCConnection(): error closing peer connection: " + err.Error())
}
globalConnectionManager.RemovePeerConnection(handshake.SessionID)
close(wrapper.done)
})
case pionWebRTC.PeerConnectionStateConnected:
count := globalConnectionManager.IncrementPeerCount()
log.Log.Info("webrtc.main.InitializeWebRTCConnection(): Peer connected. Active peers: " + strconv.FormatInt(count, 10))
case pionWebRTC.PeerConnectionStateFailed:
log.Log.Info("webrtc.main.InitializeWebRTCConnection(): ICE connection failed")
}
})
go func() {
defer func() {
log.Log.Info("webrtc.main.InitializeWebRTCConnection(): candidate processor stopped for session: " + handshake.SessionID)
}()
// Iterate over the candidates and send them to the remote client
for {
select {
case <-ctx.Done():
return
case candidate, ok := <-candidateChannel:
if !ok {
return
}
log.Log.Info("webrtc.main.InitializeWebRTCConnection(): Received candidate from channel: " + candidate)
if candidateErr := peerConnection.AddICECandidate(pionWebRTC.ICECandidateInit{Candidate: candidate}); candidateErr != nil {
log.Log.Error("webrtc.main.InitializeWebRTCConnection(): error adding candidate: " + candidateErr.Error())
}
}
}
log.Log.Info("InitializeWebRTCConnection: connection state changed to: " + connectionState.String())
log.Log.Info("InitializeWebRTCConnection: Number of peers connected (" + strconv.FormatInt(peerConnectionCount, 10) + ")")
})
}()
offer := w.CreateOffer(sd)
if err = peerConnection.SetRemoteDescription(offer); err != nil {
panic(err)
log.Log.Error("webrtc.main.InitializeWebRTCConnection(): something went wrong while setting remote description: " + err.Error())
}
//gatherCompletePromise := pionWebRTC.GatheringCompletePromise(peerConnection)
answer, err := peerConnection.CreateAnswer(nil)
if err != nil {
panic(err)
log.Log.Error("webrtc.main.InitializeWebRTCConnection(): something went wrong while creating answer: " + err.Error())
} else if err = peerConnection.SetLocalDescription(answer); err != nil {
panic(err)
log.Log.Error("webrtc.main.InitializeWebRTCConnection(): something went wrong while setting local description: " + err.Error())
}
// When an ICE candidate is available send to the other Pion instance
// the other Pion instance will add this candidate by calling AddICECandidate
var candidatesMux sync.Mutex
// When an ICE candidate is available send to the other peer using the signaling server (MQTT).
// The other peer will add this candidate by calling AddICECandidate
var hasRelayCandidates bool
peerConnection.OnICECandidate(func(candidate *pionWebRTC.ICECandidate) {
if candidate == nil {
log.Log.Info("webrtc.main.InitializeWebRTCConnection(): ICE gathering complete (candidate is nil)")
if !hasRelayCandidates {
log.Log.Error("webrtc.main.InitializeWebRTCConnection(): WARNING - No TURN (relay) candidates were gathered! TURN servers: " +
config.TURNURI + ", Username: " + config.TURNUsername + ", ForceTurn: " + config.ForceTurn)
}
return
}
candidatesMux.Lock()
defer candidatesMux.Unlock()
// Log candidate details for debugging
candidateJSON := candidate.ToJSON()
candidateStr := candidateJSON.Candidate
topic := fmt.Sprintf("%s/%s/candidate/edge", deviceKey, handshake.Cuuid)
log.Log.Info("InitializeWebRTCConnection: Send candidate to " + topic)
candiInit := candidate.ToJSON()
sdpmid := "0"
candiInit.SDPMid = &sdpmid
candi, err := json.Marshal(candiInit)
// Determine candidate type from the candidate string
candidateType := "unknown"
if candidateJSON.Candidate != "" {
switch candidate.Typ {
case pionWebRTC.ICECandidateTypeRelay:
candidateType = "relay"
case pionWebRTC.ICECandidateTypeSrflx:
candidateType = "srflx"
case pionWebRTC.ICECandidateTypeHost:
candidateType = "host"
case pionWebRTC.ICECandidateTypePrflx:
candidateType = "prflx"
}
}
// Track if we received any relay (TURN) candidates
if candidateType == "relay" {
hasRelayCandidates = true
}
log.Log.Info("webrtc.main.InitializeWebRTCConnection(): ICE candidate received - Type: " + candidateType +
", Candidate: " + candidateStr)
// Create a config map
valueMap := make(map[string]interface{})
candidateBinary, err := json.Marshal(candidateJSON)
if err == nil {
log.Log.Info("InitializeWebRTCConnection:" + string(candi))
token := mqttClient.Publish(topic, 2, false, candi)
valueMap["candidate"] = string(candidateBinary)
// The SDP does not need to be sent.
//valueMap["sdp"] = []byte(base64.StdEncoding.EncodeToString([]byte(answer.SDP)))
valueMap["session_id"] = handshake.SessionID
log.Log.Info("webrtc.main.InitializeWebRTCConnection(): sending " + candidateType + " candidate to hub")
} else {
log.Log.Error("webrtc.main.InitializeWebRTCConnection(): failed to marshal candidate: " + err.Error())
}
// We'll send the candidate to the hub
message := models.Message{
Payload: models.Payload{
Action: "receive-hd-candidates",
DeviceId: configuration.Config.Key,
Value: valueMap,
},
}
payload, err := models.PackageMQTTMessage(configuration, message)
if err == nil {
token := mqttClient.Publish("kerberos/hub/"+hubKey, 2, false, payload)
token.Wait()
} else {
log.Log.Info("webrtc.main.InitializeWebRTCConnection(): while packaging mqtt message: " + err.Error())
}
})
peerConnections[handshake.Cuuid] = peerConnection
// Store peer connection in manager
globalConnectionManager.AddPeerConnection(handshake.SessionID, wrapper)
if err == nil {
topic := fmt.Sprintf("%s/%s/answer", deviceKey, handshake.Cuuid)
log.Log.Info("InitializeWebRTCConnection: Send SDP answer to " + topic)
mqttClient.Publish(topic, 2, false, []byte(base64.StdEncoding.EncodeToString([]byte(answer.SDP))))
// Create a config map
valueMap := make(map[string]interface{})
valueMap["sdp"] = []byte(base64.StdEncoding.EncodeToString([]byte(answer.SDP)))
valueMap["session_id"] = handshake.SessionID
log.Log.Info("webrtc.main.InitializeWebRTCConnection(): Send SDP answer")
// We'll send the candidate to the hub
message := models.Message{
Payload: models.Payload{
Action: "receive-hd-answer",
DeviceId: configuration.Config.Key,
Value: valueMap,
},
}
payload, err := models.PackageMQTTMessage(configuration, message)
if err == nil {
token := mqttClient.Publish("kerberos/hub/"+hubKey, 2, false, payload)
token.Wait()
} else {
log.Log.Info("webrtc.main.InitializeWebRTCConnection(): while packaging mqtt message: " + err.Error())
}
}
}
} else {
log.Log.Error("InitializeWebRTCConnection: NewPeerConnection failed: " + err.Error())
log.Log.Error("webrtc.main.InitializeWebRTCConnection(): NewPeerConnection failed: " + err.Error())
}
}
func NewVideoTrack(codecs []av.CodecData) *pionWebRTC.TrackLocalStaticSample {
var mimeType string
mimeType = pionWebRTC.MimeTypeH264
outboundVideoTrack, _ := pionWebRTC.NewTrackLocalStaticSample(pionWebRTC.RTPCodecCapability{MimeType: mimeType}, "video", "pion124")
func NewVideoTrack(streams []packets.Stream) *pionWebRTC.TrackLocalStaticSample {
mimeType := pionWebRTC.MimeTypeH264
outboundVideoTrack, err := pionWebRTC.NewTrackLocalStaticSample(pionWebRTC.RTPCodecCapability{MimeType: mimeType}, "video", trackStreamID)
if err != nil {
log.Log.Error("webrtc.main.NewVideoTrack(): error creating video track: " + err.Error())
return nil
}
return outboundVideoTrack
}
func NewAudioTrack(codecs []av.CodecData) *pionWebRTC.TrackLocalStaticSample {
func NewAudioTrack(streams []packets.Stream) *pionWebRTC.TrackLocalStaticSample {
var mimeType string
for _, codec := range codecs {
if codec.Type().String() == "OPUS" {
for _, stream := range streams {
if stream.Name == "OPUS" {
mimeType = pionWebRTC.MimeTypeOpus
} else if codec.Type().String() == "PCM_MULAW" {
} else if stream.Name == "PCM_MULAW" {
mimeType = pionWebRTC.MimeTypePCMU
} else if codec.Type().String() == "PCM_ALAW" {
} else if stream.Name == "PCM_ALAW" {
mimeType = pionWebRTC.MimeTypePCMA
}
}
outboundAudioTrack, _ := pionWebRTC.NewTrackLocalStaticSample(pionWebRTC.RTPCodecCapability{MimeType: mimeType}, "audio", "pion124")
if mimeType == "" {
log.Log.Error("webrtc.main.NewAudioTrack(): no supported audio codec found")
return nil
}
outboundAudioTrack, err := pionWebRTC.NewTrackLocalStaticSample(pionWebRTC.RTPCodecCapability{MimeType: mimeType}, "audio", trackStreamID)
if err != nil {
log.Log.Error("webrtc.main.NewAudioTrack(): error creating audio track: " + err.Error())
return nil
}
return outboundAudioTrack
}
func WriteToTrack(livestreamCursor *pubsub.QueueCursor, configuration *models.Configuration, communication *models.Communication, mqttClient mqtt.Client, videoTrack *pionWebRTC.TrackLocalStaticSample, audioTrack *pionWebRTC.TrackLocalStaticSample, codecs []av.CodecData, decoder *ffmpeg.VideoDecoder, decoderMutex *sync.Mutex) {
// streamState holds state information for the streaming process
type streamState struct {
lastKeepAlive int64
peerCount int64
start bool
receivedKeyFrame bool
lastAudioSample *pionMedia.Sample
lastVideoSample *pionMedia.Sample
}
// codecSupport tracks which codecs are available in the stream
type codecSupport struct {
hasH264 bool
hasPCM_MULAW bool
hasAAC bool
hasOpus bool
}
// detectCodecs examines the stream to determine which codecs are available
func detectCodecs(rtspClient capture.RTSPClient) codecSupport {
support := codecSupport{}
streams, _ := rtspClient.GetStreams()
for _, stream := range streams {
switch stream.Name {
case "H264":
support.hasH264 = true
case "PCM_MULAW":
support.hasPCM_MULAW = true
case "AAC":
support.hasAAC = true
case "OPUS":
support.hasOpus = true
}
}
return support
}
// hasValidCodecs checks if at least one valid video or audio codec is present
func (cs codecSupport) hasValidCodecs() bool {
hasVideo := cs.hasH264
hasAudio := cs.hasPCM_MULAW || cs.hasAAC || cs.hasOpus
return hasVideo || hasAudio
}
// shouldContinueStreaming determines if streaming should continue based on keepalive and peer count
func shouldContinueStreaming(config models.Config, state *streamState) bool {
if config.Capture.ForwardWebRTC != "true" {
return true
}
now := time.Now().Unix()
hasTimedOut := (now - state.lastKeepAlive) > int64(keepAliveTimeout.Seconds())
hasNoPeers := state.peerCount == 0
return !hasTimedOut && !hasNoPeers
}
// updateStreamState updates keepalive and peer count from communication channels
func updateStreamState(communication *models.Communication, state *streamState) {
select {
case keepAliveStr := <-communication.HandleLiveHDKeepalive:
if val, err := strconv.ParseInt(keepAliveStr, 10, 64); err == nil {
state.lastKeepAlive = val
}
default:
}
select {
case peerCountStr := <-communication.HandleLiveHDPeers:
if val, err := strconv.ParseInt(peerCountStr, 10, 64); err == nil {
state.peerCount = val
}
default:
}
}
// writeFinalSamples writes any remaining buffered samples
func writeFinalSamples(state *streamState, videoTrack, audioTrack *pionWebRTC.TrackLocalStaticSample) {
if state.lastVideoSample != nil && videoTrack != nil {
if err := videoTrack.WriteSample(*state.lastVideoSample); err != nil && err != io.ErrClosedPipe {
log.Log.Error("webrtc.main.writeFinalSamples(): error writing final video sample: " + err.Error())
}
}
if state.lastAudioSample != nil && audioTrack != nil {
if err := audioTrack.WriteSample(*state.lastAudioSample); err != nil && err != io.ErrClosedPipe {
log.Log.Error("webrtc.main.writeFinalSamples(): error writing final audio sample: " + err.Error())
}
}
}
// processVideoPacket processes a video packet and writes samples to the track
func processVideoPacket(pkt packets.Packet, state *streamState, videoTrack *pionWebRTC.TrackLocalStaticSample, config models.Config) {
if videoTrack == nil {
return
}
// Start at the first keyframe
if pkt.IsKeyFrame {
state.start = true
}
if !state.start {
return
}
sample := pionMedia.Sample{Data: pkt.Data, PacketTimestamp: uint32(pkt.Time)}
if config.Capture.ForwardWebRTC == "true" {
// Remote forwarding not yet implemented
log.Log.Debug("webrtc.main.processVideoPacket(): remote forwarding not implemented")
return
}
if state.lastVideoSample != nil {
duration := sample.PacketTimestamp - state.lastVideoSample.PacketTimestamp
state.lastVideoSample.Duration = time.Duration(duration) * time.Millisecond
if err := videoTrack.WriteSample(*state.lastVideoSample); err != nil && err != io.ErrClosedPipe {
log.Log.Error("webrtc.main.processVideoPacket(): error writing video sample: " + err.Error())
}
}
state.lastVideoSample = &sample
}
// processAudioPacket processes an audio packet and writes samples to the track
func processAudioPacket(pkt packets.Packet, state *streamState, audioTrack *pionWebRTC.TrackLocalStaticSample, hasAAC bool) {
if audioTrack == nil {
return
}
if hasAAC {
// AAC transcoding not yet implemented
// TODO: Implement AAC to PCM_MULAW transcoding
return
}
sample := pionMedia.Sample{Data: pkt.Data, PacketTimestamp: uint32(pkt.Time)}
if state.lastAudioSample != nil {
duration := sample.PacketTimestamp - state.lastAudioSample.PacketTimestamp
state.lastAudioSample.Duration = time.Duration(duration) * time.Millisecond
if err := audioTrack.WriteSample(*state.lastAudioSample); err != nil && err != io.ErrClosedPipe {
log.Log.Error("webrtc.main.processAudioPacket(): error writing audio sample: " + err.Error())
}
}
state.lastAudioSample = &sample
}
func WriteToTrack(livestreamCursor *packets.QueueCursor, configuration *models.Configuration, communication *models.Communication, mqttClient mqtt.Client, videoTrack *pionWebRTC.TrackLocalStaticSample, audioTrack *pionWebRTC.TrackLocalStaticSample, rtspClient capture.RTSPClient) {
config := configuration.Config
// Make peerconnection map
peerConnections = make(map[string]*pionWebRTC.PeerConnection)
// Set the indexes for the video & audio streams
// Later when we read a packet we need to figure out which track to send it to.
videoIdx := -1
audioIdx := -1
for i, codec := range codecs {
if codec.Type().String() == "H264" && videoIdx < 0 {
videoIdx = i
} else if (codec.Type().String() == "OPUS" || codec.Type().String() == "PCM_MULAW" || codec.Type().String() == "PCM_ALAW") && audioIdx < 0 {
audioIdx = i
}
// Check if at least one track is available
if videoTrack == nil && audioTrack == nil {
log.Log.Error("webrtc.main.WriteToTrack(): both video and audio tracks are nil, cannot proceed")
return
}
if videoIdx == -1 {
log.Log.Error("WriteToTrack: no video codec found.")
} else {
annexbNALUStartCode := func() []byte { return []byte{0x00, 0x00, 0x00, 0x01} }
// Detect available codecs
codecs := detectCodecs(rtspClient)
if config.Capture.TranscodingWebRTC == "true" {
if videoIdx > -1 {
log.Log.Info("WriteToTrack: successfully using a transcoder.")
if !codecs.hasValidCodecs() {
log.Log.Error("webrtc.main.WriteToTrack(): no valid video or audio codec found")
return
}
if config.Capture.TranscodingWebRTC == "true" {
log.Log.Info("webrtc.main.WriteToTrack(): transcoding enabled but not yet implemented")
}
// Initialize streaming state
state := &streamState{
lastKeepAlive: time.Now().Unix(),
peerCount: 0,
}
defer func() {
writeFinalSamples(state, videoTrack, audioTrack)
log.Log.Info("webrtc.main.WriteToTrack(): stopped writing to track")
}()
var pkt packets.Packet
var cursorError error
for cursorError == nil {
pkt, cursorError = livestreamCursor.ReadPacket()
if cursorError != nil {
break
}
// Update state from communication channels
updateStreamState(communication, state)
// Check if we should continue streaming
if !shouldContinueStreaming(config, state) {
state.start = false
state.receivedKeyFrame = false
continue
}
// Skip empty packets
if len(pkt.Data) == 0 {
state.receivedKeyFrame = false
continue
}
// Wait for first keyframe before processing
if !state.receivedKeyFrame {
if pkt.IsKeyFrame {
state.receivedKeyFrame = true
} else {
continue
}
} else {
log.Log.Info("WriteToTrack: not using a transcoder.")
}
var cursorError error
var pkt av.Packet
var previousTime time.Duration
start := false
receivedKeyFrame := false
codecData := codecs[videoIdx]
lastKeepAlive := "0"
peerCount := "0"
for cursorError == nil {
pkt, cursorError = livestreamCursor.ReadPacket()
bufferDuration := pkt.Time - previousTime
previousTime = pkt.Time
if config.Capture.ForwardWebRTC != "true" && peerConnectionCount == 0 {
start = false
receivedKeyFrame = false
continue
}
select {
case lastKeepAlive = <-communication.HandleLiveHDKeepalive:
default:
}
select {
case peerCount = <-communication.HandleLiveHDPeers:
default:
}
now := time.Now().Unix()
lastKeepAliveN, _ := strconv.ParseInt(lastKeepAlive, 10, 64)
hasTimedOut := (now - lastKeepAliveN) > 15 // no response for longer than 15 sec.
hasNoPeers := peerCount == "0"
if config.Capture.ForwardWebRTC == "true" && (hasTimedOut || hasNoPeers) {
start = false
receivedKeyFrame = false
continue
}
if len(pkt.Data) == 0 || pkt.Data == nil {
receivedKeyFrame = false
continue
}
if !receivedKeyFrame {
if pkt.IsKeyFrame {
receivedKeyFrame = true
} else {
continue
}
}
if config.Capture.TranscodingWebRTC == "true" {
/*decoderMutex.Lock()
decoder.SetFramerate(30, 1)
frame, err := decoder.Decode(pkt.Data)
decoderMutex.Unlock()
if err == nil && frame != nil && frame.Width() > 0 && frame.Height() > 0 {
var _outpkts []av.Packet
transcodingResolution := config.Capture.TranscodingResolution
newWidth := frame.Width() * int(transcodingResolution) / 100
newHeight := frame.Height() * int(transcodingResolution) / 100
encoder.SetResolution(newWidth, newHeight)
if _outpkts, err = encoder.Encode(frame); err != nil {
}
if len(_outpkts) > 0 {
pkt = _outpkts[0]
codecData, _ = encoder.CodecData()
}
}*/
}
switch int(pkt.Idx) {
case videoIdx:
// For every key-frame pre-pend the SPS and PPS
pkt.Data = pkt.Data[4:]
if pkt.IsKeyFrame {
start = true
pkt.Data = append(annexbNALUStartCode(), pkt.Data...)
pkt.Data = append(codecData.(h264parser.CodecData).PPS(), pkt.Data...)
pkt.Data = append(annexbNALUStartCode(), pkt.Data...)
pkt.Data = append(codecData.(h264parser.CodecData).SPS(), pkt.Data...)
pkt.Data = append(annexbNALUStartCode(), pkt.Data...)
log.Log.Info("WriteToTrack: Sending keyframe")
if config.Capture.ForwardWebRTC == "true" {
log.Log.Info("WriteToTrack: Sending keepalive to remote broker.")
topic := fmt.Sprintf("kerberos/webrtc/keepalive/%s", config.Key)
mqttClient.Publish(topic, 2, false, "1")
}
}
if start {
sample := pionMedia.Sample{Data: pkt.Data, Duration: bufferDuration}
if config.Capture.ForwardWebRTC == "true" {
samplePacket, err := json.Marshal(sample)
if err == nil {
// Write packets
topic := fmt.Sprintf("kerberos/webrtc/packets/%s", config.Key)
mqttClient.Publish(topic, 0, false, samplePacket)
} else {
log.Log.Info("WriteToTrack: Error marshalling frame, " + err.Error())
}
} else {
if err := videoTrack.WriteSample(sample); err != nil && err != io.ErrClosedPipe {
log.Log.Error("WriteToTrack: something went wrong while writing sample: " + err.Error())
}
}
}
case audioIdx:
// We will send the audio
sample := pionMedia.Sample{Data: pkt.Data, Duration: pkt.Time}
if err := audioTrack.WriteSample(sample); err != nil && err != io.ErrClosedPipe {
log.Log.Error("WriteToTrack: something went wrong while writing sample: " + err.Error())
}
}
// Process video or audio packets
if pkt.IsVideo {
processVideoPacket(pkt, state, videoTrack, config)
} else if pkt.IsAudio {
processAudioPacket(pkt, state, audioTrack, codecs.hasAAC)
}
}
for _, p := range peerConnections {
if p != nil {
p.Close()
}
}
peerConnectionCount = 0
log.Log.Info("WriteToTrack: stop writing to track.")
}

machinery/update-mod.sh Executable file
View File

@@ -0,0 +1,4 @@
export GOSUMDB=off
rm -rf go.*
go mod init github.com/kerberos-io/agent/machinery
go mod tidy

View File

@@ -1,6 +0,0 @@
#!/bin/sh -e
cp -R $SNAP/data $SNAP_COMMON/
cp -R $SNAP/www $SNAP_COMMON/
cp -R $SNAP/version $SNAP_COMMON/
cp -R $SNAP/mp4fragment $SNAP_COMMON/

View File

@@ -1,23 +0,0 @@
name: kerberosio # you probably want to 'snapcraft register <name>'
base: core22 # the base snap is the execution environment for this snap
version: '3.0.0' # just for humans, typically '1.2+git' or '1.3.2'
summary: A stand-alone open source video surveillance system # 79 char long summary
description: |
Kerberos Agent is an isolated and scalable video (surveillance) management
agent made available as Open Source under the MIT License. This means that
all the source code is available for you or your company, and you can use,
transform and distribute the source code; as long you keep a reference of
the original license. Kerberos Agent can be used for commercial usage.
grade: stable # stable # must be 'stable' to release into candidate/stable channels
confinement: strict # use 'strict' once you have the right plugs and slots
environment:
GIN_MODE: release
apps:
agent:
command: main -config /var/snap/kerberosio/common
plugs: [ network, network-bind ]
parts:
agent:
source: . #https://github.com/kerberos-io/agent/releases/download/21c0e01/agent-amd64.tar
plugin: dump

View File

@@ -25,6 +25,7 @@
"jsx-a11y/media-has-caption": "off",
"jsx-a11y/anchor-is-valid": "off",
"jsx-a11y/click-events-have-key-events": "off",
"jsx-a11y/control-has-associated-label": "off",
"jsx-a11y/no-noninteractive-element-interactions": "off",
"jsx-a11y/no-static-element-interactions": "off",
"jsx-a11y/label-has-associated-control": [

View File

@@ -1,7 +1,6 @@
{
"name": "agent-ui",
"version": "0.1.0",
"private": false,
"dependencies": {
"@giantmachines/redux-websocket": "^1.5.1",
"@kerberos-io/ui": "^1.76.0",

View File

@@ -80,19 +80,29 @@
"description_general": "Allgemeine Einstellungen für den Kerberos Agent",
"key": "Schlüssel",
"camera_name": "Kamera Name",
"camera_friendly_name": "Kamera Anzeigename",
"timezone": "Zeitzone",
"select_timezone": "Zeitzone auswählen",
"advanced_configuration": "Erweiterte Konfiguration",
"description_advanced_configuration": "Erweiterte Einstellungen um Funktionen des Kerberos Agent zu aktivieren oder deaktivieren",
"offline_mode": "Offline Modus",
"description_offline_mode": "Ausgehende Verbindungen deaktivieren"
"description_offline_mode": "Ausgehende Verbindungen deaktivieren",
"encryption": "Encryption",
"description_encryption": "Enable encryption for all outgoing traffic. MQTT messages and/or recordings will be encrypted using AES-256. A private key is used for signing.",
"encryption_enabled": "Enable MQTT encryption",
"description_encryption_enabled": "Enable encryption for all MQTT messages.",
"encryption_recordings_enabled": "Enable recording encryption",
"description_encryption_recordings_enabled": "Enable encryption for all recordings.",
"encryption_fingerprint": "Fingerprint",
"encryption_privatekey": "Private key",
"encryption_symmetrickey": "Symmetric key"
},
"camera": {
"camera": "Kamera",
"description_camera": "Diese Einstellungen sind notwendig um eine Verbindung mit der Kamera herzustellen",
"only_h264": "Aktuell werden nur H264 RTSP kompatible Kameras unterstützt",
"only_h264": "Aktuell werden nur H264/H265 RTSP kompatible Kameras unterstützt",
"rtsp_url": "RTSP URL",
"rtsp_h264": "H264 RTSP URL der Kamera",
"rtsp_h264": "H264/H265 RTSP URL der Kamera",
"sub_rtsp_url": "RTSP url für die Live Übertragung.",
"sub_rtsp_h264": "Ergänzende URL der Kamera mit geringerer Auflösung für die Live Übertragung.",
"onvif": "ONVIF",
@@ -136,6 +146,8 @@
"turn_server": "TURN Server",
"turn_username": "Benutzername",
"turn_password": "Passwort",
"force_turn": "Erzwinge TURN",
"force_turn_description": "Erzwinge die Verwendung von TURN",
"stun_turn_forward": "Weiterleiten und transkodieren",
"stun_turn_description_forward": "Optimierungen und Verbesserungen der TURN/STUN Kommunikation.",
"stun_turn_webrtc": "Weiterleiten an WebRTC Schnittstelle",
@@ -176,6 +188,8 @@
"description_persistence": "Die Möglichkeit zur Speicherung der Daten an einem zentralen Ort ist der Beginn einer effektiven Videoüberwachung. Es kann zwischen",
"description2_persistence": ", oder einem Drittanbieter gewählt werden.",
"select_persistence": "Speicherort auswählen",
"kerberoshub_encryption": "Encryption",
"kerberoshub_encryption_description": "All traffic from/to Kerberos Hub will be encrypted using AES-256.",
"kerberoshub_proxyurl": "Kerberos Hub Proxy URL",
"kerberoshub_description_proxyurl": "Der Proxy Endpunkt zum hochladen der Aufnahmen.",
"kerberoshub_apiurl": "Kerberos Hub API URL",


@@ -9,7 +9,7 @@
},
"navigation": {
"profile": "Profile",
"admin": "admin",
"admin": "Admin",
"management": "Management",
"dashboard": "Dashboard",
"recordings": "Recordings",
@@ -23,7 +23,7 @@
},
"dashboard": {
"title": "Dashboard",
"heading": "Overview of your video surveilance",
"heading": "Overview of your video surveillance",
"number_of_days": "Number of days",
"total_recordings": "Total recordings",
"connected": "Connected",
@@ -32,11 +32,11 @@
"latest_events": "Latest events",
"configure_connection": "Configure connection",
"no_events": "No events",
"no_events_description": "No recordings where found, make sure your Kerberos Agent is properly configured.",
"no_events_description": "No recordings were found, make sure your Agent is properly configured.",
"motion_detected": "Motion was detected",
"live_view": "Live view",
"loading_live_view": "Loading live view",
"loading_live_view_description": "Hold on we are loading your live view here. If you didn't configure your camera connection, update it on the settings pages.",
"loading_live_view_description": "Hold on, we are loading your live view here. If you didn't configure your camera connection, update it on the settings pages.",
"time": "Time",
"description": "Description",
"name": "Name"
@@ -59,41 +59,51 @@
"persistence": "Persistence"
},
"info": {
"kerberos_hub_demo": "Have a look at our Kerberos Hub demo environment, to see Kerberos Hub in action!",
"configuration_updated_success": "Your configuration have been updated successfully.",
"kerberos_hub_demo": "Have a look at our Hub demo environment, to see Hub in action!",
"configuration_updated_success": "Your configuration has been updated successfully.",
"configuration_updated_error": "Something went wrong while saving.",
"verify_hub": "Verifying your Kerberos Hub settings.",
"verify_hub_success": "Kerberos Hub settings are successfully verified.",
"verify_hub_error": "Something went wrong while verifying Kerberos Hub",
"verify_hub": "Verifying your Hub settings.",
"verify_hub_success": "Hub settings are successfully verified.",
"verify_hub_error": "Something went wrong while verifying Hub.",
"verify_persistence": "Verifying your persistence settings.",
"verify_persistence_success": "Persistence settings are successfully verified.",
"verify_persistence_error": "Something went wrong while verifying the persistence",
"verify_persistence_error": "Something went wrong while verifying the persistence.",
"verify_camera": "Verifying your camera settings.",
"verify_camera_success": "Camera settings are successfully verified.",
"verify_camera_error": "Something went wrong while verifying the camera settings",
"verify_camera_error": "Something went wrong while verifying the camera settings.",
"verify_onvif": "Verifying your ONVIF settings.",
"verify_onvif_success": "ONVIF settings are successfully verified.",
"verify_onvif_error": "Something went wrong while verifying the ONVIF settings"
"verify_onvif_error": "Something went wrong while verifying the ONVIF settings."
},
"overview": {
"general": "General",
"description_general": "General settings for your Kerberos Agent",
"description_general": "General settings for your Agent",
"key": "Key",
"camera_name": "Camera name",
"camera_friendly_name": "Friendly name",
"timezone": "Timezone",
"select_timezone": "Select a timezone",
"advanced_configuration": "Advanced configuration",
"description_advanced_configuration": "Detailed configuration options to enable or disable specific parts of the Kerberos Agent",
"description_advanced_configuration": "Detailed configuration options to enable or disable specific parts of the Agent",
"offline_mode": "Offline mode",
"description_offline_mode": "Disable all outgoing traffic"
"description_offline_mode": "Disable all outgoing traffic",
"encryption": "Encryption",
"description_encryption": "Enable encryption for all outgoing traffic. MQTT messages and/or recordings will be encrypted using AES-256. A private key is used for signing.",
"encryption_enabled": "Enable MQTT encryption",
"description_encryption_enabled": "Enable encryption for all MQTT messages.",
"encryption_recordings_enabled": "Enable recording encryption",
"description_encryption_recordings_enabled": "Enable encryption for all recordings.",
"encryption_fingerprint": "Fingerprint",
"encryption_privatekey": "Private key",
"encryption_symmetrickey": "Symmetric key"
},
"camera": {
"camera": "Camera",
"description_camera": "Camera settings are required to make a connection to your camera of choice.",
"only_h264": "Currently only H264 RTSP streams are supported.",
"rtsp_url": "RTSP url",
"rtsp_h264": "A H264 RTSP connection to your camera.",
"sub_rtsp_url": "Sub RTSP url (used for livestreaming)",
"only_h264": "Currently only H264/H265 RTSP streams are supported.",
"rtsp_url": "RTSP URL",
"rtsp_h264": "A H264/H265 RTSP connection to your camera.",
"sub_rtsp_url": "Sub RTSP URL (used for livestreaming)",
"sub_rtsp_h264": "A secondary RTSP connection to the low resolution of your camera.",
"onvif": "ONVIF",
"description_onvif": "Credentials to communicate with ONVIF capabilities. These are used for PTZ or other capabilities provided by the camera.",
@@ -105,28 +115,28 @@
},
"recording": {
"recording": "Recording",
"description_recording": "Specify how you would like to make recordings. Having a continuous 24/7 setup or a motion based recording.",
"description_recording": "Specify how you would like to make recordings. Having a continuous 24/7 setup or a motion-based recording.",
"continuous_recording": "Continuous recording",
"description_continuous_recording": "Make 24/7 or motion based recordings.",
"max_duration": "max video duration (seconds)",
"description_continuous_recording": "Make 24/7 or motion-based recordings.",
"max_duration": "Max video duration (seconds)",
"description_max_duration": "The maximum duration of a recording.",
"pre_recording": "pre recording (key frames buffered)",
"pre_recording": "Pre recording (key frames buffered)",
"description_pre_recording": "Seconds before an event occurred.",
"post_recording": "post recording (seconds)",
"post_recording": "Post recording (seconds)",
"description_post_recording": "Seconds after an event occurred.",
"threshold": "Recording threshold (pixels)",
"description_threshold": "The number of pixels changed to record",
"description_threshold": "The number of pixels changed to record.",
"autoclean": "Auto clean",
"description_autoclean": "Specify if the Kerberos Agent can cleanup recordings when a specific storage capacity (MB) is reached. This will remove the oldest recordings when the capacity is reached.",
"description_autoclean": "Specify if the Agent can clean up recordings when a specific storage capacity (MB) is reached. This will remove the oldest recordings when the capacity is reached.",
"autoclean_enable": "Enable auto clean",
"autoclean_description_enable": "Remove oldest recording when capacity reached.",
"autoclean_max_directory_size": "Maximum directory size (MB)",
"autoclean_description_max_directory_size": "The maximum MB's of recordings stored.",
"autoclean_description_max_directory_size": "The maximum MBs of recordings stored.",
"fragmentedrecordings": "Fragmented recordings",
"description_fragmentedrecordings": "When recordings are fragmented they are suitable for an HLS stream. When turned on the MP4 container will look a bit different.",
"description_fragmentedrecordings": "When recordings are fragmented they are suitable for an HLS stream. When turned on, the MP4 container will look a bit different.",
"fragmentedrecordings_enable": "Enable fragmentation",
"fragmentedrecordings_description_enable": "Fragmented recordings are required for HLS.",
"fragmentedrecordings_duration": "fragment duration",
"fragmentedrecordings_duration": "Fragment duration",
"fragmentedrecordings_description_duration": "Duration of a single fragment."
},
"streaming": {
@@ -136,19 +146,26 @@
"turn_server": "TURN server",
"turn_username": "Username",
"turn_password": "Password",
"force_turn": "Force TURN",
"force_turn_description": "Force TURN usage, even when STUN is available.",
"stun_turn_forward": "Forwarding and transcoding",
"stun_turn_description_forward": "Optimisations and enhancements for TURN/STUN communication.",
"stun_turn_description_forward": "Optimizations and enhancements for TURN/STUN communication.",
"stun_turn_webrtc": "Forwarding to WebRTC broker",
"stun_turn_description_webrtc": "Forward h264 stream through MQTT",
"stun_turn_description_webrtc": "Forward H264 stream through MQTT",
"stun_turn_transcode": "Transcode stream",
"stun_turn_description_transcode": "Convert stream to a lower resolution",
"stun_turn_downscale": "Downscale resolution (in % or original resolution)",
"stun_turn_downscale": "Downscale resolution (in % of original resolution)",
"mqtt": "MQTT",
"description_mqtt": "A MQTT broker is used to communicate from",
"description2_mqtt": "to the Kerberos Agent, to achieve for example livestreaming or ONVIF (PTZ) capabilities.",
"mqtt_brokeruri": "Broker Uri",
"description_mqtt": "An MQTT broker is used to communicate from",
"description2_mqtt": "to the Agent, to achieve for example livestreaming or ONVIF (PTZ) capabilities.",
"mqtt_brokeruri": "Broker URI",
"mqtt_username": "Username",
"mqtt_password": "Password"
"mqtt_password": "Password",
"realtimeprocessing": "Realtime Processing",
"description_realtimeprocessing": "By enabling realtime processing, you will receive realtime video keyframes through the MQTT connection specified above.",
"realtimeprocessing_topic": "Topic to publish",
"realtimeprocessing_enabled": "Enable realtime processing",
"description_realtimeprocessing_enabled": "Send realtime video keyframes through MQTT."
},
"conditions": {
"timeofinterest": "Time Of Interest",
@@ -163,53 +180,61 @@
"friday": "Friday",
"saturday": "Saturday",
"externalcondition": "External Condition",
"description_externalcondition": "Depending on an external webservice recording can be enabled or disabled.",
"description_externalcondition": "Depending on an external web service, recording can be enabled or disabled.",
"regionofinterest": "Region Of Interest",
"description_regionofinterest": "By defining one or more regions, motion will be tracked only in the regions you have defined."
},
"persistence": {
"kerberoshub": "Kerberos Hub",
"description_kerberoshub": "Kerberos Agents can send heartbeats to a central",
"description2_kerberoshub": "installation. Heartbeats and other relevant information are synced to Kerberos Hub to show realtime information about your video landscape.",
"kerberoshub": "Hub",
"description_kerberoshub": "Agents can send heartbeats to a central",
"description2_kerberoshub": "installation. Heartbeats and other relevant information are synced to Hub to show realtime information about your video landscape.",
"persistence": "Persistence",
"saasoffering": "Kerberos Hub (SAAS offering)",
"secondary_persistence": "Secondary Persistence",
"description_secondary_persistence": "Recordings will be sent to secondary persistence if the primary persistence is unavailable or fails. This can be useful for failover purposes.",
"saasoffering": "Hub (SaaS offering)",
"description_persistence": "Having the ability to store your recordings is the beginning of everything. You can choose between our",
"description2_persistence": ", or a 3rd party provider",
"select_persistence": "Select a persistence",
"kerberoshub_proxyurl": "Kerberos Hub Proxy URL",
"kerberoshub_encryption": "Encryption",
"kerberoshub_encryption_description": "All traffic from/to Hub will be encrypted using AES-256.",
"kerberoshub_proxyurl": "Hub Proxy URL",
"kerberoshub_description_proxyurl": "The Proxy endpoint for uploading your recordings.",
"kerberoshub_apiurl": "Kerberos Hub API URL",
"kerberoshub_apiurl": "Hub API URL",
"kerberoshub_description_apiurl": "The API endpoint for uploading your recordings.",
"kerberoshub_publickey": "Public key",
"kerberoshub_description_publickey": "The public key granted to your Kerberos Hub account.",
"kerberoshub_description_publickey": "The public key granted to your Hub account.",
"kerberoshub_privatekey": "Private key",
"kerberoshub_description_privatekey": "The private key granted to your Kerberos Hub account.",
"kerberoshub_description_privatekey": "The private key granted to your Hub account.",
"kerberoshub_site": "Site",
"kerberoshub_description_site": "The site ID the Kerberos Agents are belonging to in Kerberos Hub.",
"kerberoshub_description_site": "The site ID the Agents belong to in Hub.",
"kerberoshub_region": "Region",
"kerberoshub_description_region": "The region we are storing our recordings in.",
"kerberoshub_bucket": "Bucket",
"kerberoshub_description_bucket": "The bucket we are storing our recordings in.",
"kerberoshub_username": "Username/Directory (should match Kerberos Hub username)",
"kerberoshub_description_username": "The username of your Kerberos Hub account.",
"kerberosvault_apiurl": "Kerberos Vault API URL",
"kerberosvault_description_apiurl": "The Kerberos Vault API",
"kerberoshub_username": "Username/Directory (should match Hub username)",
"kerberoshub_description_username": "The username of your Hub account.",
"kerberosvault_apiurl": "Vault API URL",
"kerberosvault_description_apiurl": "The Vault API",
"kerberosvault_provider": "Provider",
"kerberosvault_description_provider": "The provider to which your recordings will be send.",
"kerberosvault_directory": "Directory (should match Kerberos Hub username)",
"kerberosvault_description_directory": "Sub directory the recordings will be stored in your provider.",
"kerberosvault_description_provider": "The provider to which your recordings will be sent.",
"kerberosvault_directory": "Directory (should match Hub username)",
"kerberosvault_description_directory": "Subdirectory the recordings will be stored in your provider.",
"kerberosvault_accesskey": "Access key",
"kerberosvault_description_accesskey": "The access key of your Kerberos Vault account.",
"kerberosvault_description_accesskey": "The access key of your Vault account.",
"kerberosvault_secretkey": "Secret key",
"kerberosvault_description_secretkey": "The secret key of your Kerberos Vault account.",
"kerberosvault_description_secretkey": "The secret key of your Vault account.",
"kerberosvault_maxretries": "Max retries",
"kerberosvault_description_maxretries": "The maximum number of retries to upload a recording.",
"kerberosvault_timeout": "Timeout",
"kerberosvault_description_timeout": "If a timeout occurs, recordings will be sent directly to the secondary Vault.",
"dropbox_directory": "Directory",
"dropbox_description_directory": "The sub directory where the recordings will be stored in your Dropbox account.",
"dropbox_description_directory": "The subdirectory where the recordings will be stored in your Dropbox account.",
"dropbox_accesstoken": "Access token",
"dropbox_description_accesstoken": "The access token of your Dropbox account/app.",
"verify_connection": "Verify Connection",
"remove_after_upload": "Once recordings are uploaded to some persistence, you might want to remove them from the local Kerberos Agent.",
"remove_after_upload": "Once recordings are uploaded to some persistence, you might want to remove them from the local Agent.",
"remove_after_upload_description": "Remove recordings after they are uploaded successfully.",
"remove_after_upload_enabled": "Enabled delete on upload"
"remove_after_upload_enabled": "Enable delete on upload"
}
}
}
}
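The hunks above add the same `encryption`, `force_turn`, and `realtimeprocessing` key blocks to each locale file (de, en, fr), and keeping those files in sync by hand is error-prone. A minimal sketch of a key-sync check (a hypothetical helper, not part of this repository) that flattens the nested translation JSON into dotted key paths and reports keys missing from a locale:

```python
def flatten(d, prefix=""):
    """Flatten a nested translation dict into a set of dotted key paths."""
    keys = set()
    for k, v in d.items():
        path = f"{prefix}.{k}" if prefix else k
        if isinstance(v, dict):
            keys |= flatten(v, path)
        else:
            keys.add(path)
    return keys

def missing_keys(reference, candidate):
    """Keys present in the reference locale but absent from the candidate."""
    return sorted(flatten(reference) - flatten(candidate))

# Toy data shaped like the locale files in the diff (illustrative only).
en = {"settings": {"overview": {"encryption": "Encryption",
                                "encryption_fingerprint": "Fingerprint"}}}
de = {"settings": {"overview": {"encryption": "Encryption"}}}

print(missing_keys(en, de))  # ['settings.overview.encryption_fingerprint']
```

In practice the two dicts would come from `json.load` over the en/de/fr files, so a forgotten key in one locale surfaces immediately instead of showing up as a raw key string in the UI.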


@@ -80,19 +80,29 @@
"description_general": "General settings for your Kerberos Agent",
"key": "Key",
"camera_name": "Camera name",
"camera_friendly_name": "Camera friendly name",
"timezone": "Timezone",
"select_timezone": "Select a timezone",
"advanced_configuration": "Advanced configuration",
"description_advanced_configuration": "Detailed configuration options to enable or disable specific parts of the Kerberos Agent",
"offline_mode": "Offline mode",
"description_offline_mode": "Disable all outgoing traffic"
"description_offline_mode": "Disable all outgoing traffic",
"encryption": "Encryption",
"description_encryption": "Enable encryption for all outgoing traffic. MQTT messages and/or recordings will be encrypted using AES-256. A private key is used for signing.",
"encryption_enabled": "Enable MQTT encryption",
"description_encryption_enabled": "Enable encryption for all MQTT messages.",
"encryption_recordings_enabled": "Enable recording encryption",
"description_encryption_recordings_enabled": "Enable encryption for all recordings.",
"encryption_fingerprint": "Fingerprint",
"encryption_privatekey": "Private key",
"encryption_symmetrickey": "Symmetric key"
},
"camera": {
"camera": "Camera",
"description_camera": "Camera settings are required to make a connection to your camera of choice.",
"only_h264": "Currently only H264 RTSP streams are supported.",
"only_h264": "Currently only H264/H265 RTSP streams are supported.",
"rtsp_url": "RTSP url",
"rtsp_h264": "A H264 RTSP connection to your camera.",
"rtsp_h264": "A H264/H265 RTSP connection to your camera.",
"sub_rtsp_url": "Sub RTSP url (used for livestreaming)",
"sub_rtsp_h264": "A secondary RTSP connection to the low resolution of your camera.",
"onvif": "ONVIF",
@@ -136,6 +146,8 @@
"turn_server": "TURN server",
"turn_username": "Username",
"turn_password": "Password",
"force_turn": "Force TURN",
"force_turn_description": "Force TURN usage, even when STUN is available.",
"stun_turn_forward": "Forwarding and transcoding",
"stun_turn_description_forward": "Optimisations and enhancements for TURN/STUN communication.",
"stun_turn_webrtc": "Forwarding to WebRTC broker",
@@ -176,6 +188,8 @@
"description_persistence": "Having the ability to store your recordings is the beginning of everything. You can choose between our",
"description2_persistence": ", or a 3rd party provider",
"select_persistence": "Select a persistence",
"kerberoshub_encryption": "Encryption",
"kerberoshub_encryption_description": "All traffic from/to Kerberos Hub will be encrypted using AES-256.",
"kerberoshub_proxyurl": "Kerberos Hub Proxy URL",
"kerberoshub_description_proxyurl": "The Proxy endpoint for uploading your recordings.",
"kerberoshub_apiurl": "Kerberos Hub API URL",


@@ -79,19 +79,29 @@
"description_general": "Paramètres généraux pour votre Agent Kerberos",
"key": "Clé",
"camera_name": "Nom de la caméra",
"camera_friendly_name": "Nom convivial de la caméra",
"timezone": "Fuseau horaire",
"select_timezone": "Sélectionner un fuseau horaire",
"advanced_configuration": "Configuration avancée",
"description_advanced_configuration": "Les options de configuration détaillées pour activer ou désactiver des composants spécifiques de l'Agent Kerberos",
"offline_mode": "Mode hors-ligne",
"description_offline_mode": "Désactiver tout le trafic sortant"
"description_offline_mode": "Désactiver tout le trafic sortant",
"encryption": "Encryption",
"description_encryption": "Enable encryption for all outgoing traffic. MQTT messages and/or recordings will be encrypted using AES-256. A private key is used for signing.",
"encryption_enabled": "Enable MQTT encryption",
"description_encryption_enabled": "Enable encryption for all MQTT messages.",
"encryption_recordings_enabled": "Enable recording encryption",
"description_encryption_recordings_enabled": "Enable encryption for all recordings.",
"encryption_fingerprint": "Fingerprint",
"encryption_privatekey": "Private key",
"encryption_symmetrickey": "Symmetric key"
},
"camera": {
"camera": "Caméra",
"description_camera": "Les paramètres de la caméra sont requis pour établir une connexion à la caméra de votre choix.",
"only_h264": "Actuellement, seuls les flux RTSP H264 sont pris en charge.",
"only_h264": "Actuellement, seuls les flux RTSP H264/H265 sont pris en charge.",
"rtsp_url": "URL RTSP",
"rtsp_h264": "Une connexion RTSP H264 à votre caméra.",
"rtsp_h264": "Une connexion RTSP H264/H265 à votre caméra.",
"sub_rtsp_url": "URL RTSP secondaire (utilisé pour le direct)",
"sub_rtsp_h264": "Une connexion RTSP secondaire vers le flux basse résolution de votre caméra.",
"onvif": "ONVIF",
@@ -135,6 +145,8 @@
"turn_server": "Serveur TURN",
"turn_username": "Nom d'utilisateur",
"turn_password": "Mot de passe",
"force_turn": "Forcer l'utilisation de TURN",
"force_turn_description": "Forcer l'utilisation de TURN au lieu de STUN",
"stun_turn_forward": "Redirection et transcodage",
"stun_turn_description_forward": "Optimisations et améliorations pour la communication TURN/STUN.",
"stun_turn_webrtc": "Redirection pour l'agent WebRTC",
@@ -175,6 +187,8 @@
"description_persistence": "Avoir la possibilité de stocker vos enregistrements est le commencement de tout. Vous pouvez choisir entre notre",
"description2_persistence": " ou auprès d'un fournisseur tiers",
"select_persistence": "Sélectionner une persistance",
"kerberoshub_encryption": "Encryption",
"kerberoshub_encryption_description": "All traffic from/to Kerberos Hub will be encrypted using AES-256.",
"kerberoshub_proxyurl": "URL du proxy Kerberos Hub",
"kerberoshub_description_proxyurl": "Le point de terminaison du proxy pour téléverser vos enregistrements.",
"kerberoshub_apiurl": "URL de l'API Kerberos Hub",

Some files were not shown because too many files have changed in this diff.