Compare commits

...

225 Commits

Author SHA1 Message Date
Cédric Verstraeten
2c02e0aeb1 Merge pull request #250 from kerberos-io/fix/add-avc-description-fallback
fix/add-avc-description-fallback
2026-02-27 11:48:34 +01:00
cedricve
d5464362bb Add AVC descriptor fallback for SPS parse errors
When setting the AVC descriptor fails in MP4.Close(), attempt a fallback that constructs an AvcC/avc1 sample entry from the available SPS/PPS NALUs.
- Adds the github.com/Eyevinn/mp4ff/avc import and two helpers:
- addAVCDescriptorFallback: builds a visual sample entry, sets tkhd width/height if available, and inserts it into stsd.
- buildAVCDecConfRecFromSPS: creates an avc.DecConfRec from SPS/PPS bytes by extracting profile/compat/level and filling defaults.
- Logs errors and warns when the fallback is used.
This provides resilience against SPS parsing errors when writing the MP4 track descriptor.
2026-02-27 11:35:22 +01:00
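The fallback above hinges on one detail of the H.264 bitstream: the three bytes following the SPS NAL header carry profile_idc, the constraint flags, and level_idc, which is exactly what an AvcC box needs. A minimal stand-alone sketch of that extraction (the helper name mirrors the commit, but this is not the agent's actual code and skips the mp4ff plumbing):

```go
package main

import "fmt"

// profileCompatLevel extracts the AvcC header fields from a raw SPS NALU.
// sps[0] is the NAL header (type 7); the AVC spec places profile_idc,
// the constraint/compatibility flags and level_idc directly after it.
func profileCompatLevel(sps []byte) (profile, compat, level byte, err error) {
	if len(sps) < 4 {
		return 0, 0, 0, fmt.Errorf("SPS too short: %d bytes", len(sps))
	}
	return sps[1], sps[2], sps[3], nil
}

func main() {
	sps := []byte{0x67, 0x64, 0x00, 0x1F, 0xAC} // truncated High-profile SPS
	p, c, l, _ := profileCompatLevel(sps)
	fmt.Printf("profile=%d compat=%d level=%d\n", p, c, l)
}
```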
Cédric Verstraeten
5bcefd0015 Merge pull request #249 from kerberos-io/feature/enhance-avc-hevc-ssp-nalus
feature/enhance-avc-hevc-ssp-nalus
2026-02-27 11:12:03 +01:00
cedricve
5bb9def42d Normalize and debug H264/H265 parameter sets
Replace direct sanitizeParameterSets usage with normalizeH264ParameterSets and normalizeH265ParameterSets in mp4.Close.
- The new functions split Annex-B blobs, strip start codes, detect NALU types (SPS/PPS for AVC; VPS/SPS/PPS for HEVC), aggregate distinct parameter sets, and fall back to sanitizeParameterSets if none are found.
- Added splitParamSetNALUs and formatNaluDebug helpers, plus debug logging that outputs concise parameter-set summaries before setting the AVC/HEVC descriptors.
These changes improve handling of concatenated Annex-B parameter set blobs and make debugging parameter extraction easier.
2026-02-27 11:09:28 +01:00
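The split-and-classify step above can be sketched in a few lines: break a concatenated Annex-B blob on 3- and 4-byte start codes, then bucket H.264 NALUs by the low five bits of the NAL header (7 = SPS, 8 = PPS). The function names here are illustrative, not the actual helpers from the commit:

```go
package main

import (
	"bytes"
	"fmt"
)

// splitAnnexB splits a blob on Annex-B start codes, dropping empty parts.
func splitAnnexB(blob []byte) [][]byte {
	// Normalize 4-byte start codes to 3-byte ones, then split on 00 00 01.
	blob = bytes.ReplaceAll(blob, []byte{0, 0, 0, 1}, []byte{0, 0, 1})
	var nalus [][]byte
	for _, part := range bytes.Split(blob, []byte{0, 0, 1}) {
		if len(part) > 0 {
			nalus = append(nalus, part)
		}
	}
	return nalus
}

// classifyH264 buckets NALUs into SPS (type 7) and PPS (type 8).
func classifyH264(nalus [][]byte) (sps, pps [][]byte) {
	for _, n := range nalus {
		switch n[0] & 0x1F { // low 5 bits of the H.264 NAL header
		case 7:
			sps = append(sps, n)
		case 8:
			pps = append(pps, n)
		}
	}
	return
}

func main() {
	blob := []byte{0, 0, 0, 1, 0x67, 0x64, 0x00, 0x1F, 0, 0, 1, 0x68, 0xEE}
	sps, pps := classifyH264(splitAnnexB(blob))
	fmt.Println(len(sps), len(pps)) // one SPS, one PPS
}
```

HEVC works the same way except the NAL type lives in bits 1–6 of the first byte (`(n[0] >> 1) & 0x3F`) with VPS = 32, SPS = 33, PPS = 34.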
Cédric Verstraeten
ff38ccbadf Merge pull request #248 from kerberos-io/fix/sanitize-parameter-sets
fix/sanitize-parameter-sets
2026-02-26 20:43:53 +01:00
cedricve
f64e899de9 Populate/sanitize NALUs and avoid empty MP4
Fill missing SPS/PPS/VPS from camera config before closing recordings and warn when parameter sets are incomplete (for both continuous and motion-detection flows). Sanitize parameter sets (remove Annex-B start codes and drop empty NALUs) before writing AVC/HEVC descriptors. Prevent creation of empty MP4 files by flushing/closing and removing files when no audio/video samples were added, and only add an audio track when audio samples exist.
2026-02-26 20:37:10 +01:00
Cédric Verstraeten
b8a81d18af Merge pull request #247 from kerberos-io/fix/ensure-stsd
fix/ensure-stsd
2026-02-26 17:13:45 +01:00
cedricve
8c2e3e4cdd Recover video parameter sets from Annex B NALUs
Add updateVideoParameterSetsFromAnnexB to parse Annex B NALUs and populate missing SPS/PPS/VPS for H.264/H.265 streams. Call this helper when adding video samples so in-band parameter sets can be recovered early. Also add error logging in Close() when setting AVC/HEVC descriptors fails. These changes improve robustness for streams that carry SPS/PPS/VPS inline.
2026-02-26 17:05:09 +01:00
Cédric Verstraeten
11c4ee518d Merge pull request #246 from kerberos-io/fix/handle-sps-pps-unknown-state
fix/handle-sps-pps-unknown-state
2026-02-26 16:24:54 +01:00
cedricve
51b9d76973 Improve SPS/PPS handling: add warnings for missing SPS/PPS during recording start 2026-02-26 15:24:34 +00:00
cedricve
f3c1cb9b82 Enhance SPS/PPS handling for main stream in gortsplib: add fallback for missing SDP 2026-02-26 15:21:54 +00:00
Cédric Verstraeten
a1368361e4 Merge pull request #242 from kerberos-io/fix/update-workflows-for-nightly-build
fix/update-workflows-for-nightly-build
2026-02-16 12:44:40 +01:00
Cédric Verstraeten
abfdea0179 Update issue-userstory-create.yml 2026-02-16 12:37:49 +01:00
Cédric Verstraeten
8aaeb62fa3 Merge pull request #241 from kerberos-io/fix/update-workflows-for-nightly-build
fix/update-workflows-for-nightly-build
2026-02-16 12:21:06 +01:00
Cédric Verstraeten
e30dd7d4a0 Add nightly build workflow for Docker images 2026-02-16 12:16:39 +01:00
Cédric Verstraeten
ac3f9aa4e8 Merge pull request #240 from kerberos-io/feature/add-issue-generator-workflow
feature/add-issue-generator-workflow
2026-02-16 11:58:06 +01:00
Cédric Verstraeten
04c568f488 Add workflow to create user story issues with customizable inputs 2026-02-16 11:54:07 +01:00
Cédric Verstraeten
e270223968 Merge pull request #238 from kerberos-io/fix/docker-build-release-action
fix/docker-build-release-action
2026-02-13 22:17:33 +01:00
cedricve
01ab1a9218 Disable build provenance in Docker builds
Add --provenance=false to docker build invocations in .github/workflows/release-create.yml (both default and arm64 steps) to suppress Docker provenance metadata during CI builds.
2026-02-13 22:16:23 +01:00
Cédric Verstraeten
6f0794b09c Merge pull request #237 from kerberos-io/feature/fix-quicktime-duration
feature/fix-quicktime-duration
2026-02-13 21:55:41 +01:00
cedricve
1ae6a46d88 Embed build version into binaries
Pass VERSION from CI into Docker builds and embed it into the Go binary via ldflags.
- Updated the .github workflow to supply --build-arg VERSION for both architectures.
- Added ARG VERSION and logic in Dockerfile and Dockerfile.arm64 to derive the version from git (git describe --tags) or fall back to the provided build-arg, then set it with -X during go build.
- Changed VERSION in machinery/src/utils/main.go from a const to a var defaulting to "0.0.0" and documented that it is overridden at build time.
This ensures released images contain the correct agent version while local/dev builds keep a sensible default.
2026-02-13 21:50:09 +01:00
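The const-to-var change matters because the linker's -X flag can only rewrite a package-level string variable, never a const. A minimal stand-alone sketch (the agent keeps the variable in machinery/src/utils/main.go, so the real -X path differs from `main.VERSION` used here):

```go
package main

import "fmt"

// VERSION defaults to "0.0.0" for local/dev builds. Release builds
// override it at link time, e.g.:
//   go build -ldflags "-X main.VERSION=$(git describe --tags)"
var VERSION = "0.0.0"

func main() {
	fmt.Println("agent version:", VERSION)
}
```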
cedricve
9d83cab5cc Set mdhd.Duration to 0 for fragmented MP4
Uncomment and explicitly set mdhd.Duration = 0 in machinery/src/video/mp4.go for relevant tracks (video H264/H265 and audio track). This ensures mdhd.Duration is zero for fragmented MP4 so players derive duration from fragments (avoiding QuickTime adding fragment durations and doubling the reported duration).
2026-02-13 21:46:32 +01:00
cedricve
6f559c2f00 Align MP4 headers to fragment durations
Compute actual video duration from SegmentDurations and ensure container headers reflect fragment durations. Set mvhd.Duration and mvex/mehd.FragmentDuration to the maximum of video (sum of segments) and audio durations so the overall mvhd matches the longest track. Use the summed segment duration for track tkhd.Duration and keep mdhd.Duration at 0 for fragmented MP4s (to avoid double-counting). Add a warning log when accumulated video duration differs from the recorded VideoTotalDuration. Harden fingerprint generation and private key handling with nil checks.

Add mp4_duration_test.go: unit test that creates a simulated H.264 fragmented MP4 (150 frames at 40ms), closes it, parses the output and verifies that mvhd/mehd and trun sample durations are consistent and that mdhd.Duration is zero.
2026-02-13 21:35:57 +01:00
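The duration alignment described above reduces to a small calculation: mvhd/mehd take the longest track, the video tkhd takes the summed segment durations, and mdhd stays 0 so players derive duration from the fragments. A hedged sketch with assumed names (the real code works on mp4ff box structs, not plain integers):

```go
package main

import "fmt"

// containerDurations computes the header durations for a fragmented MP4:
// mvhd/mehd = max(summed video segments, audio), video tkhd = summed
// segments, mdhd = 0 (players sum fragment durations themselves).
func containerDurations(segmentDurations []uint64, audioDur uint64) (mvhd, videoTkhd, mdhd uint64) {
	var videoDur uint64
	for _, d := range segmentDurations {
		videoDur += d
	}
	mvhd = videoDur
	if audioDur > mvhd {
		mvhd = audioDur
	}
	return mvhd, videoDur, 0
}

func main() {
	mvhd, tkhd, mdhd := containerDurations([]uint64{2000, 2000, 1000}, 4800)
	fmt.Println(mvhd, tkhd, mdhd) // longest track wins; mdhd stays zero
}
```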
cedricve
c147944f5a Convert MP4 timestamps to Mac HFS epoch
Add MacEpochOffset constant and convert mp4.StartTime to Mac HFS time for QuickTime compatibility. Compute macTime = mp4.StartTime + MacEpochOffset and use it for mvhd CreationTime/ModificationTime, as well as track tkhd and mdhd creation/modification timestamps for video and audio tracks. Also set mvhd Rate, Volume and NextTrackID. These changes ensure generated MP4s use QuickTime-compatible epoch and include proper mvhd metadata.
2026-02-13 21:01:45 +01:00
Cédric Verstraeten
e8ca776e4e Merge pull request #236 from kerberos-io/fix/debugging-lost-keyframes
fix/debugging-lost-keyframes
2026-02-11 16:51:07 +01:00
Cédric Verstraeten
de5c4b6e0a Merge branch 'master' into fix/debugging-lost-keyframes 2026-02-11 16:48:08 +01:00
Cédric Verstraeten
9ba64de090 add additional logging 2026-02-11 16:48:01 +01:00
Cédric Verstraeten
7ceeebe76e Merge pull request #235 from kerberos-io/fix/debugging-lost-keyframes
fix/debugging-lost-keyframes
2026-02-11 16:15:57 +01:00
Cédric Verstraeten
bd7dbcfcf2 Enhance FPS tracking and logging for keyframes in gortsplib and mp4 modules 2026-02-11 15:11:52 +00:00
Cédric Verstraeten
8c7a46e3ae Merge pull request #234 from kerberos-io/fix/fps-gop-size
fix/fps-gop-size
2026-02-11 15:05:31 +01:00
Cédric Verstraeten
57ccfaabf5 Merge branch 'fix/fps-gop-size' of github.com:kerberos-io/agent into fix/fps-gop-size 2026-02-11 14:59:34 +01:00
Cédric Verstraeten
4a9cb51e95 Update machinery/src/capture/gortsplib.go
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-02-11 14:59:15 +01:00
Cédric Verstraeten
ab6f621e76 Merge branch 'fix/fps-gop-size' of github.com:kerberos-io/agent into fix/fps-gop-size 2026-02-11 14:58:44 +01:00
Cédric Verstraeten
c365ae5af2 Ensure thread-safe closure of peer connections in InitializeWebRTCConnection 2026-02-11 13:58:29 +00:00
Cédric Verstraeten
b05c3d1baa Update machinery/src/capture/gortsplib.go
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-02-11 14:52:40 +01:00
Cédric Verstraeten
c7c7203fad Merge branch 'master' into fix/fps-gop-size 2026-02-11 14:48:05 +01:00
Cédric Verstraeten
d93f85b4f3 Refactor FPS calculation to use per-stream trackers for improved accuracy 2026-02-11 13:45:07 +00:00
Cédric Verstraeten
031212b98c Merge pull request #232 from kerberos-io/fix/fps-gop-size
fix/fps-gop-size
2026-02-11 14:27:18 +01:00
Cédric Verstraeten
a4837b3cb3 Implement PTS-based FPS calculation and GOP size adjustments 2026-02-11 13:14:29 +00:00
Cédric Verstraeten
77629ac9b8 Merge pull request #231 from kerberos-io/feature/improve-keyframe-interval
feature/improve-keyframe-interval
2026-02-11 12:28:33 +01:00
cedricve
59608394af Use Warning instead of Warn in mp4.go
Replace call to log.Log.Warn with log.Log.Warning in MP4.flushPendingVideoSample to match the logger API. This is a non-functional change that preserves the original message and behavior while using the correct logging method name.
2026-02-11 12:26:18 +01:00
cedricve
9dfcaa466f Refactor video sample flushing logic into a dedicated function 2026-02-11 11:48:15 +01:00
cedricve
88442e4525 Add pending video sample to segment before flush
Before flushing a segment when mp4.Start is true, add any pending VideoFullSample for the current video track to the current fragment. The change computes and updates LastVideoSampleDTS and VideoTotalDuration, adjusts the sample DecodeTime and Dur, calls AddFullSampleToTrack, logs errors, and clears VideoFullSample so the pending sample is included in the segment before starting a new one. This ensures segments contain all frames up to (but not including) the keyframe that triggered the flush.
2026-02-11 11:38:51 +01:00
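The pending-sample mechanics can be modeled in miniature: a sample's duration is only known once the next sample's DTS arrives, so the last buffered sample gets its Dur from the DTS delta and is appended before the segment is flushed. Types and field names below are assumptions for illustration, not the agent's actual structs:

```go
package main

import "fmt"

type sample struct {
	DecodeTime uint64
	Dur        uint32
}

// flushPending finalizes the buffered sample using the DTS of the sample
// that triggered the flush, appends it to the track, and returns the new
// last-DTS value.
func flushPending(pending *sample, lastDTS, newDTS uint64, track *[]sample) uint64 {
	if pending == nil {
		return lastDTS
	}
	pending.DecodeTime = lastDTS
	pending.Dur = uint32(newDTS - lastDTS)
	*track = append(*track, *pending)
	return newDTS
}

func main() {
	var track []sample
	last := flushPending(&sample{}, 4000, 4040, &track)
	fmt.Println(last, track[0].Dur) // 40ms sample lands in the old segment
}
```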
Cédric Verstraeten
891ae2e5d5 Merge pull request #230 from kerberos-io/feature/improve-video-format
feature/improve-video-format
2026-02-10 17:25:23 +01:00
Cédric Verstraeten
32b471f570 Update machinery/src/video/mp4.go
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-02-10 17:20:40 +01:00
Cédric Verstraeten
5d745fc989 Update machinery/src/video/mp4.go
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-02-10 17:20:29 +01:00
Cédric Verstraeten
edfa6ec4c6 Update machinery/src/video/mp4.go
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-02-10 17:20:16 +01:00
Cédric Verstraeten
0c460efea6 Refactor PR description workflow to include organization variable and correct pull request URL format 2026-02-10 16:17:10 +00:00
Cédric Verstraeten
96df049e59 Enhance MP4 initialization by adding max recording duration parameter, improving placeholder size calculation for segments. 2026-02-10 15:59:59 +00:00
Cédric Verstraeten
2cb454e618 Merge branch 'master' into feature/improve-video-format 2026-02-10 16:57:47 +01:00
Cédric Verstraeten
7f2ebb655e Fix sidx.FirstOffset calculation and re-encode init segment for accurate MP4 structure 2026-02-10 15:56:10 +00:00
Cédric Verstraeten
63857fb5cc Merge pull request #229 from kerberos-io/feature/improve-video-format
feature/improve-video-format
2026-02-10 16:53:34 +01:00
Cédric Verstraeten
f4c75f9aa9 Add environment variables for PR number and project name in workflow 2026-02-10 15:31:37 +00:00
Cédric Verstraeten
c3936dc884 Enhance MP4 segment handling by adding segment durations and base decode times, improving fragment management and data integrity 2026-02-10 14:47:47 +00:00
Cédric Verstraeten
2868ddc499 Add fragment duration handling and improve MP4 segment management 2026-02-10 13:52:58 +00:00
Cédric Verstraeten
176610a694 Update mp4.go 2026-02-10 13:39:55 +01:00
Cédric Verstraeten
f60aff4fd6 Enhance MP4 closing process by adding final video and audio samples, ensuring data integrity and updating track metadata 2026-02-10 12:45:46 +01:00
Cédric Verstraeten
847f62303a Merge pull request #228 from kerberos-io/feature/improve-webrtc-tracing
feature/improve-webrtc-tracing
2026-01-23 15:22:45 +01:00
Cédric Verstraeten
f174e2697e Enhance WebRTC handling with connection management and error logging improvements 2026-01-23 14:16:55 +00:00
Cédric Verstraeten
acac2d5d42 Refactor main function to improve code structure and readability 2026-01-23 13:48:24 +00:00
Cédric Verstraeten
f304c2ed3e Merge pull request #219 from kerberos-io/fix/release-process
fix/release-process
2025-09-17 16:32:58 +02:00
cedricve
2003a38cdc Add release creation workflow with multi-arch Docker builds and artifact handling 2025-09-17 14:32:06 +00:00
Cédric Verstraeten
a67c5a1f39 Merge pull request #216 from kerberos-io/feature/upgrade-build-process-avoid-base
feature/upgrade-build-process-avoid-base
2025-09-11 16:22:53 +02:00
Cédric Verstraeten
b7a87f95e5 Update Docker workflow to use Ubuntu 24.04 and simplify build steps for multi-arch images 2025-09-11 15:00:37 +02:00
Cédric Verstraeten
0aa0b8ad8f Refactor build steps in PR workflow to streamline Docker operations and improve artifact handling 2025-09-11 14:09:22 +02:00
Cédric Verstraeten
2bff868de6 Update upload artifact action to v4 in PR build workflow 2025-09-11 13:45:34 +02:00
Cédric Verstraeten
8b59828126 Add steps to strip binary and upload artifact in PR build workflow 2025-09-11 13:39:27 +02:00
Cédric Verstraeten
f55e25db07 Remove Golang build steps from Dockerfiles for amd64 and arm64 2025-09-11 10:29:05 +02:00
Cédric Verstraeten
243c969666 Add missing go version check in Dockerfile build step 2025-09-11 10:26:54 +02:00
Cédric Verstraeten
ec7f2e0303 Update ARM64 build step to specify Dockerfile for architecture 2025-09-11 10:18:19 +02:00
Cédric Verstraeten
a4a032d994 Update GitHub Actions workflow and Dockerfiles for architecture support and dependency management 2025-09-11 10:17:51 +02:00
Cédric Verstraeten
0a84744e49 Remove arm-v6 architecture from build matrix in PR workflow 2025-09-09 14:38:51 +00:00
Cédric Verstraeten
1425430376 Update .gitignore to include __debug* and change Dockerfile base image to golang:1.24.5-bullseye 2025-09-09 14:36:32 +00:00
Cédric Verstraeten
ca8d88ffce Update GitHub Actions workflow to support multiple architectures in build matrix 2025-09-09 14:34:39 +00:00
Cédric Verstraeten
af3f8bb639 Add GitHub Actions workflow for pull request builds and update Dockerfile dependencies 2025-09-09 16:28:19 +02:00
Cédric Verstraeten
1f9772d472 Merge pull request #212 from kerberos-io/fix/ovrride-base-width
fix/ovrride-base-width
2025-08-12 07:05:43 +02:00
cedricve
94cf361b55 Reset baseWidth and baseHeight in StoreConfig function 2025-08-12 04:47:50 +00:00
cedricve
6acdf258e7 Fix typo in environment variable override function name 2025-08-11 21:10:33 +00:00
cedricve
cc0a810ab3 Handle both baseWidth and baseHeight in IPCamera config
Adds logic to set IPCamera BaseWidth and BaseHeight when both values are provided, instead of only calculating aspect ratio. Also fixes a typo in the function call to override configuration with environment variables.
2025-08-11 23:06:24 +02:00
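The base-dimension logic above amounts to a small resolution rule: use both values when configured, otherwise derive the missing height from the stream's aspect ratio. A sketch with assumed names (the real config lives on the IPCamera struct):

```go
package main

import "fmt"

// resolveBase picks the target dimensions: explicit baseWidth/baseHeight
// win; a lone baseWidth derives its height from the stream aspect ratio;
// otherwise the stream dimensions pass through unchanged.
func resolveBase(baseWidth, baseHeight, streamW, streamH int) (w, h int) {
	if baseWidth > 0 && baseHeight > 0 {
		return baseWidth, baseHeight
	}
	if baseWidth > 0 && streamW > 0 {
		return baseWidth, baseWidth * streamH / streamW
	}
	return streamW, streamH
}

func main() {
	fmt.Println(resolveBase(640, 0, 1920, 1080)) // height derived: 640x360
}
```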
Cédric Verstraeten
c19bfbe552 Merge pull request #211 from kerberos-io/feature/minimize-sd-view-image
feature/minimize-sd-view-image
2025-08-11 12:30:01 +02:00
Cédric Verstraeten
39aaf5ad6c Merge branch 'feature/minimize-sd-view-image' of github.com:kerberos-io/agent into feature/minimize-sd-view-image 2025-08-11 10:25:31 +00:00
Cédric Verstraeten
6fba2ff05d Refactor logging in gortsplib and mp4 modules to use Debug and Error levels; update free box size in MP4 initialization 2025-08-11 10:20:37 +00:00
Cédric Verstraeten
d78e682759 Update config.json 2025-08-11 11:39:45 +02:00
Cédric Verstraeten
ed582a9d57 Resize polygon coordinates based on IPCamera BaseWidth and BaseHeight configuration 2025-08-11 09:38:24 +00:00
Cédric Verstraeten
aa925d5c9b Add BaseWidth and BaseHeight configuration options for IPCamera; update resizing logic in RunAgent and websocket handlers 2025-08-11 09:23:11 +00:00
Cédric Verstraeten
08d191e542 Update image resizing to support dynamic height; modify related functions and configurations 2025-08-11 08:08:39 +00:00
Cédric Verstraeten
cc075d7237 Refactor IPCamera configuration to include BaseWidth and BaseHeight; update image resizing logic to use dynamic width based on configuration 2025-08-06 14:42:23 +00:00
Cédric Verstraeten
1974bddfbe Merge pull request #210 from kerberos-io/feature/minimize-sd-view-image
feature/minimize-sd-view-image
2025-07-30 15:42:06 +02:00
Cédric Verstraeten
12cb88e1c1 Replace fmt.Println with log.Log.Debug for buffer size in ImageToBytes function 2025-07-30 13:34:14 +00:00
Cédric Verstraeten
c054526998 Add image resizing functionality and update dependencies
- Introduced ResizeImage function to resize images before encoding.
- Updated ImageToBytes function to accept pointer to image.
- Added nfnt/resize library for image resizing.
- Updated go.mod and go.sum to include new dependencies.
- Updated image processing in HandleLiveStreamSD, GetSnapshotRaw, and other functions to use resized images.
- Updated yarn.lock for ui package version change.
2025-07-30 12:06:12 +00:00
Cédric Verstraeten
ffa97598b8 Merge pull request #208 from kerberos-io/feature/increase-chunk-size
feature/increase-chunk-size
2025-07-14 10:07:43 +02:00
cedricve
f5afbf3a63 Add sleep intervals in HandleLiveStreamSD to prevent MQTT flooding 2025-07-14 08:01:35 +00:00
cedricve
e666695c96 Disable live view chunking in configuration and adjust HandleLiveStreamSD function accordingly 2025-07-14 07:59:04 +00:00
Cédric Verstraeten
55816e4b7b Merge pull request #207 from kerberos-io/feature/increase-chunk-size
feature/increase-chunk-size
2025-07-13 22:34:20 +02:00
cedricve
016fb51951 Increase chunk size for live stream handling from 2KB to 25KB 2025-07-13 20:28:32 +00:00
Cédric Verstraeten
550a444650 Merge pull request #206 from kerberos-io/feature/configurable-chunking
feature/configurable-chunking
2025-07-13 22:15:55 +02:00
Cédric Verstraeten
4332e43f27 Update machinery/src/cloud/Cloud.go
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-07-13 22:11:49 +02:00
cedricve
fdc3bfb4a4 Add live view chunking configuration to capture settings 2025-07-13 19:47:07 +00:00
cedricve
c17d6b7117 Implement live view chunking configuration for HandleLiveStreamSD function 2025-07-13 19:34:00 +00:00
cedricve
5d7a8103c0 Add Liveview chunking configuration and update WebRTC SDP handling 2025-07-13 19:33:13 +00:00
Cédric Verstraeten
5d7cb98b8f Merge pull request #205 from kerberos-io/feature/upgrade-version
Update main.go
2025-07-13 20:48:58 +02:00
Cédric Verstraeten
f6046c6a6c Update main.go 2025-07-13 20:48:45 +02:00
Cédric Verstraeten
f59f9d71a9 Merge pull request #204 from kerberos-io/feature/jpeg-resolution-chunking
feature/jpeg-resolution-chunking
2025-07-13 20:46:03 +02:00
cedricve
ff72f9647d Update chunk size definition in HandleLiveStreamSD for clarity 2025-07-13 18:21:22 +00:00
cedricve
fa604b16cf Enhance MQTT message structure and logging: add version field to Payload and improve chunked image handling in HandleLiveStreamSD 2025-07-13 16:35:06 +00:00
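The chunked image handling in the commits above (and the 2KB-to-25KB chunk-size bump in #207) boils down to splitting the encoded frame into fixed-size slices and publishing each with an index. A hedged sketch, with the MQTT publishing itself left out:

```go
package main

import "fmt"

// chunk splits data into slices of at most size bytes; the last chunk
// carries the remainder. Each slice would be published as one MQTT
// message together with its index and the total count.
func chunk(data []byte, size int) [][]byte {
	var chunks [][]byte
	for len(data) > 0 {
		n := size
		if n > len(data) {
			n = len(data)
		}
		chunks = append(chunks, data[:n])
		data = data[n:]
	}
	return chunks
}

func main() {
	payload := make([]byte, 60_000) // e.g. one encoded JPEG frame
	fmt.Println(len(chunk(payload, 25_000)))
}
```

Larger chunks mean fewer messages per frame, which is the rationale for the size increase; the sleep intervals added in #208 throttle the remaining messages to avoid flooding the broker.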
Cédric Verstraeten
0342869733 Merge pull request #200 from kerberos-io/fix/continue-on-wrong-start-time
fix/continue-on-wrong-start-time
2025-07-05 20:34:31 +02:00
cedricve
8685ce31a2 Add logging for zero startRecording state in HandleRecordStream 2025-07-05 18:31:35 +00:00
Cédric Verstraeten
0e259f0e7a Merge pull request #199 from kerberos-io/feature/new-method-to-calc-pre-recording-start-time
Feature/new method to calc pre recording start time
2025-07-05 17:08:38 +02:00
cedricve
5823abed95 Remove unused DTS extraction code and video stream handling in HandleRecordStream 2025-07-05 15:05:22 +00:00
cedricve
86acff58f0 Refactor HandleRecordStream to improve recording timestamp management and ensure accurate handling of startRecording and motion detection logic 2025-07-05 14:56:24 +00:00
cedricve
d3fc5d4c29 Enhance max recording period calculation in HandleRecordStream to ensure it accommodates preRecording and postRecording values correctly 2025-07-05 14:39:48 +00:00
cedricve
50bb40938c Adjust max recording period checks in HandleRecordStream for improved timing accuracy 2025-07-05 14:32:05 +00:00
cedricve
1977d98ad9 Add CurrentTime field to Packet struct and update HandleRecordStream to use it 2025-07-05 14:24:52 +00:00
Cédric Verstraeten
448d4a946d Merge pull request #198 from kerberos-io/feature/fix-prerecording-duraiton
feature/fix-prerecording-duration
2025-07-04 16:57:01 +02:00
Cédric Verstraeten
61ac314bb7 Fix pre-recording time calculation logic in HandleRecordStream to handle initial recording case correctly 2025-07-04 14:44:13 +00:00
Cédric Verstraeten
c1b144ca28 Fix pre-recording time calculation by adjusting queued packets handling in HandleRecordStream 2025-07-04 14:37:22 +00:00
Cédric Verstraeten
e16987bf9d Refactor HandleRecordStream to improve pre-recording time calculation and adjust display time logic based on available queued packets. 2025-07-04 11:18:46 +00:00
Cédric Verstraeten
9991597984 Merge pull request #197 from kerberos-io/feature/add-duration-to-recordings
feature/add-duration-to-recordings
2025-07-04 09:18:07 +02:00
cedricve
2c0314cea4 Refactor HandleRecordStream to improve file renaming logic and enhance motion detection handling 2025-07-04 06:23:09 +00:00
cedricve
0584e52b98 Refactor HandleRecordStream to optimize pre-recording time calculation and streamline video stream handling 2025-07-03 20:34:18 +00:00
cedricve
1fc90eaee2 Refactor pre-recording time calculation and improve display time logic for better recording accuracy 2025-07-03 20:04:00 +00:00
cedricve
aef3eacbc9 Enhance pre-recording time calculation by incorporating GOP size and FPS; adjust display time and recording conditions based on pre-recording delta. 2025-07-03 17:51:46 +00:00
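The GOP/FPS-based calculation above follows from one constraint: a recording must start on a keyframe, so the requested pre-recording window gets converted to a packet count and rounded up to a whole number of GOPs. All names in this sketch are illustrative, not the agent's actual API:

```go
package main

import "fmt"

// preRecordPackets converts a pre-recording window (seconds) into a queue
// depth, rounded up to a GOP boundary so playback can begin on a keyframe.
func preRecordPackets(preRecordingSec, fps, gopSize int) int {
	packets := preRecordingSec * fps
	if gopSize > 0 && packets%gopSize != 0 {
		packets += gopSize - packets%gopSize // reach back to the prior keyframe
	}
	return packets
}

func main() {
	fmt.Println(preRecordPackets(5, 25, 30)) // 125 packets -> rounded to 150
}
```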
cedricve
2843568473 Refactor GOP size handling and enhance queue management for improved recording performance 2025-07-03 17:31:37 +00:00
Cédric Verstraeten
53ffc8cae0 Add GOP size configuration and enhance pre-recording handling for improved stream management 2025-07-02 13:28:02 +00:00
Cédric Verstraeten
86e654fe19 Add GOP size tracking and keyframe interval management for improved video processing 2025-07-02 10:51:23 +00:00
Cédric Verstraeten
46d57f7664 Enhance FPS calculation by adding timestamp-based averaging and improved SPS handling; implement debug logging for SPS information. 2025-07-02 09:53:47 +00:00
Cédric Verstraeten
963d8672eb Enhance recording process by adding display time calculation and logging for better tracking; add error handling for MP4 file creation when no samples are present. 2025-07-02 08:54:34 +00:00
Cédric Verstraeten
9b7a62816a Update mp4.go 2025-07-02 09:54:12 +02:00
Cédric Verstraeten
237134fe0e Update recording filename generation to include duration and motion rectangle for improved clarity 2025-07-01 15:03:01 +00:00
Cédric Verstraeten
c8730e8f26 Enhance recording filename generation to include motion rectangle and duration for improved clarity and uniqueness 2025-07-01 12:54:52 +00:00
Cédric Verstraeten
acbbe8b444 Enhance recording filename generation to include milliseconds and its length for improved uniqueness 2025-07-01 12:48:34 +00:00
Cédric Verstraeten
f690016aa5 Refactor motion detection to include motion rectangle and update logging levels for sample addition in MP4 track 2025-07-01 12:37:44 +00:00
Cédric Verstraeten
396cfe5d8b Merge pull request #191 from kerberos-io/feature/migrate-to--mp4ff
feature/Add MP4 video handling and update IPCamera configuration
2025-06-24 13:39:56 +02:00
Cédric Verstraeten
39fe640ccf Refactor logging in AddSampleToTrack method to use structured logging 2025-06-23 10:21:02 +00:00
Cédric Verstraeten
d389c9b0b6 Add logging for sample addition in MP4 track 2025-06-23 10:07:30 +00:00
Cédric Verstraeten
b149686db8 Remove Bento4 build steps and clean up Dockerfile structure 2025-06-23 09:57:04 +00:00
Cédric Verstraeten
c4358cbfad Fix typo in IPCamera struct: update VPSNALUs field JSON tag from "pps_nalus" to "vps_nalus" 2025-06-23 09:03:00 +00:00
Cédric Verstraeten
cfc5bd3dfe Remove unused audio stream retrieval in HandleRecordStream function 2025-06-23 07:58:39 +00:00
Cédric Verstraeten
c29c1b6a92 Merge branch 'master' into feature/migrate-to--mp4ff 2025-06-23 09:55:31 +02:00
Cédric Verstraeten
0f45a2a4b4 Merge branch 'feature/migrate-to--mp4ff' of github.com:kerberos-io/agent into feature/migrate-to--mp4ff 2025-06-23 09:54:41 +02:00
Cédric Verstraeten
92edcc13c0 Refactor OpenTelemetry tracing integration in RTSP client and components for improved context handling 2025-06-23 07:54:34 +00:00
cedricve
5392e2ba90 Update Dockerfile to remove incorrect source path and add Bento4 build process 2025-06-22 19:46:03 +00:00
cedricve
79e1f659c7 Update mongo-driver dependency from v1.17.4 to v1.17.3 to maintain compatibility 2025-06-21 20:13:38 +00:00
cedricve
bf35e5efb6 Implement OpenTelemetry tracing in the agent
- Added OpenTelemetry tracing support in main.go, including a new function startTracing to initialize the tracer with a configurable endpoint.
- Updated the environment attribute from "testing" to "develop" for better clarity in tracing.
- Integrated tracing into the RTSP connection process in gortsplib.go by creating a span for the Connect method.
- Enhanced the Bootstrap function in Kerberos.go to include tracing, marking the start and end of the bootstrap process.
- Introduced a new span in RunAgent to trace the execution flow and ensure proper span management.
2025-06-20 09:35:13 +00:00
Cédric Verstraeten
c50137f255 Comment out OpenTelemetry tracing initialization in main.go to simplify the codebase and remove unused functionality. 2025-06-16 10:30:02 +00:00
Cédric Verstraeten
f12da749b2 Remove OpenTelemetry tracing code from main.go and Kerberos.go files to simplify the codebase and eliminate unused dependencies. 2025-06-16 10:08:55 +00:00
Cédric Verstraeten
a166083423 Update machinery/src/packets/stream.go
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-06-16 10:20:43 +02:00
Cédric Verstraeten
b400d4e773 Refactor Dockerfile build commands to streamline Go build process and improve clarity 2025-06-16 06:42:08 +00:00
Cédric Verstraeten
120054d3e5 Add SampleRate and Channels fields to IPCamera configuration and update audio stream handling 2025-06-16 06:37:19 +00:00
cedricve
620117c31b Refactor WriteToTrack to use updated PacketTimestamp for video and audio samples, improving synchronization accuracy. 2025-06-07 22:12:15 +00:00
cedricve
4e371488c1 Remove unnecessary copy of mp4fragment in Dockerfile, streamlining the agent setup process. 2025-06-07 21:22:49 +00:00
cedricve
b154b56308 Refactor Dockerfile to remove CGO_ENABLED=0 from build command, simplifying the build process for the agent. 2025-06-07 21:17:25 +00:00
cedricve
6d92817237 Refactor HandleRecordStream to adjust maxRecordingPeriod calculation for improved timing accuracy. Simplify mp4 segment encoding logic to ensure it always attempts to encode the last segment, enhancing error handling. 2025-06-07 12:30:42 +00:00
cedricve
b8c1855830 Refactor HandleRecordStream to use milliseconds for timing calculations, improving accuracy in recording periods and motion detection logic. Update mp4 encoding to ensure segment encoding only occurs if a segment exists, preventing potential panics. 2025-06-07 11:53:03 +00:00
cedricve
a9f7ff4b72 Refactor HandleRecordStream to remove unused mp4.Movmuxer and streamline video sample handling with mp4Video, enhancing recording process and error logging. 2025-06-07 06:26:23 +00:00
Cédric Verstraeten
b3cd080e14 Refactor Dockerfile and main.go to enhance build process and streamline video handling 2025-06-06 15:14:45 +00:00
Cédric Verstraeten
bfde87f888 Refactor WriteToTrack to improve sample handling by using last processed audio and video samples, enhancing buffer duration calculation and streamlining packet processing. 2025-06-06 14:36:19 +00:00
Cédric Verstraeten
c4453bb8b3 Fix packet handling in WriteToTrack to ensure proper processing of next packets on timeout and empty data 2025-06-06 13:36:30 +00:00
Cédric Verstraeten
40f65a30b3 Clarify audio transcoding process in WriteToTrack with detailed comments on AAC to PCM_MULAW conversion 2025-06-06 13:33:28 +00:00
Cédric Verstraeten
5361de63e0 Refactor packet handling in WriteToTrack to improve buffer duration calculation and streamline packet reading 2025-06-06 13:23:09 +00:00
Cédric Verstraeten
3a8552d362 Enhance MP4 handling by updating track IDs in fragment creation, improving H264 and H265 NAL unit conversion, and adding support for HVC1 compatible brands in the ftyp box 2025-06-05 14:48:19 +00:00
Cédric Verstraeten
d3840103fc Add VPS NALUs support in IPCamera configuration and MP4 handling for improved video processing 2025-06-05 13:28:10 +00:00
Cédric Verstraeten
d12a9f0612 Refactor MP4 handling by simplifying Close method and adding last sample DTS tracking for better audio and video sample management 2025-06-05 10:59:44 +00:00
cedricve
c0d74f7e09 Remove placeholder comments from AddSampleToTrack and Close methods for cleaner code 2025-06-04 19:23:48 +00:00
cedricve
8ebea9e4c5 Refactor MP4 struct by removing unused video and audio fragment fields, and enhance track handling in Close method for better audio and subtitle track management 2025-06-04 19:03:58 +00:00
cedricve
89269caf92 Refactor AddSampleToTrack and SplitAACFrame methods to enhance audio sample handling and improve error logging 2025-06-04 18:36:00 +00:00
Cédric Verstraeten
0c83170f51 Fix AAC descriptor index in Close method to ensure correct audio track setup 2025-06-04 13:15:08 +00:00
Cédric Verstraeten
6081cb4be9 Update mp4.go 2025-06-04 14:39:44 +02:00
Cédric Verstraeten
ea1dbb3087 Refactor AddSampleToTrack method to improve AAC frame handling by splitting frames and updating duration calculations for audio samples 2025-06-04 09:49:29 +00:00
Cédric Verstraeten
0523208d36 Update mp4.go 2025-06-04 11:28:16 +02:00
Cédric Verstraeten
919f21b48b Refactor AddSampleToTrack method to create separate video and audio fragments, enhancing sample handling and improving error logging for AAC frames 2025-06-04 08:45:54 +00:00
cedricve
2c1c10a2ac Refactor AddSampleToTrack and Close methods to improve sample handling and track management for video and audio 2025-06-03 20:33:00 +00:00
cedricve
7e3320b252 Refactor AddSampleToTrack method to remove duration parameter and enhance fragment handling for video and audio tracks 2025-06-03 19:18:16 +00:00
Cédric Verstraeten
35ccac8b65 Refactor MP4 fragment handling in AddSampleToTrack method to separate video and audio fragments for improved track management 2025-06-03 13:29:36 +00:00
Cédric Verstraeten
dad8165d11 Enhance sample handling in AddSampleToTrack method to support multiple packets and improve error logging 2025-06-03 12:30:03 +00:00
Cédric Verstraeten
ba54188de2 Refactor video and audio track handling in MP4 structure to store track names and return track IDs for better management 2025-06-03 10:23:14 +00:00
cedricve
3b440c9905 Add audio and video codec detection in HandleRecordStream function 2025-06-03 06:27:25 +00:00
cedricve
42b98b7f20 Update mp4.go 2025-06-03 08:25:51 +02:00
cedricve
ba3312b57c Refactor AddSampleToTrack method to return error instead of panicking for better error handling 2025-06-03 05:55:23 +00:00
cedricve
223ba255e9 Fix signature handling in MP4 closing logic to ensure valid signatures are used for fingerprint 2025-06-02 17:45:05 +00:00
Cédric Verstraeten
a1df2be207 Implement signing feature with default private key configuration and update MP4 closing logic to include fingerprint signing 2025-06-02 16:02:06 +00:00
Cédric Verstraeten
d7f225ca73 Add signing configuration placeholder to the agent's config 2025-06-02 14:08:47 +00:00
Cédric Verstraeten
b3cfabb5df Update signing configuration to use private key for recording validation 2025-06-02 14:06:16 +00:00
Cédric Verstraeten
5310dd4550 Add signing configuration options to the agent 2025-06-02 13:50:48 +00:00
Cédric Verstraeten
cde7dbb58a Add configuration options for signing recordings and public key usage 2025-06-02 13:41:15 +00:00
Cédric Verstraeten
65e68231c7 Refactor MP4 handling in capture and video modules
- Updated the HandleRecordStream function to use TimeLegacy for packet timestamps instead of the previous Time conversion method.
- Modified the MP4 struct to replace InitSegment with a list of MediaSegments, allowing for better management of segments.
- Introduced StartTime to the MP4 struct to track the creation time of the MP4 file.
- Enhanced the Close method in the MP4 struct to properly handle segment indexing (SIDX) and ensure accurate duration calculations.
- Implemented helper functions to fill SIDX boxes and find segment data, improving the overall structure and readability of the code.
2025-06-02 12:27:22 +00:00
Cédric Verstraeten
5502555869 Integrate OpenTelemetry tracing in main and components, enhancing observability 2025-06-02 07:30:49 +00:00
cedricve
ad6e7e752f Refactor MP4 handling to remove commented-out track additions and enhance moov box management 2025-06-02 07:15:24 +00:00
cedricve
63af4660ef Refactor MP4 initialization and closing logic to improve segment handling and add custom UUID support 2025-06-01 20:07:36 +00:00
cedricve
24fc340001 Refactor MP4 initialization and sample addition logic to enhance duration handling and segment management 2025-05-30 19:06:56 +00:00
cedricve
78d786b69d Add custom UUID box and enhance MP4 file closing logic 2025-05-29 10:14:43 +00:00
cedricve
756aeaa0eb Refactor MP4 handling to improve sample addition and duration calculation 2025-05-28 18:36:34 +00:00
cedricve
055fb67d7a Update mp4.go 2025-05-26 21:59:23 +02:00
cedricve
bee522a6bf Refactor MP4 handling to improve sample addition and segment management 2025-05-26 06:00:17 +00:00
Cédric Verstraeten
3fbf59c622 Merge pull request #192 from kerberos-io/fix/do-not-add-aac-track
fix/add audio codec handling in HandleRecordStream function
2025-05-22 21:07:28 +02:00
cedricve
abd8b8b605 Add audio codec handling in HandleRecordStream function 2025-05-22 18:33:13 +00:00
cedricve
abdad47bf3 Add MP4 video handling and update IPCamera configuration
- Introduced a new video package with MP4 struct for video file handling.
- Updated IPCamera struct to include SPS and PPS NALUs.
- Enhanced stream handling in the capture process to utilize the new MP4 library.
- Added stream index management for better tracking of video and audio streams.
2025-05-22 05:53:33 +00:00
Cédric Verstraeten
d2c24edf5d Merge pull request #190 from kerberos-io/feature/update-workflow-do-not-push-to-latest
Update Docker build workflow to use input tag for image naming
2025-05-20 16:05:04 +02:00
Cédric Verstraeten
22f4a7f119 Update Docker build workflow to use input tag for image naming 2025-05-20 14:03:44 +00:00
Cédric Verstraeten
a25d3d32e4 Merge pull request #189 from kerberos-io/feature/allow-release-workflow-to-triggered-manually
feature/Enhance release workflow to include tag input for Docker image
2025-05-20 14:46:26 +02:00
Cédric Verstraeten
ed68c32e04 Enhance release workflow to include tag input for Docker image 2025-05-20 12:45:52 +00:00
Cédric Verstraeten
4114b3839a Merge pull request #187 from kerberos-io/upgrade/base-image
Update base image version in Dockerfile
2025-05-19 15:22:36 +02:00
Cédric Verstraeten
3f73c009fd Update Dockerfile
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-05-19 15:15:33 +02:00
Cédric Verstraeten
02fb70c76e Update base image version in Dockerfile 2025-05-19 14:52:28 +02:00
Cédric Verstraeten
aaddcb854d Merge pull request #185 from kerberos-io/feature/retry-windows-secondary-vault
Feature/retry windows secondary vault
2025-05-17 21:40:58 +02:00
cedricve
e73c7a6ecc Remove kstorageRetryPolicy from configuration 2025-05-17 19:37:07 +00:00
cedricve
1dc2202f37 Enhance logging for secondary Kerberos Vault upload process 2025-05-17 19:29:35 +00:00
cedricve
ac710ae1f5 Fix typo in Kerberos Vault max retries translation key 2025-05-17 19:16:27 +00:00
cedricve
f5ea82ff03 Add Kerberos Vault settings for max retries and timeout configuration 2025-05-17 19:14:02 +00:00
cedricve
ef52325240 Update Kerberos Vault configuration for max retries and timeout; adjust upload delay 2025-05-17 08:37:40 +00:00
cedricve
354855feb1 Refactor Kerberos Vault configuration for retry policy consistency 2025-05-17 08:23:32 +00:00
cedricve
c4cd25b588 Add Kerberos Vault configuration options and retry policy support 2025-05-17 08:21:28 +00:00
cedricve
dbb870229e Update config.json 2025-05-16 19:00:33 +02:00
cedricve
a66fe8c054 Merge branch 'master' into feature/retry-windows-secondary-vault 2025-05-16 19:00:13 +02:00
Cédric Verstraeten
2352431c79 Merge pull request #184 from kerberos-io/upgrade/gortsplib
upgrade/dependencies
2025-05-16 18:54:45 +02:00
cedricve
49bc168812 Refactor code structure for improved readability and maintainability 2025-05-16 15:53:40 +00:00
cedricve
98f1ebf20a Add retry policy for Kerberos Vault uploads and update configuration model 2025-05-16 15:50:59 +00:00
cedricve
65feb6d182 Add initial configuration file for agent settings 2025-05-15 12:20:04 +00:00
cedricve
58555d352f Update .gitignore and launch.json to reference .env.local instead of .env 2025-05-15 10:42:01 +00:00
Cédric Verstraeten
839a177cf0 Merge branch 'master' into feature/retry-windows-secondary-vault 2025-05-14 14:57:53 +02:00
Cédric Verstraeten
404517ec40 Merge pull request #183 from kerberos-io/cedricve-patch-1
Create .env
2025-05-14 14:56:46 +02:00
Cédric Verstraeten
035bd18bc2 Create .env 2025-05-14 14:56:31 +02:00
Cédric Verstraeten
8bf7a0d244 Update devcontainer.json 2025-05-14 14:53:41 +02:00
Cédric Verstraeten
607d8fd0d1 Merge pull request #182 from kerberos-io/feature/retry-windows-secondary-vault
Remove .env + config file, we will manually add as these are part of the .gitignore
2025-05-14 14:52:15 +02:00
Cédric Verstraeten
12807e289c remove .env + config file, we will manually add as these are part of the .gitignore 2025-05-14 14:36:16 +02:00
44 changed files with 5421 additions and 1230 deletions


@@ -2,6 +2,10 @@
// README at: https://github.com/devcontainers/templates/tree/main/src/python
{
"name": "go:1.24-bookworm",
"runArgs": [
"--name=agent",
"--network=host"
],
"dockerFile": "Dockerfile",
"customizations": {
"vscode": {


@@ -1,58 +0,0 @@
name: Docker development build
on:
push:
branches: [develop]
jobs:
build-amd64:
runs-on: ubuntu-latest
strategy:
matrix:
architecture: [amd64]
steps:
- name: Login to DockerHub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Checkout
uses: actions/checkout@v3
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Available platforms
run: echo ${{ steps.buildx.outputs.platforms }}
- name: Run Buildx
run: docker buildx build --platform linux/${{matrix.architecture}} -t kerberos/agent-dev:arch-$(echo ${{matrix.architecture}} | tr / -)-$(echo $GITHUB_SHA | cut -c1-7) --push .
- name: Create new and append to manifest
run: docker buildx imagetools create -t kerberos/agent-dev:$(echo $GITHUB_SHA | cut -c1-7) kerberos/agent-dev:arch-$(echo ${{matrix.architecture}} | tr / -)-$(echo $GITHUB_SHA | cut -c1-7)
- name: Create new and append to latest manifest
run: docker buildx imagetools create -t kerberos/agent-dev:latest kerberos/agent-dev:arch-$(echo ${{matrix.architecture}} | tr / -)-$(echo $GITHUB_SHA | cut -c1-7)
build-other:
runs-on: ubuntu-latest
strategy:
matrix:
#architecture: [arm64, arm/v7, arm/v6]
architecture: [arm64, arm/v7]
steps:
- name: Login to DockerHub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Checkout
uses: actions/checkout@v3
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Available platforms
run: echo ${{ steps.buildx.outputs.platforms }}
- name: Run Buildx
run: docker buildx build --platform linux/${{matrix.architecture}} -t kerberos/agent-dev:arch-$(echo ${{matrix.architecture}} | tr / -)-$(echo $GITHUB_SHA | cut -c1-7) --push .
- name: Create new and append to manifest
run: docker buildx imagetools create --append -t kerberos/agent-dev:$(echo $GITHUB_SHA | cut -c1-7) kerberos/agent-dev:arch-$(echo ${{matrix.architecture}} | tr / -)-$(echo $GITHUB_SHA | cut -c1-7)
- name: Create new and append to manifest latest
run: docker buildx imagetools create --append -t kerberos/agent-dev:latest kerberos/agent-dev:arch-$(echo ${{matrix.architecture}} | tr / -)-$(echo $GITHUB_SHA | cut -c1-7)


@@ -1,113 +0,0 @@
name: Release
on:
release:
types: [created]
env:
REPO: kerberos/agent
jobs:
build-amd64:
runs-on: ubuntu-latest
permissions:
contents: write
strategy:
matrix:
architecture: [amd64]
steps:
- name: Login to DockerHub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Checkout
uses: actions/checkout@v3
- uses: benjlevesque/short-sha@v2.1
id: short-sha
with:
length: 7
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Available platforms
run: echo ${{ steps.buildx.outputs.platforms }}
- name: Run Buildx
run: docker buildx build --platform linux/$(echo ${{matrix.architecture}} | tr - /) -t $REPO-arch:arch-${{matrix.architecture}}-${{github.ref_name}} --push .
- name: Create new and append to manifest
run: docker buildx imagetools create -t $REPO:${{ github.ref_name }} $REPO-arch:arch-${{matrix.architecture}}-${{github.ref_name}}
- name: Create new and append to manifest latest
run: docker buildx imagetools create -t $REPO:latest $REPO-arch:arch-${{matrix.architecture}}-${{github.ref_name}}
- name: Run Buildx with output
run: docker buildx build --platform linux/$(echo ${{matrix.architecture}} | tr - /) -t $REPO-arch:arch-$(echo ${{matrix.architecture}} | tr / -)-${{github.ref_name}} --output type=tar,dest=output-${{matrix.architecture}}.tar .
- name: Strip binary
run: mkdir -p output/ && tar -xf output-${{matrix.architecture}}.tar -C output && rm output-${{matrix.architecture}}.tar && cd output/ && tar -cf ../agent-${{matrix.architecture}}.tar -C home/agent . && rm -rf output
- name: Create a release
uses: ncipollo/release-action@v1
with:
latest: true
allowUpdates: true
name: ${{ github.ref_name }}
tag: ${{ github.ref_name }}
generateReleaseNotes: false
omitBodyDuringUpdate: true
artifacts: "agent-${{matrix.architecture}}.tar"
# Taken from GoReleaser's own release workflow.
# The available Snapcraft Action has some bugs described in the issue below.
# The mkdirs are a hack for https://github.com/goreleaser/goreleaser/issues/1715.
#- name: Setup Snapcraft
# run: |
# sudo apt-get update
# sudo apt-get -yq --no-install-suggests --no-install-recommends install snapcraft
# mkdir -p $HOME/.cache/snapcraft/download
# mkdir -p $HOME/.cache/snapcraft/stage-packages
#- name: Use Snapcraft
# run: tar -xf agent-${{matrix.architecture}}.tar && snapcraft
build-other:
runs-on: ubuntu-latest
permissions:
contents: write
needs: build-amd64
strategy:
matrix:
architecture: [arm64, arm-v7, arm-v6]
#architecture: [arm64, arm-v7]
steps:
- name: Login to DockerHub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Checkout
uses: actions/checkout@v3
- uses: benjlevesque/short-sha@v2.1
id: short-sha
with:
length: 7
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Available platforms
run: echo ${{ steps.buildx.outputs.platforms }}
- name: Run Buildx
run: docker buildx build --platform linux/$(echo ${{matrix.architecture}} | tr - /) -t $REPO-arch:arch-${{matrix.architecture}}-${{github.ref_name}} --push .
- name: Create new and append to manifest
run: docker buildx imagetools create --append -t $REPO:${{ github.ref_name }} $REPO-arch:arch-${{matrix.architecture}}-${{github.ref_name}}
- name: Create new and append to manifest latest
run: docker buildx imagetools create --append -t $REPO:latest $REPO-arch:arch-${{matrix.architecture}}-${{github.ref_name}}
- name: Run Buildx with output
run: docker buildx build --platform linux/$(echo ${{matrix.architecture}} | tr - /) -t $REPO-arch:arch-$(echo ${{matrix.architecture}} | tr / -)-${{github.ref_name}} --output type=tar,dest=output-${{matrix.architecture}}.tar .
- name: Strip binary
run: mkdir -p output/ && tar -xf output-${{matrix.architecture}}.tar -C output && rm output-${{matrix.architecture}}.tar && cd output/ && tar -cf ../agent-${{matrix.architecture}}.tar -C home/agent . && rm -rf output
- name: Create a release
uses: ncipollo/release-action@v1
with:
latest: true
allowUpdates: true
name: ${{ github.ref_name }}
tag: ${{ github.ref_name }}
generateReleaseNotes: false
omitBodyDuringUpdate: true
artifacts: "agent-${{matrix.architecture}}.tar"


@@ -0,0 +1,51 @@
name: Create User Story Issue
on:
workflow_dispatch:
inputs:
issue_title:
description: 'Title for the issue'
required: true
issue_description:
description: 'Brief description of the feature'
required: true
complexity:
description: 'Complexity of the feature'
required: true
type: choice
options:
- 'Low'
- 'Medium'
- 'High'
default: 'Medium'
duration:
description: 'Estimated duration'
required: true
type: choice
options:
- '1 day'
- '3 days'
- '1 week'
- '2 weeks'
- '1 month'
default: '1 week'
jobs:
create-issue:
runs-on: ubuntu-latest
permissions:
issues: write
steps:
- name: Create Issue with User Story
uses: cedricve/llm-create-issue-user-story@main
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
azure_openai_api_key: ${{ secrets.AZURE_OPENAI_API_KEY }}
azure_openai_endpoint: ${{ secrets.AZURE_OPENAI_ENDPOINT }}
azure_openai_version: ${{ secrets.AZURE_OPENAI_VERSION }}
openai_model: ${{ secrets.OPENAI_MODEL }}
issue_title: ${{ github.event.inputs.issue_title }}
issue_description: ${{ github.event.inputs.issue_description }}
complexity: ${{ github.event.inputs.complexity }}
duration: ${{ github.event.inputs.duration }}
labels: 'user-story,feature'
assignees: ${{ github.actor }}


@@ -1,12 +1,14 @@
name: Docker nightly build
name: Nightly build
on:
# Triggers the workflow every day at 9PM (CET).
schedule:
- cron: "0 22 * * *"
# Allows manual triggering from the Actions tab.
workflow_dispatch:
jobs:
build-amd64:
nightly-build-amd64:
runs-on: ubuntu-latest
strategy:
matrix:
@@ -18,7 +20,9 @@ jobs:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Checkout
run: git clone https://github.com/kerberos-io/agent && cd agent
uses: actions/checkout@v4
with:
ref: master
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
@@ -26,10 +30,10 @@ jobs:
- name: Available platforms
run: echo ${{ steps.buildx.outputs.platforms }}
- name: Run Buildx
run: cd agent && docker buildx build --platform linux/${{matrix.architecture}} -t kerberos/agent-nightly:arch-$(echo ${{matrix.architecture}} | tr / -)-$(echo $GITHUB_SHA | cut -c1-7) --push .
run: docker buildx build --platform linux/${{matrix.architecture}} -t kerberos/agent-nightly:arch-$(echo ${{matrix.architecture}} | tr / -)-$(echo $GITHUB_SHA | cut -c1-7) --push .
- name: Create new and append to manifest
run: cd agent && docker buildx imagetools create -t kerberos/agent-nightly:$(echo $GITHUB_SHA | cut -c1-7) kerberos/agent-nightly:arch-$(echo ${{matrix.architecture}} | tr / -)-$(echo $GITHUB_SHA | cut -c1-7)
build-other:
run: docker buildx imagetools create -t kerberos/agent-nightly:$(echo $GITHUB_SHA | cut -c1-7) kerberos/agent-nightly:arch-$(echo ${{matrix.architecture}} | tr / -)-$(echo $GITHUB_SHA | cut -c1-7)
nightly-build-other:
runs-on: ubuntu-latest
strategy:
matrix:
@@ -41,7 +45,9 @@ jobs:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Checkout
run: git clone https://github.com/kerberos-io/agent && cd agent
uses: actions/checkout@v4
with:
ref: master
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
@@ -49,6 +55,6 @@ jobs:
- name: Available platforms
run: echo ${{ steps.buildx.outputs.platforms }}
- name: Run Buildx
run: cd agent && docker buildx build --platform linux/${{matrix.architecture}} -t kerberos/agent-nightly:arch-$(echo ${{matrix.architecture}} | tr / -)-$(echo $GITHUB_SHA | cut -c1-7) --push .
run: docker buildx build --platform linux/${{matrix.architecture}} -t kerberos/agent-nightly:arch-$(echo ${{matrix.architecture}} | tr / -)-$(echo $GITHUB_SHA | cut -c1-7) --push .
- name: Create new and append to manifest
run: cd agent && docker buildx imagetools create --append -t kerberos/agent-nightly:$(echo $GITHUB_SHA | cut -c1-7) kerberos/agent-nightly:arch-$(echo ${{matrix.architecture}} | tr / -)-$(echo $GITHUB_SHA | cut -c1-7)
run: docker buildx imagetools create --append -t kerberos/agent-nightly:$(echo $GITHUB_SHA | cut -c1-7) kerberos/agent-nightly:arch-$(echo ${{matrix.architecture}} | tr / -)-$(echo $GITHUB_SHA | cut -c1-7)

.github/workflows/pr-build.yml (vendored, new file, +75 lines)

@@ -0,0 +1,75 @@
name: Build pull request
on:
pull_request:
types: [opened, synchronize]
env:
REPO: kerberos/agent
jobs:
build-amd64:
runs-on: ubuntu-24.04
permissions:
contents: write
strategy:
matrix:
architecture: [amd64]
steps:
- name: Login to DockerHub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Checkout
uses: actions/checkout@v3
- uses: benjlevesque/short-sha@v2.1
id: short-sha
with:
length: 7
- name: Run Build
run: |
docker build -t ${{matrix.architecture}} .
CID=$(docker create ${{matrix.architecture}})
docker cp ${CID}:/home/agent ./output-${{matrix.architecture}}
docker rm ${CID}
- name: Strip binary
run: tar -cf agent-${{matrix.architecture}}.tar -C output-${{matrix.architecture}} . && rm -rf output-${{matrix.architecture}}
- name: Upload artifact
uses: actions/upload-artifact@v4
with:
name: agent-${{matrix.architecture}}.tar
path: agent-${{matrix.architecture}}.tar
build-arm64:
runs-on: ubuntu-24.04-arm
permissions:
contents: write
strategy:
matrix:
architecture: [arm64]
steps:
- name: Login to DockerHub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Checkout
uses: actions/checkout@v3
- uses: benjlevesque/short-sha@v2.1
id: short-sha
with:
length: 7
- name: Run Build
run: |
docker build -t ${{matrix.architecture}} -f Dockerfile.arm64 .
CID=$(docker create ${{matrix.architecture}})
docker cp ${CID}:/home/agent ./output-${{matrix.architecture}}
docker rm ${CID}
- name: Strip binary
run: tar -cf agent-${{matrix.architecture}}.tar -C output-${{matrix.architecture}} . && rm -rf output-${{matrix.architecture}}
- name: Upload artifact
uses: actions/upload-artifact@v4
with:
name: agent-${{matrix.architecture}}.tar
path: agent-${{matrix.architecture}}.tar


@@ -2,6 +2,11 @@ name: Autofill PR description
on: pull_request
env:
ORGANIZATION: uugai
PROJECT: ${{ github.event.repository.name }}
PR_NUMBER: ${{ github.event.number }}
jobs:
openai-pr-description:
runs-on: ubuntu-22.04
@@ -16,4 +21,6 @@ jobs:
azure_openai_api_key: ${{ secrets.AZURE_OPENAI_API_KEY }}
azure_openai_endpoint: ${{ secrets.AZURE_OPENAI_ENDPOINT }}
azure_openai_version: ${{ secrets.AZURE_OPENAI_VERSION }}
openai_model: ${{ secrets.OPENAI_MODEL }}
pull_request_url: https://pr${{ env.PR_NUMBER }}.api.kerberos.lol
overwrite_description: true

.github/workflows/release-create.yml (vendored, new file, +130 lines)

@@ -0,0 +1,130 @@
name: Create a new release
on:
release:
types: [created]
workflow_dispatch:
inputs:
tag:
description: "Tag for the Docker image"
required: true
default: "test"
env:
REPO: kerberos/agent
jobs:
build-amd64:
runs-on: ubuntu-24.04
permissions:
contents: write
strategy:
matrix:
architecture: [amd64]
steps:
- name: Login to DockerHub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Checkout
uses: actions/checkout@v3
- uses: benjlevesque/short-sha@v2.1
id: short-sha
with:
length: 7
- name: Run Build
run: |
docker build --provenance=false --build-arg VERSION=${{github.event.inputs.tag || github.ref_name}} -t ${{matrix.architecture}} .
CID=$(docker create ${{matrix.architecture}})
docker cp ${CID}:/home/agent ./output-${{matrix.architecture}}
docker rm ${CID}
- name: Strip binary
run: tar -cf agent-${{matrix.architecture}}.tar -C output-${{matrix.architecture}} . && rm -rf output-${{matrix.architecture}}
- name: Build and push Docker image
run: |
docker tag ${{matrix.architecture}} $REPO-arch:arch-${{matrix.architecture}}-${{github.event.inputs.tag || github.ref_name}}
docker push $REPO-arch:arch-${{matrix.architecture}}-${{github.event.inputs.tag || github.ref_name}}
- name: Upload artifact
uses: actions/upload-artifact@v4
with:
name: agent-${{matrix.architecture}}.tar
path: agent-${{matrix.architecture}}.tar
build-arm64:
runs-on: ubuntu-24.04-arm
permissions:
contents: write
strategy:
matrix:
architecture: [arm64]
steps:
- name: Login to DockerHub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Checkout
uses: actions/checkout@v3
- uses: benjlevesque/short-sha@v2.1
id: short-sha
with:
length: 7
- name: Run Build
run: |
docker build --provenance=false --build-arg VERSION=${{github.event.inputs.tag || github.ref_name}} -t ${{matrix.architecture}} -f Dockerfile.arm64 .
CID=$(docker create ${{matrix.architecture}})
docker cp ${CID}:/home/agent ./output-${{matrix.architecture}}
docker rm ${CID}
- name: Strip binary
run: tar -cf agent-${{matrix.architecture}}.tar -C output-${{matrix.architecture}} . && rm -rf output-${{matrix.architecture}}
- name: Build and push Docker image
run: |
docker tag ${{matrix.architecture}} $REPO-arch:arch-${{matrix.architecture}}-${{github.event.inputs.tag || github.ref_name}}
docker push $REPO-arch:arch-${{matrix.architecture}}-${{github.event.inputs.tag || github.ref_name}}
- name: Upload artifact
uses: actions/upload-artifact@v4
with:
name: agent-${{matrix.architecture}}.tar
path: agent-${{matrix.architecture}}.tar
create-manifest:
runs-on: ubuntu-24.04
needs: [build-amd64, build-arm64]
steps:
- name: Login to DockerHub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Create and push multi-arch manifest
run: |
docker manifest create $REPO:${{ github.event.inputs.tag || github.ref_name }} \
$REPO-arch:arch-amd64-${{github.event.inputs.tag || github.ref_name}} \
$REPO-arch:arch-arm64-${{github.event.inputs.tag || github.ref_name}}
docker manifest push $REPO:${{ github.event.inputs.tag || github.ref_name }}
- name: Create and push latest manifest
run: |
docker manifest create $REPO:latest \
$REPO-arch:arch-amd64-${{github.event.inputs.tag || github.ref_name}} \
$REPO-arch:arch-arm64-${{github.event.inputs.tag || github.ref_name}}
docker manifest push $REPO:latest
if: github.event.inputs.tag == 'test'
create-release:
runs-on: ubuntu-24.04
needs: [build-amd64, build-arm64]
permissions:
contents: write
steps:
- name: Download all artifacts
uses: actions/download-artifact@v4
- name: Create a release
uses: ncipollo/release-action@v1
with:
latest: true
allowUpdates: true
name: ${{ github.event.inputs.tag || github.ref_name }}
tag: ${{ github.event.inputs.tag || github.ref_name }}
generateReleaseNotes: false
omitBodyDuringUpdate: true
artifacts: "agent-*.tar/agent-*.tar"

.gitignore (vendored, 7 changes)

@@ -1,6 +1,8 @@
ui/node_modules
ui/build
ui/public/assets/env.js
.DS_Store
__debug*
.idea
machinery/www
yarn.lock
@@ -10,6 +12,7 @@ machinery/data/recordings
machinery/data/snapshots
machinery/test*
machinery/init-dev.sh
machinery/.env
machinery/.env.local
machinery/vendor
deployments/docker/private-docker-compose.yaml
deployments/docker/private-docker-compose.yaml
video.mp4

.vscode/launch.json (vendored, 2 changes)

@@ -16,7 +16,7 @@
"-port",
"8080"
],
"envFile": "${workspaceFolder}/machinery/.env",
"envFile": "${workspaceFolder}/machinery/.env.local",
"buildFlags": "--tags dynamic",
},
{


@@ -1,6 +1,8 @@
FROM kerberos/base:af04230 AS build-machinery
LABEL AUTHOR=Kerberos.io
ARG BASE_IMAGE_VERSION=amd64-ddbe40e
ARG VERSION=0.0.0
FROM kerberos/base:${BASE_IMAGE_VERSION} AS build-machinery
LABEL AUTHOR=uug.ai
ENV GOROOT=/usr/local/go
ENV GOPATH=/go
@@ -33,7 +35,8 @@ RUN cat /go/src/github.com/kerberos-io/agent/machinery/version
RUN cd /go/src/github.com/kerberos-io/agent/machinery && \
go mod download && \
go build -tags timetzdata,netgo,osusergo --ldflags '-s -w -extldflags "-static -latomic"' main.go && \
VERSION=$(cd /go/src/github.com/kerberos-io/agent && git describe --tags --always 2>/dev/null || echo "${VERSION}") && \
go build -tags timetzdata,netgo,osusergo --ldflags "-s -w -X github.com/kerberos-io/agent/machinery/src/utils.VERSION=${VERSION} -extldflags '-static -latomic'" main.go && \
mkdir -p /agent && \
mv main /agent && \
mv version /agent && \
@@ -43,8 +46,7 @@ RUN cd /go/src/github.com/kerberos-io/agent/machinery && \
mkdir -p /agent/data/log && \
mkdir -p /agent/data/recordings && \
mkdir -p /agent/data/capture-test && \
mkdir -p /agent/data/config && \
rm -rf /go/src/gitlab.com/
mkdir -p /agent/data/config
####################################
# Let's create a /dist folder containing just the files necessary for runtime.
@@ -58,18 +60,6 @@ RUN cp -r /agent ./
RUN /dist/agent/main version
###############################################
# Build Bento4 -> we want fragmented mp4 files
ENV BENTO4_VERSION 1.6.0-641
RUN cd /tmp && git clone https://github.com/axiomatic-systems/Bento4 && cd Bento4 && \
git checkout tags/v${BENTO4_VERSION} && \
cd Build && \
cmake -DCMAKE_BUILD_TYPE=Release .. && \
make && \
mv /tmp/Bento4/Build/mp4fragment /dist/agent/ && \
rm -rf /tmp/Bento4
FROM node:18.14.0-alpine3.16 AS build-ui
RUN apk update && apk upgrade --available && sync
@@ -111,7 +101,6 @@ RUN apk update && apk add ca-certificates curl libstdc++ libc6-compat --no-cache
# Try running agent
RUN mv /agent/* /home/agent/
RUN cp /home/agent/mp4fragment /usr/local/bin/
RUN /home/agent/main version
#######################
@@ -148,4 +137,4 @@ HEALTHCHECK CMD curl --fail http://localhost:80 || exit 1
# Leeeeettttt'ssss goooooo!!!
# Run the shizzle from the right working directory.
WORKDIR /home/agent
CMD ["./main", "-action", "run", "-port", "80"]
CMD ["./main", "-action", "run", "-port", "80"]

Dockerfile.arm64 (new file, +140 lines)

@@ -0,0 +1,140 @@
ARG BASE_IMAGE_VERSION=arm64-ddbe40e
ARG VERSION=0.0.0
FROM kerberos/base:${BASE_IMAGE_VERSION} AS build-machinery
LABEL AUTHOR=uug.ai
ENV GOROOT=/usr/local/go
ENV GOPATH=/go
ENV PATH=$GOPATH/bin:$GOROOT/bin:/usr/local/lib:$PATH
ENV GOSUMDB=off
##########################################
# Installing some additional dependencies.
RUN apt-get upgrade -y && apt-get update && apt-get install -y --fix-missing --no-install-recommends \
git build-essential cmake pkg-config unzip libgtk2.0-dev \
curl ca-certificates libcurl4-openssl-dev libssl-dev libjpeg62-turbo-dev && \
rm -rf /var/lib/apt/lists/*
##############################################################################
# Copy all the relevant source code in the Docker image, so we can build this.
RUN mkdir -p /go/src/github.com/kerberos-io/agent
COPY machinery /go/src/github.com/kerberos-io/agent/machinery
RUN rm -rf /go/src/github.com/kerberos-io/agent/machinery/.env
##################################################################
# Get the latest commit hash, so we know which version we're running
COPY .git /go/src/github.com/kerberos-io/agent/.git
RUN cd /go/src/github.com/kerberos-io/agent/.git && git log --format="%H" -n 1 | head -c7 > /go/src/github.com/kerberos-io/agent/machinery/version
RUN cat /go/src/github.com/kerberos-io/agent/machinery/version
##################
# Build Machinery
RUN cd /go/src/github.com/kerberos-io/agent/machinery && \
go mod download && \
VERSION=$(cd /go/src/github.com/kerberos-io/agent && git describe --tags --always 2>/dev/null || echo "${VERSION}") && \
go build -tags timetzdata,netgo,osusergo --ldflags "-s -w -X github.com/kerberos-io/agent/machinery/src/utils.VERSION=${VERSION} -extldflags '-static -latomic'" main.go && \
mkdir -p /agent && \
mv main /agent && \
mv version /agent && \
mv data /agent && \
mkdir -p /agent/data/cloud && \
mkdir -p /agent/data/snapshots && \
mkdir -p /agent/data/log && \
mkdir -p /agent/data/recordings && \
mkdir -p /agent/data/capture-test && \
mkdir -p /agent/data/config
####################################
# Let's create a /dist folder containing just the files necessary for runtime.
# Later, it will be copied as the / (root) of the output image.
WORKDIR /dist
RUN cp -r /agent ./
####################################################################################
# This will collect dependent libraries so they're later copied to the final image.
RUN /dist/agent/main version
FROM node:18.14.0-alpine3.16 AS build-ui
RUN apk update && apk upgrade --available && sync
########################
# Build Web (React app)
RUN mkdir -p /go/src/github.com/kerberos-io/agent/machinery/www
COPY ui /go/src/github.com/kerberos-io/agent/ui
RUN cd /go/src/github.com/kerberos-io/agent/ui && rm -rf yarn.lock && yarn config set network-timeout 300000 && \
yarn && yarn build
####################################
# Let's create a /dist folder containing just the files necessary for runtime.
# Later, it will be copied as the / (root) of the output image.
WORKDIR /dist
RUN mkdir -p ./agent && cp -r /go/src/github.com/kerberos-io/agent/machinery/www ./agent/
############################################
# Publish main binary to GitHub release
FROM alpine:latest
############################
# Protect by non-root user.
RUN addgroup -S kerberosio && adduser -S agent -G kerberosio && addgroup agent video
#################################
# Copy files from previous images
COPY --chown=0:0 --from=build-machinery /dist /
COPY --chown=0:0 --from=build-ui /dist /
RUN apk update && apk add ca-certificates curl libstdc++ libc6-compat --no-cache && rm -rf /var/cache/apk/*
##################
# Try running agent
RUN mv /agent/* /home/agent/
RUN /home/agent/main version
#######################
# Make template config
RUN cp /home/agent/data/config/config.json /home/agent/data/config.template.json
###########################
# Set permissions correctly
RUN chown -R agent:kerberosio /home/agent/data
RUN chown -R agent:kerberosio /home/agent/www
###########################
# Grant the necessary root capabilities to the process trying to bind to the privileged port
RUN apk add libcap && setcap 'cap_net_bind_service=+ep' /home/agent/main
###################
# Run non-root user
USER agent
######################################
# By default the app runs on port 80
EXPOSE 80
######################################
# Check if agent is still running
HEALTHCHECK CMD curl --fail http://localhost:80 || exit 1
###################################################
# Leeeeettttt'ssss goooooo!!!
# Run the shizzle from the right working directory.
WORKDIR /home/agent
CMD ["./main", "-action", "run", "-port", "80"]


@@ -208,6 +208,8 @@ Next to attaching the configuration file, it is also possible to override the co
| `AGENT_REGION_POLYGON` | A single polygon set for motion detection: "x1,y1;x2,y2;x3,y3;..." | "" |
| `AGENT_CAPTURE_IPCAMERA_RTSP` | Full-HD RTSP endpoint to the camera you're targeting. | "" |
| `AGENT_CAPTURE_IPCAMERA_SUB_RTSP` | Sub-stream RTSP endpoint used for livestreaming (WebRTC). | "" |
| `AGENT_CAPTURE_IPCAMERA_BASE_WIDTH` | Force a specific width resolution for live view processing. | "" |
| `AGENT_CAPTURE_IPCAMERA_BASE_HEIGHT` | Force a specific height resolution for live view processing. | "" |
| `AGENT_CAPTURE_IPCAMERA_ONVIF` | Mark as a compliant ONVIF device. | "" |
| `AGENT_CAPTURE_IPCAMERA_ONVIF_XADDR` | ONVIF endpoint/address running on the camera. | "" |
| `AGENT_CAPTURE_IPCAMERA_ONVIF_USERNAME` | ONVIF username to authenticate against. | "" |
@@ -257,6 +259,9 @@ Next to attaching the configuration file, it is also possible to override the co
| `AGENT_ENCRYPTION_FINGERPRINT` | The fingerprint of the keypair (public/private keys), so you know which one to use. | "" |
| `AGENT_ENCRYPTION_PRIVATE_KEY` | The private key (asymmetric/RSA) to decrypt and sign requests sent over MQTT. | "" |
| `AGENT_ENCRYPTION_SYMMETRIC_KEY` | The symmetric key (AES) to encrypt and decrypt requests sent over MQTT. | "" |
| `AGENT_SIGNING` | Enable 'true' or disable 'false' for signing recordings. | "true" |
| `AGENT_SIGNING_PRIVATE_KEY` | The private key (RSA) to sign the recordings fingerprint to validate origin. | "" - uses default one if empty |
## Encryption

machinery/.DS_Store (binary file not shown)


@@ -1,4 +1,31 @@
AGENT_NAME=mycamera
AGENT_NAME=camera-name
AGENT_KEY=uniq-camera-id
AGENT_TIMEZONE=Europe/Brussels
AGENT_CAPTURE_IPCAMERA_RTSP=rtsp://fake.kerberos.io/stream
AGENT_CAPTURE_CONTINUOUS=true
#AGENT_CAPTURE_CONTINUOUS=true
#AGENT_CAPTURE_IPCAMERA_RTSP=rtsp://fake.kerberos.io/stream
#AGENT_CAPTURE_IPCAMERA_SUB_RTSP=rtsp://fake.kerberos.io/stream
AGENT_CAPTURE_IPCAMERA_ONVIF_XADDR=x.x.x.x
AGENT_CAPTURE_IPCAMERA_ONVIF_USERNAME=xxx
AGENT_CAPTURE_IPCAMERA_ONVIF_PASSWORD=xxx
AGENT_HUB_URI=https://api.cloud.kerberos.io
AGENT_HUB_KEY=AKIXxxx4JBEI
AGENT_HUB_PRIVATE_KEY=DIOXxxxAlYpaxxxxXioL0txxx
AGENT_HUB_SITE=681xxxxxxx9bcda5
# By default recordings are sent to the Hub (S3). To send to Kerberos Vault instead, set to "kstorage"
AGENT_CLOUD=s3
AGENT_KERBEROSVAULT_URI=
AGENT_KERBEROSVAULT_PROVIDER=
AGENT_KERBEROSVAULT_DIRECTORY=
AGENT_KERBEROSVAULT_ACCESS_KEY=
AGENT_KERBEROSVAULT_SECRET_KEY=
AGENT_KERBEROSVAULT_MAX_RETRIES=10
AGENT_KERBEROSVAULT_TIMEOUT=120
AGENT_KERBEROSVAULT_SECONDARY_URI=
AGENT_KERBEROSVAULT_SECONDARY_PROVIDER=
AGENT_KERBEROSVAULT_SECONDARY_DIRECTORY=
AGENT_KERBEROSVAULT_SECONDARY_ACCESS_KEY=
AGENT_KERBEROSVAULT_SECONDARY_SECRET_KEY=
# Open telemetry tracing endpoint
OTEL_EXPORTER_OTLP_ENDPOINT=


@@ -14,7 +14,9 @@
"ipcamera": {
"rtsp": "",
"sub_rtsp": "",
"fps": ""
"fps": "",
"base_width": 640,
"base_height": 0
},
"usbcamera": {
"device": ""
@@ -26,6 +28,7 @@
"recording": "true",
"snapshots": "true",
"liveview": "true",
"liveview_chunking": "false",
"motion": "true",
"postrecording": 20,
"prerecording": 10,
@@ -116,6 +119,7 @@
"hub_site": "",
"condition_uri": "",
"encryption": {},
"signing": {},
"realtimeprocessing": "false",
"realtimeprocessing_topic": ""
}


@@ -1,21 +1,24 @@
module github.com/kerberos-io/agent/machinery
go 1.24.1
go 1.24.2
replace google.golang.org/genproto => google.golang.org/genproto v0.0.0-20250519155744-55703ea1f237
require (
github.com/Eyevinn/mp4ff v0.48.0
github.com/InVisionApp/conjungo v1.1.0
github.com/appleboy/gin-jwt/v2 v2.10.3
github.com/bluenviron/gortsplib/v4 v4.13.0
github.com/bluenviron/gortsplib/v4 v4.14.1
github.com/bluenviron/mediacommon v1.14.0
github.com/cedricve/go-onvif v0.0.0-20200222191200-567e8ce298f6
github.com/dromara/carbon/v2 v2.6.2
github.com/dromara/carbon/v2 v2.6.8
github.com/dropbox/dropbox-sdk-go-unofficial/v6 v6.0.5
github.com/eclipse/paho.mqtt.golang v1.5.0
github.com/elastic/go-sysinfo v1.15.3
github.com/gin-contrib/cors v1.7.5
github.com/gin-contrib/pprof v1.5.3
github.com/gin-gonic/contrib v0.0.0-20250113154928-93b827325fec
github.com/gin-gonic/gin v1.10.0
github.com/gin-gonic/contrib v0.0.0-20250521004450-2b1292699c15
github.com/gin-gonic/gin v1.10.1
github.com/gofrs/uuid v4.4.0+incompatible
github.com/golang-jwt/jwt/v4 v4.5.2
github.com/gorilla/websocket v1.5.3
@@ -23,123 +26,90 @@ require (
github.com/kerberos-io/joy4 v1.0.64
github.com/kerberos-io/onvif v1.0.0
github.com/minio/minio-go/v6 v6.0.57
github.com/nfnt/resize v0.0.0-20180221191011-83c6a9932646
github.com/op/go-logging v0.0.0-20160315200505-970db520ece7
github.com/pion/rtp v1.8.13
github.com/pion/webrtc/v4 v4.0.14
github.com/pion/interceptor v0.1.40
github.com/pion/rtp v1.8.19
github.com/pion/webrtc/v4 v4.1.2
github.com/sirupsen/logrus v1.9.3
github.com/swaggo/files v1.0.1
github.com/swaggo/gin-swagger v1.6.0
github.com/swaggo/swag v1.16.4
github.com/tevino/abool v1.2.0
github.com/yapingcat/gomedia v0.0.0-20240906162731-17feea57090c
github.com/zaf/g711 v1.4.0
go.mongodb.org/mongo-driver v1.17.3
gopkg.in/DataDog/dd-trace-go.v1 v1.72.2
go.opentelemetry.io/otel v1.36.0
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.36.0
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.36.0
go.opentelemetry.io/otel/sdk v1.36.0
go.opentelemetry.io/otel/trace v1.36.0
gopkg.in/natefinch/lumberjack.v2 v2.2.1
)
require (
github.com/DataDog/appsec-internal-go v1.9.0 // indirect
github.com/DataDog/datadog-agent/pkg/obfuscate v0.58.0 // indirect
github.com/DataDog/datadog-agent/pkg/proto v0.58.0 // indirect
github.com/DataDog/datadog-agent/pkg/remoteconfig/state v0.58.0 // indirect
github.com/DataDog/datadog-agent/pkg/trace v0.58.0 // indirect
github.com/DataDog/datadog-agent/pkg/util/log v0.58.0 // indirect
github.com/DataDog/datadog-agent/pkg/util/scrubber v0.58.0 // indirect
github.com/DataDog/datadog-go/v5 v5.5.0 // indirect
github.com/DataDog/go-libddwaf/v3 v3.5.1 // indirect
github.com/DataDog/go-runtime-metrics-internal v0.0.4-0.20241206090539-a14610dc22b6 // indirect
github.com/DataDog/go-sqllexer v0.0.14 // indirect
github.com/DataDog/go-tuf v1.1.0-0.5.2 // indirect
github.com/DataDog/gostackparse v0.7.0 // indirect
github.com/DataDog/opentelemetry-mapping-go/pkg/otlp/attributes v0.20.0 // indirect
github.com/DataDog/sketches-go v1.4.5 // indirect
github.com/KyleBanks/depth v1.2.1 // indirect
github.com/Microsoft/go-winio v0.6.1 // indirect
github.com/PuerkitoBio/purell v1.1.1 // indirect
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578 // indirect
github.com/beevik/etree v1.2.0 // indirect
github.com/bluenviron/mediacommon/v2 v2.1.0 // indirect
github.com/bluenviron/mediacommon/v2 v2.2.0 // indirect
github.com/bytedance/sonic v1.13.2 // indirect
github.com/bytedance/sonic/loader v0.2.4 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/cihub/seelog v0.0.0-20170130134532-f561c5e57575 // indirect
github.com/cenkalti/backoff/v5 v5.0.2 // indirect
github.com/clbanning/mxj v1.8.4 // indirect
github.com/clbanning/mxj/v2 v2.7.0 // indirect
github.com/cloudwego/base64x v0.1.5 // indirect
github.com/dustin/go-humanize v1.0.1 // indirect
github.com/eapache/queue/v2 v2.0.0-20230407133247-75960ed334e4 // indirect
github.com/ebitengine/purego v0.6.0-alpha.5 // indirect
github.com/elastic/go-windows v1.0.2 // indirect
github.com/elgs/gostrgen v0.0.0-20161222160715-9d61ae07eeae // indirect
github.com/erikstmartin/go-testdb v0.0.0-20160219214506-8d10e4a1bae5 // indirect
github.com/gabriel-vasile/mimetype v1.4.8 // indirect
github.com/gin-contrib/sse v1.0.0 // indirect
github.com/go-ole/go-ole v1.2.6 // indirect
github.com/go-logr/logr v1.4.2 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-openapi/jsonpointer v0.19.5 // indirect
github.com/go-openapi/jsonreference v0.20.0 // indirect
github.com/go-openapi/jsonreference v0.19.6 // indirect
github.com/go-openapi/spec v0.20.4 // indirect
github.com/go-openapi/swag v0.19.15 // indirect
github.com/go-playground/locales v0.14.1 // indirect
github.com/go-playground/universal-translator v0.18.1 // indirect
github.com/go-playground/validator/v10 v10.26.0 // indirect
github.com/goccy/go-json v0.10.5 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/protobuf v1.5.4 // indirect
github.com/golang/snappy v0.0.4 // indirect
github.com/google/pprof v0.0.0-20230817174616-7a8ec2ada47b // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/hashicorp/go-secure-stdlib/parseutil v0.1.7 // indirect
github.com/hashicorp/go-secure-stdlib/strutil v0.1.2 // indirect
github.com/hashicorp/go-sockaddr v1.0.2 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.3 // indirect
github.com/icholy/digest v0.1.23 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/juju/errors v1.0.0 // indirect
github.com/klauspost/compress v1.17.4 // indirect
github.com/klauspost/compress v1.16.7 // indirect
github.com/klauspost/cpuid v1.2.3 // indirect
github.com/klauspost/cpuid/v2 v2.2.10 // indirect
github.com/kylelemons/go-gypsy v1.0.0 // indirect
github.com/leodido/go-urn v1.4.0 // indirect
github.com/lib/pq v1.10.2 // indirect
github.com/lufia/plan9stats v0.0.0-20220913051719-115f729f3c8c // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/lib/pq v1.10.9 // indirect
github.com/mailru/easyjson v0.7.6 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/minio/md5-simd v1.1.0 // indirect
github.com/minio/sha256-simd v0.1.1 // indirect
github.com/mitchellh/go-homedir v1.1.0 // indirect
github.com/mitchellh/mapstructure v1.5.1-0.20231216201459-8508981c8b6c // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/montanaflynn/stats v0.7.1 // indirect
github.com/nxadm/tail v1.4.11 // indirect
github.com/outcaste-io/ristretto v0.2.3 // indirect
github.com/pelletier/go-toml/v2 v2.2.3 // indirect
github.com/philhofer/fwd v1.1.3-0.20240612014219-fbbf4953d986 // indirect
github.com/pion/datachannel v1.5.10 // indirect
github.com/pion/dtls/v3 v3.0.4 // indirect
github.com/pion/ice/v4 v4.0.8 // indirect
github.com/pion/interceptor v0.1.37 // indirect
github.com/pion/dtls/v3 v3.0.6 // indirect
github.com/pion/ice/v4 v4.0.10 // indirect
github.com/pion/logging v0.2.3 // indirect
github.com/pion/mdns/v2 v2.0.7 // indirect
github.com/pion/randutil v0.1.0 // indirect
github.com/pion/rtcp v1.2.15 // indirect
github.com/pion/sctp v1.8.37 // indirect
github.com/pion/sdp/v3 v3.0.11 // indirect
github.com/pion/srtp/v3 v3.0.4 // indirect
github.com/pion/sctp v1.8.39 // indirect
github.com/pion/sdp/v3 v3.0.13 // indirect
github.com/pion/srtp/v3 v3.0.5 // indirect
github.com/pion/stun/v3 v3.0.0 // indirect
github.com/pion/transport/v3 v3.0.7 // indirect
github.com/pion/turn/v4 v4.0.0 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/power-devops/perfstat v0.0.0-20220216144756-c35f1ee13d7c // indirect
github.com/prometheus/procfs v0.15.1 // indirect
github.com/richardartoul/molecule v1.0.1-0.20240531184615-7ca0df43c0b3 // indirect
github.com/ryanuber/go-glob v1.0.0 // indirect
github.com/secure-systems-lab/go-securesystemslib v0.7.0 // indirect
github.com/shirou/gopsutil/v3 v3.24.4 // indirect
github.com/shoenig/go-m1cpu v0.1.6 // indirect
github.com/spaolacci/murmur3 v1.1.0 // indirect
github.com/tinylib/msgp v1.2.1 // indirect
github.com/tklauser/go-sysconf v0.3.12 // indirect
github.com/tklauser/numcpus v0.6.1 // indirect
github.com/twitchyliquid64/golang-asm v0.15.1 // indirect
github.com/ugorji/go/codec v1.2.12 // indirect
github.com/wlynxg/anet v0.0.5 // indirect
@@ -147,35 +117,23 @@ require (
github.com/xdg-go/scram v1.1.2 // indirect
github.com/xdg-go/stringprep v1.0.4 // indirect
github.com/youmark/pkcs8 v0.0.0-20240726163527-a2c0da244d78 // indirect
github.com/yusufpapurcu/wmi v1.2.4 // indirect
github.com/ziutek/mymysql v1.5.4 // indirect
go.opentelemetry.io/collector/component v0.104.0 // indirect
go.opentelemetry.io/collector/config/configtelemetry v0.104.0 // indirect
go.opentelemetry.io/collector/pdata v1.11.0 // indirect
go.opentelemetry.io/collector/pdata/pprofile v0.104.0 // indirect
go.opentelemetry.io/collector/semconv v0.104.0 // indirect
go.opentelemetry.io/otel v1.27.0 // indirect
go.opentelemetry.io/otel/metric v1.27.0 // indirect
go.opentelemetry.io/otel/trace v1.27.0 // indirect
go.uber.org/atomic v1.11.0 // indirect
go.uber.org/multierr v1.11.0 // indirect
go.uber.org/zap v1.27.0 // indirect
go.opentelemetry.io/auto/sdk v1.1.0 // indirect
go.opentelemetry.io/otel/metric v1.36.0 // indirect
go.opentelemetry.io/proto/otlp v1.6.0 // indirect
golang.org/x/arch v0.16.0 // indirect
golang.org/x/crypto v0.37.0 // indirect
golang.org/x/mod v0.20.0 // indirect
golang.org/x/net v0.38.0 // indirect
golang.org/x/oauth2 v0.18.0 // indirect
golang.org/x/sync v0.13.0 // indirect
golang.org/x/sys v0.32.0 // indirect
golang.org/x/text v0.24.0 // indirect
golang.org/x/time v0.6.0 // indirect
golang.org/x/tools v0.24.0 // indirect
golang.org/x/xerrors v0.0.0-20231012003039-104605ab7028 // indirect
google.golang.org/appengine v1.6.8 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20240520151616-dc85e6b867a5 // indirect
google.golang.org/grpc v1.64.1 // indirect
golang.org/x/crypto v0.38.0 // indirect
golang.org/x/net v0.40.0 // indirect
golang.org/x/oauth2 v0.30.0 // indirect
golang.org/x/sync v0.14.0 // indirect
golang.org/x/sys v0.33.0 // indirect
golang.org/x/text v0.25.0 // indirect
golang.org/x/tools v0.30.0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20250519155744-55703ea1f237 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250519155744-55703ea1f237 // indirect
google.golang.org/grpc v1.72.1 // indirect
google.golang.org/protobuf v1.36.6 // indirect
gopkg.in/ini.v1 v1.67.0 // indirect
gopkg.in/ini.v1 v1.42.0 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
howett.net/plist v0.0.0-20181124034731-591f970eefbb // indirect

File diff suppressed because it is too large


@@ -3,6 +3,7 @@ package main
import (
"context"
"flag"
"fmt"
"os"
"time"
@@ -11,48 +12,62 @@ import (
"github.com/kerberos-io/agent/machinery/src/log"
"github.com/kerberos-io/agent/machinery/src/models"
"github.com/kerberos-io/agent/machinery/src/onvif"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/exporters/otlp/otlptrace"
"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp"
"go.opentelemetry.io/otel/sdk/resource"
"go.opentelemetry.io/otel/sdk/trace"
semconv "go.opentelemetry.io/otel/semconv/v1.4.0"
configService "github.com/kerberos-io/agent/machinery/src/config"
"github.com/kerberos-io/agent/machinery/src/routers"
"github.com/kerberos-io/agent/machinery/src/utils"
"gopkg.in/DataDog/dd-trace-go.v1/ddtrace/tracer"
"gopkg.in/DataDog/dd-trace-go.v1/profiler"
)
var VERSION = utils.VERSION
func main() {
// You might be interested in debugging the agent.
if os.Getenv("DATADOG_AGENT_ENABLED") == "true" {
if os.Getenv("DATADOG_AGENT_K8S_ENABLED") == "true" {
tracer.Start()
defer tracer.Stop()
} else {
service := os.Getenv("DATADOG_AGENT_SERVICE")
environment := os.Getenv("DATADOG_AGENT_ENVIRONMENT")
log.Log.Info("Starting Datadog Agent with service: " + service + " and environment: " + environment)
rules := []tracer.SamplingRule{tracer.RateRule(1)}
tracer.Start(
tracer.WithSamplingRules(rules),
tracer.WithService(service),
tracer.WithEnv(environment),
)
defer tracer.Stop()
err := profiler.Start(
profiler.WithService(service),
profiler.WithEnv(environment),
profiler.WithProfileTypes(
profiler.CPUProfile,
profiler.HeapProfile,
),
)
if err != nil {
log.Log.Fatal(err.Error())
}
defer profiler.Stop()
}
func startTracing(agentKey string, otelEndpoint string) (*trace.TracerProvider, error) {
serviceName := "agent-" + agentKey
headers := map[string]string{
"content-type": "application/json",
}
exporter, err := otlptrace.New(
context.Background(),
otlptracehttp.NewClient(
otlptracehttp.WithEndpoint(otelEndpoint),
otlptracehttp.WithHeaders(headers),
otlptracehttp.WithInsecure(),
),
)
if err != nil {
return nil, fmt.Errorf("creating new exporter: %w", err)
}
tracerprovider := trace.NewTracerProvider(
trace.WithBatcher(
exporter,
trace.WithMaxExportBatchSize(trace.DefaultMaxExportBatchSize),
trace.WithBatchTimeout(trace.DefaultScheduleDelay*time.Millisecond),
trace.WithMaxExportBatchSize(trace.DefaultMaxExportBatchSize),
),
trace.WithResource(
resource.NewWithAttributes(
semconv.SchemaURL,
semconv.ServiceNameKey.String(serviceName),
attribute.String("environment", "develop"),
),
),
)
otel.SetTracerProvider(tracerprovider)
return tracerprovider, nil
}
func main() {
// Start the show ;)
// We'll parse the flags (named variables), and start the agent.
@@ -86,35 +101,39 @@ func main() {
switch action {
case "version":
log.Log.Info("main.Main(): You are currently running Kerberos Agent " + VERSION)
{
log.Log.Info("main.Main(): You are currently running Kerberos Agent " + VERSION)
}
case "discover":
// Convert duration to int
timeout, err := time.ParseDuration(timeout + "ms")
if err != nil {
log.Log.Fatal("main.Main(): could not parse timeout: " + err.Error())
return
{
// Convert duration to int
timeout, err := time.ParseDuration(timeout + "ms")
if err != nil {
log.Log.Fatal("main.Main(): could not parse timeout: " + err.Error())
return
}
onvif.Discover(timeout)
}
onvif.Discover(timeout)
case "decrypt":
log.Log.Info("main.Main(): Decrypting: " + flag.Arg(0) + " with key: " + flag.Arg(1))
symmetricKey := []byte(flag.Arg(1))
{
log.Log.Info("main.Main(): Decrypting: " + flag.Arg(0) + " with key: " + flag.Arg(1))
symmetricKey := []byte(flag.Arg(1))
if symmetricKey == nil || len(symmetricKey) == 0 {
log.Log.Fatal("main.Main(): symmetric key should not be empty")
return
}
if len(symmetricKey) != 32 {
log.Log.Fatal("main.Main(): symmetric key should be 32 bytes")
return
}
if len(symmetricKey) == 0 {
log.Log.Fatal("main.Main(): symmetric key should not be empty")
return
}
if len(symmetricKey) != 32 {
log.Log.Fatal("main.Main(): symmetric key should be 32 bytes")
return
}
utils.Decrypt(flag.Arg(0), symmetricKey)
utils.Decrypt(flag.Arg(0), symmetricKey)
}
case "run":
{
// Print Kerberos.io ASCII art
// Print Agent ASCII art
utils.PrintASCIIArt()
// Print the environment variables which include "AGENT_" as prefix.
@@ -127,12 +146,29 @@ func main() {
configuration.Name = name
configuration.Port = port
// Open this configuration either from Kerberos Agent or Kerberos Factory.
// Open this configuration either from Agent or Factory.
configService.OpenConfig(configDirectory, &configuration)
// We will override the configuration with the environment variables
configService.OverrideWithEnvironmentVariables(&configuration)
// Start OpenTelemetry tracing
if otelEndpoint := os.Getenv("OTEL_EXPORTER_OTLP_ENDPOINT"); otelEndpoint == "" {
log.Log.Info("main.Main(): No OpenTelemetry endpoint provided, skipping tracing")
} else {
log.Log.Info("main.Main(): Starting OpenTelemetry tracing with endpoint: " + otelEndpoint)
agentKey := configuration.Config.Key
traceProvider, err := startTracing(agentKey, otelEndpoint)
if err != nil {
log.Log.Error("traceprovider: " + err.Error())
} else {
defer func() {
if err := traceProvider.Shutdown(context.Background()); err != nil {
log.Log.Error("traceprovider: " + err.Error())
}
}()
}
// Printing final configuration
utils.PrintConfiguration(&configuration)
@@ -175,12 +211,14 @@ func main() {
HandleBootstrap: make(chan string, 1),
}
go components.Bootstrap(configDirectory, &configuration, &communication, &capture)
go components.Bootstrap(ctx, configDirectory, &configuration, &communication, &capture)
// Start the REST API.
routers.StartWebserver(configDirectory, &configuration, &communication, &capture)
}
default:
log.Log.Error("main.Main(): Sorry I don't understand :(")
{
log.Log.Error("main.Main(): Sorry I don't understand :(")
}
}
}


@@ -38,16 +38,16 @@ func (c *Capture) SetBackChannelClient(rtspUrl string) *Golibrtsp {
// RTSPClient is an interface that abstracts the RTSP client implementation.
type RTSPClient interface {
// Connect to the RTSP server.
Connect(ctx context.Context) error
Connect(ctx context.Context, otelContext context.Context) error
// Connect to a backchannel RTSP server.
ConnectBackChannel(ctx context.Context) error
ConnectBackChannel(ctx context.Context, otelContext context.Context) error
// Start the RTSP client, and start reading packets.
Start(ctx context.Context, streamType string, queue *packets.Queue, configuration *models.Configuration, communication *models.Communication) error
// Start the RTSP client, and start reading packets.
StartBackChannel(ctx context.Context) (err error)
StartBackChannel(ctx context.Context, otelContext context.Context) error
// Decode a packet into a image.
DecodePacket(pkt packets.Packet) (image.YCbCr, error)
@@ -59,7 +59,7 @@ type RTSPClient interface {
WritePacket(pkt packets.Packet) error
// Close the connection to the RTSP server.
Close() error
Close(ctx context.Context) error
// Get a list of streams from the RTSP server.
GetStreams() ([]packets.Stream, error)


@@ -33,8 +33,11 @@ import (
"github.com/kerberos-io/agent/machinery/src/models"
"github.com/kerberos-io/agent/machinery/src/packets"
"github.com/pion/rtp"
"go.opentelemetry.io/otel"
)
var tracer = otel.Tracer("github.com/kerberos-io/agent/machinery/src/capture")
// Implements the RTSPClient interface.
type Golibrtsp struct {
RTSPClient
@@ -81,6 +84,89 @@ type Golibrtsp struct {
AudioMPEG4Decoder *rtpmpeg4audio.Decoder
Streams []packets.Stream
// Per-stream FPS calculation (keyed by stream index)
fpsTrackers map[int8]*fpsTracker
// I-frame interval tracking fields
packetsSinceLastKeyframe int
lastKeyframePacketCount int
keyframeIntervals []int
keyframeBufferSize int
keyframeBufferIndex int
keyframeMutex sync.Mutex
}
// fpsTracker holds per-stream state for PTS-based FPS calculation.
// Each video stream (H264 / H265) gets its own tracker so PTS
// samples from different codecs never interleave.
type fpsTracker struct {
mu sync.Mutex
lastPTS time.Duration
hasPTS bool
frameTimeBuffer []time.Duration
bufferSize int
bufferIndex int
cachedFPS float64 // latest computed FPS
}
func newFPSTracker(bufferSize int) *fpsTracker {
return &fpsTracker{
frameTimeBuffer: make([]time.Duration, bufferSize),
bufferSize: bufferSize,
}
}
// update records a new PTS sample and returns the latest FPS estimate.
// It must be called once per complete decoded frame (after Decode()
// succeeds), not on every RTP packet fragment.
func (ft *fpsTracker) update(pts time.Duration) float64 {
ft.mu.Lock()
defer ft.mu.Unlock()
if !ft.hasPTS {
ft.lastPTS = pts
ft.hasPTS = true
return 0
}
interval := pts - ft.lastPTS
ft.lastPTS = pts
// Skip invalid intervals (zero, negative, or very large which
// indicate a PTS discontinuity or wrap).
if interval <= 0 || interval > 5*time.Second {
return ft.cachedFPS
}
ft.frameTimeBuffer[ft.bufferIndex] = interval
ft.bufferIndex = (ft.bufferIndex + 1) % ft.bufferSize
var totalInterval time.Duration
validSamples := 0
for _, iv := range ft.frameTimeBuffer {
if iv > 0 {
totalInterval += iv
validSamples++
}
}
if validSamples == 0 {
return ft.cachedFPS
}
avgInterval := totalInterval / time.Duration(validSamples)
if avgInterval == 0 {
return ft.cachedFPS
}
ft.cachedFPS = float64(time.Second) / float64(avgInterval)
return ft.cachedFPS
}
// fps returns the most recent FPS estimate without recording a new sample.
func (ft *fpsTracker) fps() float64 {
ft.mu.Lock()
defer ft.mu.Unlock()
return ft.cachedFPS
}
// Init function
@@ -103,7 +189,10 @@ func init() {
}
// Connect to the RTSP server.
func (g *Golibrtsp) Connect(ctx context.Context) (err error) {
func (g *Golibrtsp) Connect(ctx context.Context, ctxOtel context.Context) (err error) {
_, span := tracer.Start(ctxOtel, "Connect")
defer span.End()
transport := gortsplib.TransportTCP
g.Client = gortsplib.Client{
@@ -131,8 +220,9 @@ func (g *Golibrtsp) Connect(ctx context.Context) (err error) {
return
}
// Iniatlise the mutex.
// Initialize the mutex and FPS calculation.
g.VideoDecoderMutex = &sync.Mutex{}
g.initFPSCalculation()
// find the H264 media and format
var formaH264 *format.H264
@@ -156,7 +246,9 @@ func (g *Golibrtsp) Connect(ctx context.Context) (err error) {
// but try to fetch it later on.
if errSPS != nil {
log.Log.Debug("capture.golibrtsp.Connect(H264): " + errSPS.Error())
streamIndex := len(g.Streams)
g.Streams = append(g.Streams, packets.Stream{
Index: streamIndex,
Name: formaH264.Codec(),
IsVideo: true,
IsAudio: false,
@@ -168,7 +260,9 @@ func (g *Golibrtsp) Connect(ctx context.Context) (err error) {
IsBackChannel: false,
})
} else {
streamIndex := len(g.Streams)
g.Streams = append(g.Streams, packets.Stream{
Index: streamIndex,
Name: formaH264.Codec(),
IsVideo: true,
IsAudio: false,
@@ -216,8 +310,9 @@ func (g *Golibrtsp) Connect(ctx context.Context) (err error) {
log.Log.Info("capture.golibrtsp.Connect(H265): " + err.Error())
return
}
streamIndex := len(g.Streams)
g.Streams = append(g.Streams, packets.Stream{
Index: streamIndex,
Name: formaH265.Codec(),
IsVideo: true,
IsAudio: false,
@@ -265,8 +360,9 @@ func (g *Golibrtsp) Connect(ctx context.Context) (err error) {
log.Log.Error("capture.golibrtsp.Connect(G711): " + err.Error())
} else {
g.AudioG711Decoder = audiortpDec
streamIndex := len(g.Streams)
g.Streams = append(g.Streams, packets.Stream{
Index: streamIndex,
Name: "PCM_MULAW",
IsVideo: false,
IsAudio: true,
@@ -300,8 +396,9 @@ func (g *Golibrtsp) Connect(ctx context.Context) (err error) {
log.Log.Error("capture.golibrtsp.Connect(Opus): " + err.Error())
} else {
g.AudioOpusDecoder = audiortpDec
streamIndex := len(g.Streams)
g.Streams = append(g.Streams, packets.Stream{
Index: streamIndex,
Name: "OPUS",
IsVideo: false,
IsAudio: true,
@@ -328,11 +425,15 @@ func (g *Golibrtsp) Connect(ctx context.Context) (err error) {
// Something went wrong .. Do something
log.Log.Error("capture.golibrtsp.Connect(MPEG4): " + err.Error())
} else {
streamIndex := len(g.Streams)
g.Streams = append(g.Streams, packets.Stream{
Index: streamIndex,
Name: "AAC",
IsVideo: false,
IsAudio: true,
IsBackChannel: false,
SampleRate: audioFormaMPEG4.Config.SampleRate,
Channels: audioFormaMPEG4.Config.ChannelCount,
})
// Set the index for the audio
@@ -352,7 +453,11 @@ func (g *Golibrtsp) Connect(ctx context.Context) (err error) {
return
}
func (g *Golibrtsp) ConnectBackChannel(ctx context.Context) (err error) {
func (g *Golibrtsp) ConnectBackChannel(ctx context.Context, ctxRunAgent context.Context) (err error) {
_, span := tracer.Start(ctxRunAgent, "ConnectBackChannel")
defer span.End()
// Transport TCP
transport := gortsplib.TransportTCP
g.Client = gortsplib.Client{
@@ -397,7 +502,9 @@ func (g *Golibrtsp) ConnectBackChannel(ctx context.Context) (err error) {
g.HasBackChannel = false
} else {
g.HasBackChannel = true
streamIndex := len(g.Streams)
g.Streams = append(g.Streams, packets.Stream{
Index: streamIndex,
Name: "PCM_MULAW",
IsVideo: false,
IsAudio: true,
@@ -439,6 +546,7 @@ func (g *Golibrtsp) Start(ctx context.Context, streamType string, queue *packets
Time: pts2,
TimeLegacy: pts,
CompositionTime: pts2,
CurrentTime: time.Now().UnixMilli(),
Idx: g.AudioG711Index,
IsVideo: false,
IsAudio: true,
@@ -480,6 +588,7 @@ func (g *Golibrtsp) Start(ctx context.Context, streamType string, queue *packets
Time: pts2,
TimeLegacy: pts,
CompositionTime: pts2,
CurrentTime: time.Now().UnixMilli(),
Idx: g.AudioG711Index,
IsVideo: false,
IsAudio: true,
@@ -507,18 +616,17 @@ func (g *Golibrtsp) Start(ctx context.Context, streamType string, queue *packets
if len(rtppkt.Payload) > 0 {
// decode timestamp
pts, ok := g.Client.PacketPTS(g.VideoH264Media, rtppkt)
pts2, ok := g.Client.PacketPTS2(g.VideoH264Media, rtppkt)
if !ok {
log.Log.Debug("capture.golibrtsp.Start(): " + "unable to get PTS")
// decode timestamps — validate each call separately
pts, okPTS := g.Client.PacketPTS(g.VideoH264Media, rtppkt)
pts2, okPTS2 := g.Client.PacketPTS2(g.VideoH264Media, rtppkt)
if !okPTS2 {
log.Log.Debug("capture.golibrtsp.Start(): unable to get PTS2 from PacketPTS2")
return
}
// Extract access units from RTP packets
// We need to do this, because the decoder expects a full
// access unit. Once we have a full access unit, we can
// decode it, and know if it's a keyframe or not.
// Extract access units from RTP packets.
// We need a complete access unit to determine whether
// this is a keyframe.
au, errDecode := g.VideoH264Decoder.Decode(rtppkt)
if errDecode != nil {
if errDecode != rtph264.ErrNonStartingPacketAndNoPrevious && errDecode != rtph264.ErrMorePacketsNeeded {
@@ -527,6 +635,18 @@ func (g *Golibrtsp) Start(ctx context.Context, streamType string, queue *packets
return
}
// Frame is complete — update per-stream FPS from PTS.
if okPTS {
ft := g.fpsTrackers[g.VideoH264Index]
if ft == nil {
ft = newFPSTracker(30)
g.fpsTrackers[g.VideoH264Index] = ft
}
if ptsFPS := ft.update(pts); ptsFPS > 0 && ptsFPS <= 120 {
g.Streams[g.VideoH264Index].FPS = ptsFPS
}
}
// We'll need to read out a few things.
// prepend an AUD. This is required by some players
filteredAU = [][]byte{
@@ -537,8 +657,10 @@ func (g *Golibrtsp) Start(ctx context.Context, streamType string, queue *packets
nonIDRPresent := false
idrPresent := false
var naluTypes []string
for _, nalu := range au {
typ := h264.NALUType(nalu[0] & 0x1F)
naluTypes = append(naluTypes, fmt.Sprintf("%s(%d,sz=%d)", typ.String(), int(typ), len(nalu)))
switch typ {
case h264.NALUTypeAccessUnitDelimiter:
continue
@@ -551,6 +673,9 @@ func (g *Golibrtsp) Start(ctx context.Context, streamType string, queue *packets
var sps h264.SPS
errSPS := sps.Unmarshal(nalu)
if errSPS == nil {
// Debug SPS information
g.debugSPSInfo(&sps, streamType)
// Get width
g.Streams[g.VideoH264Index].Width = sps.Width()
if streamType == "main" {
@@ -565,21 +690,51 @@ func (g *Golibrtsp) Start(ctx context.Context, streamType string, queue *packets
} else if streamType == "sub" {
configuration.Config.Capture.IPCamera.SubHeight = sps.Height()
}
// Get FPS
g.Streams[g.VideoH264Index].FPS = sps.FPS()
// Get FPS using enhanced method
fps := g.getEnhancedFPS(&sps, g.VideoH264Index)
g.Streams[g.VideoH264Index].FPS = fps
log.Log.Debug(fmt.Sprintf("capture.golibrtsp.Start(%s): Final FPS=%.2f", streamType, fps))
g.VideoH264Forma.SPS = nalu
if streamType == "main" && len(nalu) > 0 {
// Fallback: store SPS from in-band NALUs when SDP was missing it.
configuration.Config.Capture.IPCamera.SPSNALUs = [][]byte{nalu}
}
}
case h264.NALUTypePPS:
// Read out pps
g.VideoH264Forma.PPS = nalu
if streamType == "main" && len(nalu) > 0 {
// Fallback: store PPS from in-band NALUs when SDP was missing it.
configuration.Config.Capture.IPCamera.PPSNALUs = [][]byte{nalu}
}
}
filteredAU = append(filteredAU, nalu)
}
if idrPresent && streamType == "main" {
// Ensure config has parameter sets before recordings start.
if len(configuration.Config.Capture.IPCamera.SPSNALUs) == 0 && len(g.VideoH264Forma.SPS) > 0 {
configuration.Config.Capture.IPCamera.SPSNALUs = [][]byte{g.VideoH264Forma.SPS}
log.Log.Warning("capture.golibrtsp.Start(main): fallback SPS set from keyframe")
}
if len(configuration.Config.Capture.IPCamera.PPSNALUs) == 0 && len(g.VideoH264Forma.PPS) > 0 {
configuration.Config.Capture.IPCamera.PPSNALUs = [][]byte{g.VideoH264Forma.PPS}
log.Log.Warning("capture.golibrtsp.Start(main): fallback PPS set from keyframe")
}
if len(configuration.Config.Capture.IPCamera.SPSNALUs) == 0 || len(configuration.Config.Capture.IPCamera.PPSNALUs) == 0 {
log.Log.Warning("capture.golibrtsp.Start(main): SPS/PPS still missing after IDR keyframe")
}
}
if len(filteredAU) <= 1 || (!nonIDRPresent && !idrPresent) {
return
}
if idrPresent {
log.Log.Debug(fmt.Sprintf("capture.golibrtsp.Start(%s): IDR frame NALUs: %v", streamType, naluTypes))
}
// Convert to packet.
enc, err := h264.AnnexBMarshal(filteredAU)
if err != nil {
@@ -587,19 +742,13 @@ func (g *Golibrtsp) Start(ctx context.Context, streamType string, queue *packets
return
}
// Extract DTS from RTP packets
//dts2, err := dtsExtractor.Extract(filteredAU, pts2)
//if err != nil {
// log.Log.Error("capture.golibrtsp.Start(): " + err.Error())
// return
//}
pkt := packets.Packet{
IsKeyFrame: idrPresent,
Packet: rtppkt,
Data: enc,
Time: pts2,
TimeLegacy: pts,
CurrentTime: time.Now().UnixMilli(),
CompositionTime: pts2,
Idx: g.VideoH264Index,
IsVideo: true,
@@ -607,6 +756,25 @@ func (g *Golibrtsp) Start(ctx context.Context, streamType string, queue *packets
Codec: "H264",
}
// Track keyframe intervals
keyframeInterval := g.trackKeyframeInterval(idrPresent)
if idrPresent && keyframeInterval > 0 {
avgInterval := g.getAverageKeyframeInterval()
fps := g.Streams[g.VideoH264Index].FPS
if fps <= 0 {
fps = 25.0 // Default fallback FPS
}
gopDuration := float64(keyframeInterval) / fps
gopSize := int(avgInterval) // Store GOP size in a separate variable
g.Streams[g.VideoH264Index].GopSize = gopSize
log.Log.Debug(fmt.Sprintf("capture.golibrtsp.Start(%s): Keyframe interval=%d packets, Avg=%.1f, GOP=%.1fs, GOPSize=%d",
streamType, keyframeInterval, avgInterval, gopDuration, gopSize))
preRecording := configuration.Config.Capture.PreRecording
if preRecording > 0 && int(gopDuration) > 0 {
queue.SetMaxGopCount(int(preRecording)/int(gopDuration) + 1)
}
}
pkt.Data = pkt.Data[4:]
if pkt.IsKeyFrame {
annexbNALUStartCode := func() []byte { return []byte{0x00, 0x00, 0x00, 0x01} }
@@ -661,18 +829,17 @@ func (g *Golibrtsp) Start(ctx context.Context, streamType string, queue *packets
if len(rtppkt.Payload) > 0 {
// decode timestamp
// decode timestamps — validate each call separately
pts, okPTS := g.Client.PacketPTS(g.VideoH265Media, rtppkt)
pts2, okPTS2 := g.Client.PacketPTS2(g.VideoH265Media, rtppkt)
if !okPTS2 {
log.Log.Debug("capture.golibrtsp.Start(): unable to get PTS")
return
}
// Extract access units from RTP packets.
// We need a complete access unit to determine whether
// this is a keyframe.
au, errDecode := g.VideoH265Decoder.Decode(rtppkt)
if errDecode != nil {
if errDecode != rtph265.ErrNonStartingPacketAndNoPrevious && errDecode != rtph265.ErrMorePacketsNeeded {
@@ -681,6 +848,18 @@ func (g *Golibrtsp) Start(ctx context.Context, streamType string, queue *packets
return
}
// Frame is complete — update per-stream FPS from PTS.
if okPTS {
ft := g.fpsTrackers[g.VideoH265Index]
if ft == nil {
ft = newFPSTracker(30)
g.fpsTrackers[g.VideoH265Index] = ft
}
if ptsFPS := ft.update(pts); ptsFPS > 0 && ptsFPS <= 120 {
g.Streams[g.VideoH265Index].FPS = ptsFPS
}
}
filteredAU = [][]byte{
{byte(h265.NALUType_AUD_NUT) << 1, 1, 0x50},
}
@@ -729,6 +908,7 @@ func (g *Golibrtsp) Start(ctx context.Context, streamType string, queue *packets
Data: enc,
Time: pts2,
TimeLegacy: pts,
CurrentTime: time.Now().UnixMilli(),
CompositionTime: pts2,
Idx: g.VideoH265Index,
IsVideo: true,
@@ -736,6 +916,25 @@ func (g *Golibrtsp) Start(ctx context.Context, streamType string, queue *packets
Codec: "H265",
}
// Track keyframe intervals for H265
keyframeInterval := g.trackKeyframeInterval(isRandomAccess)
if isRandomAccess && keyframeInterval > 0 {
avgInterval := g.getAverageKeyframeInterval()
fps := g.Streams[g.VideoH265Index].FPS
if fps <= 0 {
fps = 25.0 // Default fallback FPS
}
gopDuration := float64(keyframeInterval) / fps
gopSize := int(avgInterval) // Store GOP size in a separate variable
g.Streams[g.VideoH265Index].GopSize = gopSize
log.Log.Debug(fmt.Sprintf("capture.golibrtsp.Start(%s): Keyframe interval=%d packets, Avg=%.1f, GOP=%.1fs, GOPSize=%d",
streamType, keyframeInterval, avgInterval, gopDuration, gopSize))
preRecording := configuration.Config.Capture.PreRecording
if preRecording > 0 && int(gopDuration) > 0 {
queue.SetMaxGopCount(int(preRecording)/int(gopDuration) + 1)
}
}
queue.WritePacket(pkt)
// This will check if we need to stop the thread,
@@ -778,7 +977,7 @@ func (g *Golibrtsp) Start(ctx context.Context, streamType string, queue *packets
}
// Start the RTSP client, and start reading packets.
func (g *Golibrtsp) StartBackChannel(ctx context.Context, ctxRunAgent context.Context) (err error) {
log.Log.Info("capture.golibrtsp.StartBackChannel(): started")
// Wait for a second, so we can be sure the stream is playing.
time.Sleep(1 * time.Second)
@@ -860,8 +1059,8 @@ func (g *Golibrtsp) DecodePacketRaw(pkt packets.Packet) (image.Gray, error) {
}
// Get a list of streams from the RTSP server.
func (g *Golibrtsp) GetStreams() ([]packets.Stream, error) {
return g.Streams, nil
}
// Get a list of video streams from the RTSP server.
@@ -887,7 +1086,11 @@ func (g *Golibrtsp) GetAudioStreams() ([]packets.Stream, error) {
}
// Close the connection to the RTSP server.
func (g *Golibrtsp) Close(ctxOtel context.Context) error {
_, span := tracer.Start(ctxOtel, "Close")
defer span.End()
// Close the demuxer.
g.Client.Close()
@@ -1101,3 +1304,149 @@ func WriteMPEG4Audio(forma *format.MPEG4Audio, aus [][]byte) ([]byte, error) {
}
return enc, nil
}
// Initialize FPS calculation buffers
func (g *Golibrtsp) initFPSCalculation() {
// Ensure the per-stream FPS trackers map exists. Individual trackers
// can be created lazily when a given stream index is first used.
if g.fpsTrackers == nil {
g.fpsTrackers = make(map[int8]*fpsTracker)
}
// Initialize I-frame interval tracking
g.keyframeBufferSize = 10 // Store last 10 keyframe intervals
g.keyframeIntervals = make([]int, g.keyframeBufferSize)
g.keyframeBufferIndex = 0
g.packetsSinceLastKeyframe = 0
g.lastKeyframePacketCount = 0
}
// Get enhanced FPS information from SPS with fallback to PTS-based calculation.
// The PTS-based FPS is computed per completed frame via fpsTracker.update(),
// so by the time this is called we already have a good estimate.
func (g *Golibrtsp) getEnhancedFPS(sps *h264.SPS, streamIndex int8) float64 {
// First try to get FPS from SPS VUI parameters
spsFPS := sps.FPS()
// Check if SPS FPS is reasonable (between 1 and 120 fps)
if spsFPS > 0 && spsFPS <= 120 {
log.Log.Debug(fmt.Sprintf("capture.golibrtsp.getEnhancedFPS(): SPS FPS: %.2f", spsFPS))
return spsFPS
}
// Fallback to PTS-based FPS (already calculated per-frame)
if ft := g.fpsTrackers[streamIndex]; ft != nil {
ptsFPS := ft.fps()
if ptsFPS > 0 && ptsFPS <= 120 {
log.Log.Debug(fmt.Sprintf("capture.golibrtsp.getEnhancedFPS(): PTS FPS: %.2f", ptsFPS))
return ptsFPS
}
}
// Return SPS FPS even if it seems unreasonable, or default
if spsFPS > 0 {
return spsFPS
}
return 25.0 // Default fallback FPS
}
// Track I-frame intervals by counting packets between keyframes
func (g *Golibrtsp) trackKeyframeInterval(isKeyframe bool) int {
g.keyframeMutex.Lock()
defer g.keyframeMutex.Unlock()
g.packetsSinceLastKeyframe++
if isKeyframe {
// Store the interval since the last keyframe
if g.lastKeyframePacketCount > 0 {
interval := g.packetsSinceLastKeyframe
g.keyframeIntervals[g.keyframeBufferIndex] = interval
g.keyframeBufferIndex = (g.keyframeBufferIndex + 1) % g.keyframeBufferSize
}
// Reset counter for next interval
g.lastKeyframePacketCount = g.packetsSinceLastKeyframe
g.packetsSinceLastKeyframe = 0
return g.lastKeyframePacketCount
}
return 0
}
// Get average keyframe interval (GOP size)
func (g *Golibrtsp) getAverageKeyframeInterval() float64 {
g.keyframeMutex.Lock()
defer g.keyframeMutex.Unlock()
var totalInterval int
validSamples := 0
for _, interval := range g.keyframeIntervals {
if interval > 0 {
totalInterval += interval
validSamples++
}
}
if validSamples == 0 {
return 0
}
return float64(totalInterval) / float64(validSamples)
}
// Calculate GOP size in seconds based on FPS and keyframe interval
func (g *Golibrtsp) getGOPDuration(fps float64) float64 {
avgInterval := g.getAverageKeyframeInterval()
if avgInterval > 0 && fps > 0 {
return avgInterval / fps
}
return 0
}
// Get detailed SPS timing information
func (g *Golibrtsp) getSPSTimingInfo(sps *h264.SPS) (hasVUI bool, timeScale uint32, numUnitsInTick uint32, fps float64) {
// Try to get FPS from SPS
fps = sps.FPS()
// Note: The gortsplib SPS struct may not expose VUI parameters directly
// but we can still work with the calculated FPS
if fps > 0 {
hasVUI = true
// These are estimated values based on common patterns
if fps == 25.0 {
timeScale = 50
numUnitsInTick = 1
} else if fps == 30.0 {
timeScale = 60
numUnitsInTick = 1
} else if fps == 24.0 {
timeScale = 48
numUnitsInTick = 1
} else {
// Generic calculation
timeScale = uint32(fps * 2)
numUnitsInTick = 1
}
}
return hasVUI, timeScale, numUnitsInTick, fps
}
// Debug SPS information
func (g *Golibrtsp) debugSPSInfo(sps *h264.SPS, streamType string) {
hasVUI, timeScale, numUnitsInTick, fps := g.getSPSTimingInfo(sps)
log.Log.Debug(fmt.Sprintf("capture.golibrtsp.debugSPSInfo(%s): Width=%d, Height=%d",
streamType, sps.Width(), sps.Height()))
log.Log.Debug(fmt.Sprintf("capture.golibrtsp.debugSPSInfo(%s): HasVUI=%t, FPS=%.2f",
streamType, hasVUI, fps))
if hasVUI {
log.Log.Debug(fmt.Sprintf("capture.golibrtsp.debugSPSInfo(%s): TimeScale=%d, NumUnitsInTick=%d",
streamType, timeScale, numUnitsInTick))
}
}


@@ -16,7 +16,8 @@ import (
"github.com/kerberos-io/agent/machinery/src/models"
"github.com/kerberos-io/agent/machinery/src/packets"
"github.com/kerberos-io/agent/machinery/src/utils"
"github.com/kerberos-io/agent/machinery/src/video"
"go.opentelemetry.io/otel/trace"
)
func CleanupRecordingDirectory(configDirectory string, configuration *models.Configuration) {
@@ -63,26 +64,46 @@ func HandleRecordStream(queue *packets.Queue, configDirectory string, configurat
} else {
log.Log.Debug("capture.main.HandleRecordStream(): started")
preRecording := config.Capture.PreRecording * 1000
postRecording := config.Capture.PostRecording * 1000 // number of seconds to record.
maxRecordingPeriod := config.Capture.MaxLengthRecording * 1000 // maximum number of seconds to record.
// Synchronise the last synced time
now := time.Now().Unix()
startRecording := now
timestamp := now
// We will calculate the maxRecordingPeriod based on the preRecording and postRecording values.
if maxRecordingPeriod == 0 {
// If maxRecordingPeriod is not set, we will use the preRecording and postRecording values
maxRecordingPeriod = preRecording + postRecording
}
if maxRecordingPeriod < preRecording+postRecording {
log.Log.Error("capture.main.HandleRecordStream(): maxRecordingPeriod is less than preRecording + postRecording, this is not allowed. Setting maxRecordingPeriod to preRecording + postRecording.")
maxRecordingPeriod = preRecording + postRecording
}
if config.FriendlyName != "" {
config.Name = config.FriendlyName
}
// For continuous and motion based recording we will use a single file.
var file *os.File
// Get the audio and video codec from the camera.
// We only expect one audio and one video codec.
// If there are multiple audio or video streams, we will use the first one.
audioCodec := ""
videoCodec := ""
audioStreams, _ := rtspClient.GetAudioStreams()
videoStreams, _ := rtspClient.GetVideoStreams()
if len(audioStreams) > 0 {
audioCodec = audioStreams[0].Name
config.Capture.IPCamera.SampleRate = audioStreams[0].SampleRate
config.Capture.IPCamera.Channels = audioStreams[0].Channels
}
if len(videoStreams) > 0 {
videoCodec = videoStreams[0].Name
}
// Check if continuous recording.
if config.Capture.Continuous == "true" {
var mp4Video *video.MP4
var videoTrack uint32
var audioTrack uint32
var name string
@@ -90,15 +111,15 @@ func HandleRecordStream(queue *packets.Queue, configDirectory string, configurat
// Do not do anything!
log.Log.Info("capture.main.HandleRecordStream(continuous): start recording")
now = time.Now().Unix()
timestamp = now
start := false
// For continuous recording we record the full maximum length.
postRecording = maxRecordingPeriod
// Recording file name
fullName := ""
var startRecording int64 = 0 // start recording timestamp in milliseconds
// Read as many packets as we need.
var cursorError error
var pkt packets.Packet
@@ -114,20 +135,21 @@ func HandleRecordStream(queue *packets.Queue, configDirectory string, configurat
nextPkt, cursorError = recordingCursor.ReadPacket()
now := time.Now().UnixMilli()
if start && // If already recording and current frame is a keyframe and we should stop recording
nextPkt.IsKeyFrame && (startRecording+postRecording-now <= 0 || now-startRecording > maxRecordingPeriod-500) {
// Write the last packet
pts := convertPTS(pkt.TimeLegacy)
if pkt.IsVideo {
if err := mp4Video.AddSampleToTrack(videoTrack, pkt.IsKeyFrame, pkt.Data, pts); err != nil {
log.Log.Error("capture.main.HandleRecordStream(continuous): " + err.Error())
}
} else if pkt.IsAudio {
// Write the last packet
if pkt.Codec == "AAC" {
if err := mp4Video.AddSampleToTrack(audioTrack, pkt.IsKeyFrame, pkt.Data, pts); err != nil {
log.Log.Error("capture.main.HandleRecordStream(continuous): " + err.Error())
}
} else if pkt.Codec == "PCM_MULAW" {
@@ -136,21 +158,57 @@ func HandleRecordStream(queue *packets.Queue, configDirectory string, configurat
}
}
// Close mp4
if len(mp4Video.SPSNALUs) == 0 && len(configuration.Config.Capture.IPCamera.SPSNALUs) > 0 {
mp4Video.SPSNALUs = configuration.Config.Capture.IPCamera.SPSNALUs
}
if len(mp4Video.PPSNALUs) == 0 && len(configuration.Config.Capture.IPCamera.PPSNALUs) > 0 {
mp4Video.PPSNALUs = configuration.Config.Capture.IPCamera.PPSNALUs
}
if len(mp4Video.VPSNALUs) == 0 && len(configuration.Config.Capture.IPCamera.VPSNALUs) > 0 {
mp4Video.VPSNALUs = configuration.Config.Capture.IPCamera.VPSNALUs
}
if (videoCodec == "H264" && (len(mp4Video.SPSNALUs) == 0 || len(mp4Video.PPSNALUs) == 0)) ||
(videoCodec == "H265" && (len(mp4Video.VPSNALUs) == 0 || len(mp4Video.SPSNALUs) == 0 || len(mp4Video.PPSNALUs) == 0)) {
log.Log.Warning("capture.main.HandleRecordStream(continuous): closing MP4 without full parameter sets, moov may be incomplete")
}
mp4Video.Close(&config)
log.Log.Info("capture.main.HandleRecordStream(continuous): recording finished, file saved: " + name)
// Cleanup muxer
start = false
file.Close()
file = nil
// Check if need to convert to fragmented using bento
if config.Capture.Fragmented == "true" && config.Capture.FragmentedDuration > 0 {
utils.CreateFragmentedMP4(fullName, config.Capture.FragmentedDuration)
// We will update the name of the recording with the duration in milliseconds.
if mp4Video.VideoTotalDuration > 0 {
duration := mp4Video.VideoTotalDuration
// Update the name with the duration in milliseconds.
startRecordingSeconds := startRecording / 1000 // convert to seconds
startRecordingMilliseconds := startRecording % 1000 // convert to milliseconds
s := strconv.FormatInt(startRecordingSeconds, 10) + "_" +
strconv.Itoa(len(strconv.FormatInt(startRecordingMilliseconds, 10))) + "-" +
strconv.FormatInt(startRecordingMilliseconds, 10) + "_" +
config.Name + "_" +
"0-0-0-0" + "_" + // region coordinates, not used for continuous recording
"-1" + "_" + // token
strconv.FormatInt(int64(duration), 10) // + "_" + // duration of recording
//utils.VERSION // version of the agent
oldName := name
name = s + ".mp4"
fullName = configDirectory + "/data/recordings/" + name
log.Log.Info("capture.main.HandleRecordStream(continuous): renamed file from: " + oldName + " to: " + name)
// Rename the file to the new name.
err := os.Rename(
configDirectory+"/data/recordings/"+oldName,
configDirectory+"/data/recordings/"+s+".mp4")
if err != nil {
log.Log.Error("capture.main.HandleRecordStream(continuous): error renaming file: " + err.Error())
}
} else {
log.Log.Info("capture.main.HandleRecordStream(continuous): no video data recorded, not renaming file.")
}
// Check if we need to encrypt the recording.
@@ -197,7 +255,6 @@ func HandleRecordStream(queue *packets.Queue, configDirectory string, configurat
}
start = true
timestamp = now
// timestamp_microseconds_instanceName_regionCoordinates_numberOfChanges_token
// 1564859471_6-474162_oprit_577-283-727-375_1153_27.mp4
@@ -208,13 +265,17 @@ func HandleRecordStream(queue *packets.Queue, configDirectory string, configurat
// - Number of changes
// - Token
startRecording = pkt.CurrentTime
startRecordingSeconds := startRecording / 1000 // convert to seconds
startRecordingMilliseconds := startRecording % 1000 // convert to milliseconds
s := strconv.FormatInt(startRecordingSeconds, 10) + "_" + // start timestamp in seconds
strconv.Itoa(len(strconv.FormatInt(startRecordingMilliseconds, 10))) + "-" + // length of milliseconds
strconv.FormatInt(startRecordingMilliseconds, 10) + "_" + // milliseconds
config.Name + "_" + // device name
"0-0-0-0" + "_" + // region coordinates, we will not use this for continuous recording
"0" + "_" + // token
"0" // + "_" + // duration of recording in milliseconds
//utils.VERSION // version of the agent
name = s + ".mp4"
fullName = configDirectory + "/data/recordings/" + name
@@ -222,53 +283,64 @@ func HandleRecordStream(queue *packets.Queue, configDirectory string, configurat
// Running...
log.Log.Info("capture.main.HandleRecordStream(continuous): recording started")
file, err = os.Create(fullName)
if err == nil {
//cws = newCacheWriterSeeker(4096)
myMuxer, _ = mp4.CreateMp4Muxer(file)
// We choose between H264 and H265
width := configuration.Config.Capture.IPCamera.Width
height := configuration.Config.Capture.IPCamera.Height
widthOption := mp4.WithVideoWidth(uint32(width))
heightOption := mp4.WithVideoHeight(uint32(height))
if pkt.Codec == "H264" {
videoTrack = myMuxer.AddVideoTrack(mp4.MP4_CODEC_H264, widthOption, heightOption)
} else if pkt.Codec == "H265" {
videoTrack = myMuxer.AddVideoTrack(mp4.MP4_CODEC_H265, widthOption, heightOption)
}
// For an MP4 container, AAC is the only audio codec supported.
audioTrack = myMuxer.AddAudioTrack(mp4.MP4_CODEC_AAC)
} else {
log.Log.Error("capture.main.HandleRecordStream(continuous): " + err.Error())
// Get width and height from the camera.
width := configuration.Config.Capture.IPCamera.Width
height := configuration.Config.Capture.IPCamera.Height
// Get SPS and PPS NALUs from the camera.
spsNALUS := configuration.Config.Capture.IPCamera.SPSNALUs
ppsNALUS := configuration.Config.Capture.IPCamera.PPSNALUs
vpsNALUS := configuration.Config.Capture.IPCamera.VPSNALUs
if len(spsNALUS) == 0 || len(ppsNALUS) == 0 {
log.Log.Warning("capture.main.HandleRecordStream(continuous): missing SPS/PPS at recording start")
}
// Create a video file, and set the dimensions.
mp4Video = video.NewMP4(fullName, spsNALUS, ppsNALUS, vpsNALUS, configuration.Config.Capture.MaxLengthRecording)
mp4Video.SetWidth(width)
mp4Video.SetHeight(height)
if videoCodec == "H264" {
videoTrack = mp4Video.AddVideoTrack("H264")
} else if videoCodec == "H265" {
videoTrack = mp4Video.AddVideoTrack("H265")
}
if audioCodec == "AAC" {
audioTrack = mp4Video.AddAudioTrack("AAC")
} else if audioCodec == "PCM_MULAW" {
log.Log.Debug("capture.main.HandleRecordStream(continuous): no AAC audio codec detected, skipping audio track.")
}
pts := convertPTS(pkt.TimeLegacy)
if pkt.IsVideo {
if err := mp4Video.AddSampleToTrack(videoTrack, pkt.IsKeyFrame, pkt.Data, pts); err != nil {
log.Log.Error("capture.main.HandleRecordStream(continuous): " + err.Error())
}
} else if pkt.IsAudio {
if pkt.Codec == "AAC" {
if err := mp4Video.AddSampleToTrack(audioTrack, pkt.IsKeyFrame, pkt.Data, pts); err != nil {
log.Log.Error("capture.main.HandleRecordStream(continuous): " + err.Error())
}
} else if pkt.Codec == "PCM_MULAW" {
// TODO: transcode to AAC, some work to do..
// We might need to use ffmpeg to transcode the audio to AAC.
// For now we will skip the audio track.
log.Log.Debug("capture.main.HandleRecordStream(continuous): no AAC audio codec detected, skipping audio track.")
}
}
recordingStatus = "started"
} else if start {
pts := convertPTS(pkt.TimeLegacy)
if pkt.IsVideo {
// New method using new mp4 library
if err := mp4Video.AddSampleToTrack(videoTrack, pkt.IsKeyFrame, pkt.Data, pts); err != nil {
log.Log.Error("capture.main.HandleRecordStream(continuous): " + err.Error())
}
} else if pkt.IsAudio {
if pkt.Codec == "AAC" {
if err := mp4Video.AddSampleToTrack(audioTrack, pkt.IsKeyFrame, pkt.Data, pts); err != nil {
log.Log.Error("capture.main.HandleRecordStream(continuous): " + err.Error())
}
} else if pkt.Codec == "PCM_MULAW" {
@@ -277,7 +349,6 @@ func HandleRecordStream(queue *packets.Queue, configDirectory string, configurat
}
}
}
pkt = nextPkt
}
@@ -285,21 +356,43 @@ func HandleRecordStream(queue *packets.Queue, configDirectory string, configurat
// If this happens we need to check to properly close the recording.
if cursorError != nil {
if recordingStatus == "started" {
log.Log.Info("capture.main.HandleRecordStream(continuous): recording finished, file saved: " + name)
// Cleanup muxer
start = false
file.Close()
file = nil
// Check if need to convert to fragmented using bento
if config.Capture.Fragmented == "true" && config.Capture.FragmentedDuration > 0 {
utils.CreateFragmentedMP4(fullName, config.Capture.FragmentedDuration)
// We will update the name of the recording with the duration in milliseconds.
if mp4Video.VideoTotalDuration > 0 {
duration := mp4Video.VideoTotalDuration
// Update the name with the duration in milliseconds.
startRecordingSeconds := startRecording / 1000 // convert to seconds
startRecordingMilliseconds := startRecording % 1000 // convert to milliseconds
s := strconv.FormatInt(startRecordingSeconds, 10) + "_" +
strconv.Itoa(len(strconv.FormatInt(startRecordingMilliseconds, 10))) + "-" +
strconv.FormatInt(startRecordingMilliseconds, 10) + "_" +
config.Name + "_" +
"0-0-0-0" + "_" + // region coordinates, not used for continuous recording
"-1" + "_" + // token
strconv.FormatInt(int64(duration), 10) // + "_" + // duration of recording
//utils.VERSION // version of the agent
oldName := name
name = s + ".mp4"
fullName = configDirectory + "/data/recordings/" + name
log.Log.Info("capture.main.HandleRecordStream(continuous): renamed file from: " + oldName + " to: " + name)
// Rename the file to the new name.
err := os.Rename(
configDirectory+"/data/recordings/"+oldName,
configDirectory+"/data/recordings/"+s+".mp4")
if err != nil {
log.Log.Error("capture.main.HandleRecordStream(continuous): error renaming file: " + err.Error())
}
} else {
log.Log.Info("capture.main.HandleRecordStream(continuous): no video data recorded, not renaming file.")
}
// Check if we need to encrypt the recording.
@@ -337,33 +430,44 @@ func HandleRecordStream(queue *packets.Queue, configDirectory string, configurat
log.Log.Info("capture.main.HandleRecordStream(motiondetection): Start motion based recording ")
var lastDuration int64
var lastRecordingTime int64 = 0 // last recording timestamp in milliseconds
var displayTime int64 = 0 // display time in milliseconds
var videoTrack uint32
var audioTrack uint32
for motion := range communication.HandleMotion {
timestamp = time.Now().Unix()
startRecording = time.Now().Unix() // we mark the current time when the record started.
numberOfChanges := motion.NumberOfChanges
// Read as many packets as we need.
var cursorError error
var pkt packets.Packet
var nextPkt packets.Packet
recordingCursor := queue.Oldest() // Start from the oldest packet in the queue.
// If we have pre-recording we will subtract that number of seconds,
// taking into account FPS = GOP size (keyframe interval).
if config.Capture.PreRecording > 0 {
now := time.Now().UnixMilli()
motionTimestamp := now
// Recordings may follow shortly after each other, so we reconcile the
// current time with the last recording time.
start := false
timeBetweenNowAndLastRecording := startRecording - lastRecordingTime
if timeBetweenNowAndLastRecording > int64(config.Capture.PreRecording) {
startRecording = startRecording - int64(config.Capture.PreRecording) + 1
} else {
startRecording = startRecording - timeBetweenNowAndLastRecording
}
if cursorError == nil {
pkt, cursorError = recordingCursor.ReadPacket()
}
displayTime = pkt.CurrentTime
startRecording := pkt.CurrentTime
// The queue may still contain packets older than the point where the previous
// recording was closed. In that case we use the last recording time as the
// start time, otherwise the new recording would duplicate frames.
if startRecording < lastRecordingTime {
displayTime = lastRecordingTime
startRecording = lastRecordingTime
}
// If startRecording is 0, skip this motion event; the agent is most likely restarting.
if startRecording == 0 {
log.Log.Info("capture.main.HandleRecordStream(motiondetection): startRecording is 0, skipping; the agent might be restarting.")
continue
}
// timestamp_microseconds_instanceName_regionCoordinates_numberOfChanges_token
@@ -375,47 +479,59 @@ func HandleRecordStream(queue *packets.Queue, configDirectory string, configurat
// - Number of changes
// - Token
displayTimeSeconds := displayTime / 1000 // convert to seconds
displayTimeMilliseconds := displayTime % 1000 // convert to milliseconds
motionRectangleString := "0-0-0-0"
if motion.Rectangle.X != 0 || motion.Rectangle.Y != 0 ||
motion.Rectangle.Width != 0 || motion.Rectangle.Height != 0 {
motionRectangleString = strconv.Itoa(motion.Rectangle.X) + "-" + strconv.Itoa(motion.Rectangle.Y) + "-" +
strconv.Itoa(motion.Rectangle.Width) + "-" + strconv.Itoa(motion.Rectangle.Height)
}
// Get the number of changes from the motion detection.
numberOfChanges := motion.NumberOfChanges
s := strconv.FormatInt(displayTimeSeconds, 10) + "_" + // start timestamp in seconds
strconv.Itoa(len(strconv.FormatInt(displayTimeMilliseconds, 10))) + "-" + // length of milliseconds
strconv.FormatInt(displayTimeMilliseconds, 10) + "_" + // milliseconds
config.Name + "_" + // device name
motionRectangleString + "_" + // region coordinates of the detected motion
strconv.Itoa(numberOfChanges) + "_" + // number of changes
"0" // + "_" + // duration of recording in milliseconds
//utils.VERSION // version of the agent
name := s + ".mp4"
fullName := configDirectory + "/data/recordings/" + name
// Running...
log.Log.Info("capture.main.HandleRecordStream(motiondetection): recording started (" + name + ")" + " at " + strconv.FormatInt(displayTimeSeconds, 10) + " unix")
// Get width and height from the camera.
width := configuration.Config.Capture.IPCamera.Width
height := configuration.Config.Capture.IPCamera.Height
// Get SPS and PPS NALUs from the camera.
spsNALUS := configuration.Config.Capture.IPCamera.SPSNALUs
ppsNALUS := configuration.Config.Capture.IPCamera.PPSNALUs
vpsNALUS := configuration.Config.Capture.IPCamera.VPSNALUs
if len(spsNALUS) == 0 || len(ppsNALUS) == 0 {
log.Log.Warning("capture.main.HandleRecordStream(motiondetection): missing SPS/PPS at recording start")
}
start := false
// Create a video file, and set the dimensions.
mp4Video := video.NewMP4(fullName, spsNALUS, ppsNALUS, vpsNALUS, configuration.Config.Capture.MaxLengthRecording)
mp4Video.SetWidth(width)
mp4Video.SetHeight(height)
// Read as many packets as we need.
var cursorError error
var pkt packets.Packet
var nextPkt packets.Packet
recordingCursor := queue.DelayedGopCount(int(config.Capture.PreRecording + 1))
if cursorError == nil {
pkt, cursorError = recordingCursor.ReadPacket()
if videoCodec == "H264" {
videoTrack = mp4Video.AddVideoTrack("H264")
} else if videoCodec == "H265" {
videoTrack = mp4Video.AddVideoTrack("H265")
}
if audioCodec == "AAC" {
audioTrack = mp4Video.AddAudioTrack("AAC")
} else if audioCodec == "PCM_MULAW" {
log.Log.Debug("capture.main.HandleRecordStream(motiondetection): no AAC audio codec detected, skipping audio track.")
}
for cursorError == nil {
@@ -425,68 +541,104 @@ func HandleRecordStream(queue *packets.Queue, configDirectory string, configurat
log.Log.Error("capture.main.HandleRecordStream(motiondetection): " + cursorError.Error())
}
now = time.Now().UnixMilli()
select {
case motion := <-communication.HandleMotion:
motionTimestamp = now
log.Log.Info("capture.main.HandleRecordStream(motiondetection): motion detected while recording. Expanding recording.")
numberOfChanges := motion.NumberOfChanges
log.Log.Info("capture.main.HandleRecordStream(motiondetection): Received message with recording data, detected changes to save: " + strconv.Itoa(numberOfChanges))
default:
}
if (motionTimestamp+postRecording-now < 0 || now-startRecording > maxRecordingPeriod-500) && nextPkt.IsKeyFrame {
log.Log.Info("capture.main.HandleRecordStream(motiondetection): timestamp+postRecording-now < 0 - " + strconv.FormatInt(motionTimestamp+postRecording-now, 10) + " < 0")
log.Log.Info("capture.main.HandleRecordStream(motiondetection): now-startRecording > maxRecordingPeriod-500 - " + strconv.FormatInt(now-startRecording, 10) + " > " + strconv.FormatInt(maxRecordingPeriod-500, 10))
log.Log.Info("capture.main.HandleRecordStream(motiondetection): closing recording (timestamp: " + strconv.FormatInt(motionTimestamp, 10) + ", postRecording: " + strconv.FormatInt(postRecording, 10) + ", now: " + strconv.FormatInt(now, 10) + ", startRecording: " + strconv.FormatInt(startRecording, 10) + ", maxRecordingPeriod: " + strconv.FormatInt(maxRecordingPeriod, 10))
break
}
if pkt.IsKeyFrame && !start && pkt.Time >= lastDuration {
if pkt.IsKeyFrame && !start && pkt.CurrentTime >= startRecording {
// We start the recording once we have a keyframe and the packet time has reached the recording start time.
// It could be that we start from the beginning of the recording.
log.Log.Debug("capture.main.HandleRecordStream(motiondetection): write frames")
start = true
}
if start {
ttime := convertPTS(pkt.TimeLegacy)
pts := convertPTS(pkt.TimeLegacy)
if pkt.IsVideo {
if err := myMuxer.Write(videoTrack, pkt.Data, ttime, ttime); err != nil {
log.Log.Debug("capture.main.HandleRecordStream(motiondetection): add video sample")
if err := mp4Video.AddSampleToTrack(videoTrack, pkt.IsKeyFrame, pkt.Data, pts); err != nil {
log.Log.Error("capture.main.HandleRecordStream(motiondetection): " + err.Error())
}
} else if pkt.IsAudio {
log.Log.Debug("capture.main.HandleRecordStream(motiondetection): add audio sample")
if pkt.Codec == "AAC" {
if err := myMuxer.Write(audioTrack, pkt.Data, ttime, ttime); err != nil {
if err := mp4Video.AddSampleToTrack(audioTrack, pkt.IsKeyFrame, pkt.Data, pts); err != nil {
log.Log.Error("capture.main.HandleRecordStream(motiondetection): " + err.Error())
}
} else if pkt.Codec == "PCM_MULAW" {
// TODO: transcode to AAC, some work to do..
// We might need to use ffmpeg to transcode the audio to AAC.
// For now we will skip the audio track.
log.Log.Debug("capture.main.HandleRecordStream(motiondetection): no AAC audio codec detected, skipping audio track.")
}
}
// We will sync to file every keyframe.
if pkt.IsKeyFrame {
err := file.Sync()
if err != nil {
log.Log.Error("capture.main.HandleRecordStream(motiondetection): " + err.Error())
} else {
log.Log.Debug("capture.main.HandleRecordStream(motiondetection): synced file " + name)
}
}
}
pkt = nextPkt
}
// This will write the trailer as well.
myMuxer.WriteTrailer()
// Update the last duration and last recording time.
// This is used to determine if we need to start a new recording.
lastRecordingTime = pkt.CurrentTime
// This will close the recording and write the last packet.
if len(mp4Video.SPSNALUs) == 0 && len(configuration.Config.Capture.IPCamera.SPSNALUs) > 0 {
mp4Video.SPSNALUs = configuration.Config.Capture.IPCamera.SPSNALUs
}
if len(mp4Video.PPSNALUs) == 0 && len(configuration.Config.Capture.IPCamera.PPSNALUs) > 0 {
mp4Video.PPSNALUs = configuration.Config.Capture.IPCamera.PPSNALUs
}
if len(mp4Video.VPSNALUs) == 0 && len(configuration.Config.Capture.IPCamera.VPSNALUs) > 0 {
mp4Video.VPSNALUs = configuration.Config.Capture.IPCamera.VPSNALUs
}
if (videoCodec == "H264" && (len(mp4Video.SPSNALUs) == 0 || len(mp4Video.PPSNALUs) == 0)) ||
(videoCodec == "H265" && (len(mp4Video.VPSNALUs) == 0 || len(mp4Video.SPSNALUs) == 0 || len(mp4Video.PPSNALUs) == 0)) {
log.Log.Warning("capture.main.HandleRecordStream(motiondetection): closing MP4 without full parameter sets, moov may be incomplete")
}
mp4Video.Close(&config)
log.Log.Info("capture.main.HandleRecordStream(motiondetection): file save: " + name)
lastDuration = pkt.Time
lastRecordingTime = time.Now().Unix()
file.Close()
file = nil
// Update the name of the recording with the duration.
// We will update the name of the recording with the duration in milliseconds.
if mp4Video.VideoTotalDuration > 0 {
duration := mp4Video.VideoTotalDuration
// Check if need to convert to fragmented using bento
if config.Capture.Fragmented == "true" && config.Capture.FragmentedDuration > 0 {
utils.CreateFragmentedMP4(fullName, config.Capture.FragmentedDuration)
// Update the name with the duration in milliseconds.
s := strconv.FormatInt(displayTimeSeconds, 10) + "_" +
strconv.Itoa(len(strconv.FormatInt(displayTimeMilliseconds, 10))) + "-" +
strconv.FormatInt(displayTimeMilliseconds, 10) + "_" +
config.Name + "_" +
motionRectangleString + "_" +
strconv.Itoa(numberOfChanges) + "_" + // number of changes
strconv.FormatInt(int64(duration), 10) // + "_" + // duration of recording in milliseconds
//utils.VERSION // version of the agent
oldName := name
name = s + ".mp4"
fullName = configDirectory + "/data/recordings/" + name
log.Log.Info("capture.main.HandleRecordStream(motiondetection): renamed file from: " + oldName + " to: " + name)
// Rename the file to the new name.
err := os.Rename(
configDirectory+"/data/recordings/"+oldName,
configDirectory+"/data/recordings/"+s+".mp4")
if err != nil {
log.Log.Error("capture.main.HandleRecordStream(motiondetection): error renaming file: " + err.Error())
}
} else {
log.Log.Info("capture.main.HandleRecordStream(motiondetection): no video data recorded, not renaming file.")
}
// Check if we need to encrypt the recording.
@@ -534,6 +686,10 @@ func HandleRecordStream(queue *packets.Queue, configDirectory string, configurat
// @Success 200 {object} models.APIResponse
func VerifyCamera(c *gin.Context) {
// Start OpenTelemetry tracing
ctxVerifyCamera, span := tracer.Start(context.Background(), "VerifyCamera", trace.WithSpanKind(trace.SpanKindServer))
defer span.End()
var cameraStreams models.CameraStreams
err := c.BindJSON(&cameraStreams)
@@ -559,12 +715,11 @@ func VerifyCamera(c *gin.Context) {
Url: rtspUrl,
}
err := rtspClient.Connect(ctx)
err := rtspClient.Connect(ctx, ctxVerifyCamera)
if err == nil {
// Get the streams from the rtsp client.
streams, _ := rtspClient.GetStreams()
videoIdx := -1
audioIdx := -1
for i, stream := range streams {
@@ -575,7 +730,7 @@ func VerifyCamera(c *gin.Context) {
}
}
err := rtspClient.Close()
err := rtspClient.Close(ctxVerifyCamera)
if err == nil {
if videoIdx > -1 {
c.JSON(200, models.APIResponse{
@@ -604,7 +759,7 @@ func VerifyCamera(c *gin.Context) {
}
}
func Base64Image(captureDevice *Capture, communication *models.Communication) string {
func Base64Image(captureDevice *Capture, communication *models.Communication, configuration *models.Configuration) string {
// We'll try to get a snapshot from the camera.
var queue *packets.Queue
var cursor *packets.QueueCursor
@@ -634,7 +789,8 @@ func Base64Image(captureDevice *Capture, communication *models.Communication) st
var img image.YCbCr
img, err = (*rtspClient).DecodePacket(pkt)
if err == nil {
bytes, _ := utils.ImageToBytes(&img)
imageResized, _ := utils.ResizeImage(&img, uint(configuration.Config.Capture.IPCamera.BaseWidth), uint(configuration.Config.Capture.IPCamera.BaseHeight))
bytes, _ := utils.ImageToBytes(imageResized)
encodedImage = base64.StdEncoding.EncodeToString(bytes)
break
} else {
@@ -695,6 +851,6 @@ func convertPTS(v time.Duration) uint64 {
return uint64(v.Milliseconds())
}
func convertPTS2(v int64) uint64 {
/*func convertPTS2(v int64) uint64 {
return uint64(v) / 100
}
}*/
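The parameter-set normalization described in the commit messages hinges on splitting a concatenated Annex-B blob into individual NALUs before the SPS/PPS/VPS are aggregated. A minimal sketch of that splitting step, assuming standard 3- and 4-byte start codes (the function name `splitAnnexB` and the sample bytes are illustrative, not the repo's actual `splitParamSetNALUs`):

```go
package main

import (
	"bytes"
	"fmt"
)

// splitAnnexB splits a concatenated Annex-B blob into raw NALUs by
// scanning for 3- and 4-byte start codes and dropping empty entries.
// Data before the first start code is discarded.
func splitAnnexB(blob []byte) [][]byte {
	var nalus [][]byte
	start := -1
	i := 0
	for i < len(blob) {
		var scLen int
		if bytes.HasPrefix(blob[i:], []byte{0, 0, 0, 1}) {
			scLen = 4
		} else if bytes.HasPrefix(blob[i:], []byte{0, 0, 1}) {
			scLen = 3
		}
		if scLen > 0 {
			if start >= 0 && i > start {
				nalus = append(nalus, blob[start:i])
			}
			i += scLen
			start = i
			continue
		}
		i++
	}
	if start >= 0 && start < len(blob) {
		nalus = append(nalus, blob[start:])
	}
	return nalus
}

func main() {
	// A fake blob holding two H264 parameter-set NALUs behind start codes.
	blob := []byte{0, 0, 0, 1, 0x67, 0xAA, 0, 0, 1, 0x68, 0xBB}
	for _, n := range splitAnnexB(blob) {
		// The low 5 bits of the first byte are the H264 NALU type (7=SPS, 8=PPS).
		fmt.Printf("nalu type=%d len=%d\n", n[0]&0x1F, len(n))
	}
}
```

A sanitizer built on this would keep only distinct SPS/PPS (and VPS for HEVC) NALUs and fall back to the raw blob when no start codes are found.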

View File

@@ -131,7 +131,7 @@ func HandleUpload(configDirectory string, configuration *models.Configuration, c
log.Log.Error("HandleUpload: " + err.Error())
}
} else {
delay = 20 * time.Second // slow down
delay = 5 * time.Second // slow down
if err != nil {
log.Log.Error("HandleUpload: " + err.Error())
}
@@ -672,6 +672,7 @@ func HandleLiveStreamSD(livestreamCursor *packets.QueueCursor, configuration *mo
// Check if we need to enable the live stream
if config.Capture.Liveview != "false" {
deviceId := config.Key
hubKey := ""
if config.Cloud == "s3" && config.S3 != nil && config.S3.Publickey != "" {
hubKey = config.S3.Publickey
@@ -705,25 +706,79 @@ func HandleLiveStreamSD(livestreamCursor *packets.QueueCursor, configuration *mo
log.Log.Info("cloud.HandleLiveStreamSD(): Sending base64 encoded images to MQTT.")
img, err := rtspClient.DecodePacket(pkt)
if err == nil {
bytes, _ := utils.ImageToBytes(&img)
encoded := base64.StdEncoding.EncodeToString(bytes)
imageResized, _ := utils.ResizeImage(&img, uint(config.Capture.IPCamera.BaseWidth), uint(config.Capture.IPCamera.BaseHeight))
bytes, _ := utils.ImageToBytes(imageResized)
valueMap := make(map[string]interface{})
valueMap["image"] = encoded
message := models.Message{
Payload: models.Payload{
Action: "receive-sd-stream",
DeviceId: configuration.Config.Key,
Value: valueMap,
},
}
payload, err := models.PackageMQTTMessage(configuration, message)
if err == nil {
mqttClient.Publish("kerberos/hub/"+hubKey, 0, false, payload)
chunking := config.Capture.LiveviewChunking
if chunking == "true" {
// Split the encoded image into fixed-size chunks to keep each MQTT message
// small enough for the broker. The raw JPEG bytes are chunked as-is (no
// base64 encoding), so the receiver must be able to reassemble binary payloads.
chunkSize := 25 * 1024 // 25KB chunks
var chunks [][]byte
for i := 0; i < len(bytes); i += chunkSize {
end := i + chunkSize
if end > len(bytes) {
end = len(bytes)
}
chunk := bytes[i:end]
chunks = append(chunks, chunk)
}
log.Log.Infof("cloud.HandleLiveStreamSD(): Sending %d chunks of size %d bytes.", len(chunks), chunkSize)
timestamp := time.Now().Unix()
for i, chunk := range chunks {
valueMap := make(map[string]interface{})
valueMap["id"] = timestamp
valueMap["chunk"] = chunk
valueMap["chunkIndex"] = i
valueMap["chunkSize"] = chunkSize
valueMap["chunkCount"] = len(chunks)
message := models.Message{
Payload: models.Payload{
Version: "v1.0.0",
Action: "receive-sd-stream",
DeviceId: deviceId,
Value: valueMap,
},
}
payload, err := models.PackageMQTTMessage(configuration, message)
if err == nil {
mqttClient.Publish("kerberos/hub/"+hubKey+"/"+deviceId, 1, false, payload)
log.Log.Infof("cloud.HandleLiveStreamSD(): sent chunk %d/%d to MQTT topic kerberos/hub/%s/%s", i+1, len(chunks), hubKey, deviceId)
time.Sleep(33 * time.Millisecond) // Sleep to avoid flooding the MQTT broker with messages
} else {
log.Log.Info("cloud.HandleLiveStreamSD(): something went wrong while packaging the MQTT message: " + string(payload))
}
}
} else {
log.Log.Info("cloud.HandleLiveStreamSD(): something went wrong while sending acknowledge config to hub: " + string(payload))
valueMap := make(map[string]interface{})
valueMap["image"] = bytes
message := models.Message{
Payload: models.Payload{
Action: "receive-sd-stream",
DeviceId: configuration.Config.Key,
Value: valueMap,
},
}
payload, err := models.PackageMQTTMessage(configuration, message)
if err == nil {
mqttClient.Publish("kerberos/hub/"+hubKey, 0, false, payload)
} else {
log.Log.Info("cloud.HandleLiveStreamSD(): something went wrong while packaging the MQTT message: " + string(payload))
}
}
}
time.Sleep(1000 * time.Millisecond) // Sleep to avoid flooding the MQTT broker with messages
}
} else {
@@ -749,6 +804,12 @@ func HandleLiveStreamHD(livestreamCursor *packets.QueueCursor, configuration *mo
streams, _ := rtspClient.GetStreams()
videoTrack := webrtc.NewVideoTrack(streams)
audioTrack := webrtc.NewAudioTrack(streams)
if videoTrack == nil && audioTrack == nil {
log.Log.Error("cloud.HandleLiveStreamHD(): failed to create both video and audio tracks")
return
}
go webrtc.WriteToTrack(livestreamCursor, configuration, communication, mqttClient, videoTrack, audioTrack, rtspClient)
if config.Capture.ForwardWebRTC == "true" {
@@ -810,7 +871,8 @@ func HandleRealtimeProcessing(processingCursor *packets.QueueCursor, configurati
log.Log.Info("cloud.RealtimeProcessing(): Sending base64 encoded images to MQTT.")
img, err := rtspClient.DecodePacket(pkt)
if err == nil {
bytes, _ := utils.ImageToBytes(&img)
imageResized, _ := utils.ResizeImage(&img, uint(config.Capture.IPCamera.BaseWidth), uint(config.Capture.IPCamera.BaseHeight))
bytes, _ := utils.ImageToBytes(imageResized)
encoded := base64.StdEncoding.EncodeToString(bytes)
valueMap := make(map[string]interface{})
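The 25 KB chunking loop added to `HandleLiveStreamSD` can be isolated into a small pure helper. This sketch mirrors the splitting logic shown in the diff (the helper name `chunkPayload` is illustrative):

```go
package main

import "fmt"

// chunkPayload splits a payload into fixed-size chunks; the final chunk
// holds whatever remains. This mirrors the 25 KB MQTT chunking used for
// the SD live stream.
func chunkPayload(data []byte, chunkSize int) [][]byte {
	var chunks [][]byte
	for i := 0; i < len(data); i += chunkSize {
		end := i + chunkSize
		if end > len(data) {
			end = len(data)
		}
		chunks = append(chunks, data[i:end])
	}
	return chunks
}

func main() {
	data := make([]byte, 60*1024) // e.g. a 60 KB JPEG
	chunks := chunkPayload(data, 25*1024)
	fmt.Println(len(chunks)) // prints 3 (25 KB + 25 KB + 10 KB)
}
```

Each chunk is then published with an `id`, `chunkIndex`, and `chunkCount` so the hub can reassemble the frame in order.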

View File

@@ -3,14 +3,20 @@ package cloud
import (
"crypto/tls"
"errors"
"io/ioutil"
"io"
"net/http"
"os"
"time"
"github.com/kerberos-io/agent/machinery/src/log"
"github.com/kerberos-io/agent/machinery/src/models"
)
// We will count the number of consecutive failed uploads.
// Once we have failed config.KStorage.MaxRetries times in a row, we back off and start sending to the secondary storage.
var kstorageRetryCount = 0
var kstorageRetryTimeout = time.Now().Unix()
func UploadKerberosVault(configuration *models.Configuration, fileName string) (bool, bool, error) {
config := configuration.Config
@@ -19,7 +25,7 @@ func UploadKerberosVault(configuration *models.Configuration, fileName string) (
config.KStorage.SecretAccessKey == "" ||
config.KStorage.Directory == "" ||
config.KStorage.URI == "" {
err := "UploadKerberosVault: Kerberos Vault not properly configured."
err := "UploadKerberosVault: Kerberos Vault not properly configured"
log.Log.Info(err)
return false, false, errors.New(err)
}
@@ -42,7 +48,7 @@ func UploadKerberosVault(configuration *models.Configuration, fileName string) (
defer file.Close()
}
if err != nil {
err := "UploadKerberosVault: Upload Failed, file doesn't exists anymore."
err := "UploadKerberosVault: Upload Failed, file doesn't exist anymore"
log.Log.Info(err)
return false, false, errors.New(err)
}
@@ -52,76 +58,95 @@ func UploadKerberosVault(configuration *models.Configuration, fileName string) (
publicKey = config.HubKey
}
req, err := http.NewRequest("POST", config.KStorage.URI+"/storage", file)
if err != nil {
errorMessage := "UploadKerberosVault: error reading request, " + config.KStorage.URI + "/storage: " + err.Error()
log.Log.Error(errorMessage)
return false, true, errors.New(errorMessage)
}
req.Header.Set("Content-Type", "video/mp4")
req.Header.Set("X-Kerberos-Storage-CloudKey", publicKey)
req.Header.Set("X-Kerberos-Storage-AccessKey", config.KStorage.AccessKey)
req.Header.Set("X-Kerberos-Storage-SecretAccessKey", config.KStorage.SecretAccessKey)
req.Header.Set("X-Kerberos-Storage-Provider", config.KStorage.Provider)
req.Header.Set("X-Kerberos-Storage-FileName", fileName)
req.Header.Set("X-Kerberos-Storage-Device", config.Key)
req.Header.Set("X-Kerberos-Storage-Capture", "IPCamera")
req.Header.Set("X-Kerberos-Storage-Directory", config.KStorage.Directory)
// We need to check if we are in a retry timeout.
if kstorageRetryTimeout <= time.Now().Unix() {
var client *http.Client
if os.Getenv("AGENT_TLS_INSECURE") == "true" {
tr := &http.Transport{
TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
req, err := http.NewRequest("POST", config.KStorage.URI+"/storage", file)
if err != nil {
errorMessage := "UploadKerberosVault: error reading request, " + config.KStorage.URI + "/storage: " + err.Error()
log.Log.Error(errorMessage)
return false, true, errors.New(errorMessage)
}
client = &http.Client{Transport: tr}
} else {
client = &http.Client{}
}
req.Header.Set("Content-Type", "video/mp4")
req.Header.Set("X-Kerberos-Storage-CloudKey", publicKey)
req.Header.Set("X-Kerberos-Storage-AccessKey", config.KStorage.AccessKey)
req.Header.Set("X-Kerberos-Storage-SecretAccessKey", config.KStorage.SecretAccessKey)
req.Header.Set("X-Kerberos-Storage-Provider", config.KStorage.Provider)
req.Header.Set("X-Kerberos-Storage-FileName", fileName)
req.Header.Set("X-Kerberos-Storage-Device", config.Key)
req.Header.Set("X-Kerberos-Storage-Capture", "IPCamera")
req.Header.Set("X-Kerberos-Storage-Directory", config.KStorage.Directory)
resp, err := client.Do(req)
if resp != nil {
defer resp.Body.Close()
}
var client *http.Client
if os.Getenv("AGENT_TLS_INSECURE") == "true" {
tr := &http.Transport{
TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
}
client = &http.Client{Transport: tr}
} else {
client = &http.Client{}
}
if err == nil {
resp, err := client.Do(req)
if resp != nil {
body, err := ioutil.ReadAll(resp.Body)
if err == nil {
if resp.StatusCode == 200 {
log.Log.Info("UploadKerberosVault: Upload Finished, " + resp.Status + ", " + string(body))
return true, true, nil
} else {
log.Log.Info("UploadKerberosVault: Upload Failed, " + resp.Status + ", " + string(body))
defer resp.Body.Close()
}
if err == nil {
if resp != nil {
body, err := io.ReadAll(resp.Body)
if err == nil {
if resp.StatusCode == 200 {
kstorageRetryCount = 0
log.Log.Info("UploadKerberosVault: Upload Finished, " + resp.Status + ", " + string(body))
return true, true, nil
} else {
// Increase the retry count; once it reaches MaxRetries, set a
// timeout so the primary storage is not retried until
// config.KStorage.Timeout seconds have passed.
if kstorageRetryCount < config.KStorage.MaxRetries {
kstorageRetryCount = (kstorageRetryCount + 1)
}
if kstorageRetryCount == config.KStorage.MaxRetries {
kstorageRetryTimeout = time.Now().Add(time.Duration(config.KStorage.Timeout) * time.Second).Unix()
}
log.Log.Info("UploadKerberosVault: Upload Failed, " + resp.Status + ", " + string(body))
}
}
}
} else {
log.Log.Info("UploadKerberosVault: Upload Failed, " + err.Error())
}
} else {
log.Log.Info("UploadKerberosVault: Upload Failed, " + err.Error())
}
// We might need to check if we can upload to our secondary storage.
if config.KStorageSecondary.AccessKey == "" ||
config.KStorageSecondary.SecretAccessKey == "" ||
config.KStorageSecondary.Directory == "" ||
config.KStorageSecondary.URI == "" {
log.Log.Info("UploadKerberosVault: Secondary Kerberos Vault not properly configured.")
log.Log.Info("UploadKerberosVault (Secondary): Secondary Kerberos Vault not properly configured.")
} else {
log.Log.Info("UploadKerberosVault: Uploading to Secondary Kerberos Vault (" + config.KStorageSecondary.URI + ")")
if kstorageRetryCount < config.KStorage.MaxRetries {
log.Log.Info("UploadKerberosVault (Secondary): Do not upload to secondary storage, we are still in retry policy.")
return false, true, nil
}
log.Log.Info("UploadKerberosVault (Secondary): Uploading to Secondary Kerberos Vault (" + config.KStorageSecondary.URI + ")")
file, err = os.OpenFile(fullname, os.O_RDWR, 0755)
if file != nil {
defer file.Close()
}
if err != nil {
err := "UploadKerberosVault: Upload Failed, file doesn't exists anymore."
err := "UploadKerberosVault (Secondary): Upload Failed, file doesn't exist anymore"
log.Log.Info(err)
return false, false, errors.New(err)
}
req, err := http.NewRequest("POST", config.KStorageSecondary.URI+"/storage", file)
if err != nil {
errorMessage := "UploadKerberosVault: error reading request, " + config.KStorageSecondary.URI + "/storage: " + err.Error()
errorMessage := "UploadKerberosVault (Secondary): error reading request, " + config.KStorageSecondary.URI + "/storage: " + err.Error()
log.Log.Error(errorMessage)
return false, true, errors.New(errorMessage)
}
@@ -152,13 +177,13 @@ func UploadKerberosVault(configuration *models.Configuration, fileName string) (
if err == nil {
if resp != nil {
body, err := ioutil.ReadAll(resp.Body)
body, err := io.ReadAll(resp.Body)
if err == nil {
if resp.StatusCode == 200 {
log.Log.Info("UploadKerberosVault: Upload Finished to secondary, " + resp.Status + ", " + string(body))
log.Log.Info("UploadKerberosVault (Secondary): Upload Finished to secondary, " + resp.Status + ", " + string(body))
return true, true, nil
} else {
log.Log.Info("UploadKerberosVault: Upload Failed to secondary, " + resp.Status + ", " + string(body))
log.Log.Info("UploadKerberosVault (Secondary): Upload Failed to secondary, " + resp.Status + ", " + string(body))
}
}
}
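The retry policy introduced for the vault upload boils down to a small state machine: count consecutive failures, and once `MaxRetries` is reached, gate the primary endpoint behind a timeout while the secondary takes over. A sketch under those assumptions (the `retryGate` type and method names are hypothetical, not part of the repository):

```go
package main

import (
	"fmt"
	"time"
)

// retryGate mirrors the vault upload retry policy: after maxRetries
// consecutive failures, primary uploads pause until the timeout expires
// and traffic may go to secondary storage instead.
type retryGate struct {
	count      int
	maxRetries int
	timeoutSec int64
	retryUntil int64 // unix seconds; 0 means no active backoff
}

// allowPrimary reports whether the primary storage may be tried now.
func (g *retryGate) allowPrimary(now int64) bool {
	return g.retryUntil <= now
}

// onFailure bumps the counter and arms the backoff window when the
// retry budget is exhausted.
func (g *retryGate) onFailure(now int64) {
	if g.count < g.maxRetries {
		g.count++
	}
	if g.count == g.maxRetries {
		g.retryUntil = now + g.timeoutSec
	}
}

// onSuccess resets the counter, re-enabling the primary storage.
func (g *retryGate) onSuccess() { g.count = 0 }

func main() {
	g := &retryGate{maxRetries: 3, timeoutSec: 300}
	now := time.Now().Unix()
	for i := 0; i < 3; i++ {
		g.onFailure(now)
	}
	fmt.Println(g.allowPrimary(now)) // prints false: inside the backoff window
}
```

The diff implements the same idea with the package-level `kstorageRetryCount` and `kstorageRetryTimeout` variables, which works because uploads run sequentially; a struct like this would be needed if uploads ever ran concurrently.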

View File

@@ -9,6 +9,7 @@ import (
mqtt "github.com/eclipse/paho.mqtt.golang"
"github.com/gin-gonic/gin"
"go.opentelemetry.io/otel"
"github.com/kerberos-io/agent/machinery/src/capture"
"github.com/kerberos-io/agent/machinery/src/cloud"
@@ -23,9 +24,15 @@ import (
"github.com/tevino/abool"
)
func Bootstrap(configDirectory string, configuration *models.Configuration, communication *models.Communication, captureDevice *capture.Capture) {
var tracer = otel.Tracer("github.com/kerberos-io/agent/machinery/src/components")
func Bootstrap(ctx context.Context, configDirectory string, configuration *models.Configuration, communication *models.Communication, captureDevice *capture.Capture) {
log.Log.Debug("components.Kerberos.Bootstrap(): bootstrapping the kerberos agent.")
bootstrapContext := context.Background()
_, span := tracer.Start(bootstrapContext, "Bootstrap")
// We will keep track of the Kerberos Agent up time
// This is sent to Kerberos Hub in a heartbeat.
uptimeStart := time.Now()
@@ -78,6 +85,8 @@ func Bootstrap(configDirectory string, configuration *models.Configuration, comm
// Configure a MQTT client which helps for a bi-directional communication
mqttClient := routers.ConfigureMQTT(configDirectory, configuration, communication)
span.End()
// Run the agent and fire up all the other
// goroutines which do image capture, motion detection, onvif, etc.
for {
@@ -114,6 +123,9 @@ func Bootstrap(configDirectory string, configuration *models.Configuration, comm
func RunAgent(configDirectory string, configuration *models.Configuration, communication *models.Communication, mqttClient mqtt.Client, uptimeStart time.Time, cameraSettings *models.Camera, captureDevice *capture.Capture) string {
ctx := context.Background()
ctxRunAgent, span := tracer.Start(ctx, "RunAgent")
log.Log.Info("components.Kerberos.RunAgent(): Creating camera and processing threads.")
config := configuration.Config
@@ -124,10 +136,10 @@ func RunAgent(configDirectory string, configuration *models.Configuration, commu
rtspUrl := config.Capture.IPCamera.RTSP
rtspClient := captureDevice.SetMainClient(rtspUrl)
if rtspUrl != "" {
err := rtspClient.Connect(context.Background())
err := rtspClient.Connect(ctx, ctxRunAgent)
if err != nil {
log.Log.Error("components.Kerberos.RunAgent(): error connecting to RTSP stream: " + err.Error())
rtspClient.Close()
rtspClient.Close(ctxRunAgent)
rtspClient = nil
time.Sleep(time.Second * 3)
return status
@@ -145,7 +157,7 @@ func RunAgent(configDirectory string, configuration *models.Configuration, commu
videoStreams, err := rtspClient.GetVideoStreams()
if err != nil || len(videoStreams) == 0 {
log.Log.Error("components.Kerberos.RunAgent(): no video stream found, might be the wrong codec (we only support H264 for the moment)")
rtspClient.Close()
rtspClient.Close(ctxRunAgent)
time.Sleep(time.Second * 3)
return status
}
@@ -161,6 +173,27 @@ func RunAgent(configDirectory string, configuration *models.Configuration, commu
configuration.Config.Capture.IPCamera.Width = width
configuration.Config.Capture.IPCamera.Height = height
// Set the liveview width and height, this is used for the liveview and motion regions (drawing on the hub).
baseWidth := config.Capture.IPCamera.BaseWidth
baseHeight := config.Capture.IPCamera.BaseHeight
// If the liveview height is not set, we will calculate it based on the width and aspect ratio of the camera.
if baseWidth > 0 && baseHeight == 0 {
widthAspectRatio := float64(baseWidth) / float64(width)
configuration.Config.Capture.IPCamera.BaseHeight = int(float64(height) * widthAspectRatio)
} else if baseHeight > 0 && baseWidth > 0 {
configuration.Config.Capture.IPCamera.BaseHeight = baseHeight
configuration.Config.Capture.IPCamera.BaseWidth = baseWidth
} else {
configuration.Config.Capture.IPCamera.BaseHeight = height
configuration.Config.Capture.IPCamera.BaseWidth = width
}
// Set the SPS and PPS values in the configuration.
configuration.Config.Capture.IPCamera.SPSNALUs = [][]byte{videoStream.SPS}
configuration.Config.Capture.IPCamera.PPSNALUs = [][]byte{videoStream.PPS}
configuration.Config.Capture.IPCamera.VPSNALUs = [][]byte{videoStream.VPS}
// Define queues for the main and sub stream.
var queue *packets.Queue
var subQueue *packets.Queue
@@ -182,7 +215,7 @@ func RunAgent(configDirectory string, configuration *models.Configuration, commu
rtspSubClient := captureDevice.SetSubClient(subRtspUrl)
captureDevice.RTSPSubClient = rtspSubClient
err := rtspSubClient.Connect(context.Background())
err := rtspSubClient.Connect(ctx, ctxRunAgent)
if err != nil {
log.Log.Error("components.Kerberos.RunAgent(): error connecting to RTSP sub stream: " + err.Error())
time.Sleep(time.Second * 3)
@@ -194,7 +227,7 @@ func RunAgent(configDirectory string, configuration *models.Configuration, commu
videoSubStreams, err = rtspSubClient.GetVideoStreams()
if err != nil || len(videoSubStreams) == 0 {
log.Log.Error("components.Kerberos.RunAgent(): no video sub stream found, might be the wrong codec (we only support H264 for the moment)")
rtspSubClient.Close()
rtspSubClient.Close(ctxRunAgent)
time.Sleep(time.Second * 3)
return status
}
@@ -208,6 +241,22 @@ func RunAgent(configDirectory string, configuration *models.Configuration, commu
// Set config values as well
configuration.Config.Capture.IPCamera.SubWidth = width
configuration.Config.Capture.IPCamera.SubHeight = height
// If we have a substream, we need to set the width and height of the substream (overriding the information above).
// Set the liveview width and height, this is used for the liveview and motion regions (drawing on the hub).
baseWidth := config.Capture.IPCamera.BaseWidth
baseHeight := config.Capture.IPCamera.BaseHeight
// If the liveview height is not set, we will calculate it based on the width and aspect ratio of the camera.
if baseWidth > 0 && baseHeight == 0 {
widthAspectRatio := float64(baseWidth) / float64(width)
configuration.Config.Capture.IPCamera.BaseHeight = int(float64(height) * widthAspectRatio)
} else if baseHeight > 0 && baseWidth > 0 {
configuration.Config.Capture.IPCamera.BaseHeight = baseHeight
configuration.Config.Capture.IPCamera.BaseWidth = baseWidth
} else {
configuration.Config.Capture.IPCamera.BaseHeight = height
configuration.Config.Capture.IPCamera.BaseWidth = width
}
}
// We are creating a queue to store the RTSP frames in, these frames will be
@@ -217,28 +266,28 @@ func RunAgent(configDirectory string, configuration *models.Configuration, commu
// Set the maximum GOP count, this is used to determine the pre-recording time.
log.Log.Info("components.Kerberos.RunAgent(): SetMaxGopCount was set with: " + strconv.Itoa(int(config.Capture.PreRecording)+1))
queue.SetMaxGopCount(int(config.Capture.PreRecording) + 1) // GOP time frame is set to prerecording (we'll add 2 gops to leave some room).
queue.SetMaxGopCount(1) // We will adjust this later on, when we have the GOP size.
queue.WriteHeader(videoStreams)
go rtspClient.Start(context.Background(), "main", queue, configuration, communication)
go rtspClient.Start(ctx, "main", queue, configuration, communication)
// Main stream is connected and ready to go.
communication.MainStreamConnected = true
// Try to create backchannel
rtspBackChannelClient := captureDevice.SetBackChannelClient(rtspUrl)
err = rtspBackChannelClient.ConnectBackChannel(context.Background())
err = rtspBackChannelClient.ConnectBackChannel(ctx, ctxRunAgent)
if err == nil {
log.Log.Info("components.Kerberos.RunAgent(): opened RTSP backchannel stream: " + rtspUrl)
go rtspBackChannelClient.StartBackChannel(context.Background())
go rtspBackChannelClient.StartBackChannel(ctx, ctxRunAgent)
}
rtspSubClient := captureDevice.RTSPSubClient
if subStreamEnabled && rtspSubClient != nil {
subQueue = packets.NewQueue()
communication.SubQueue = subQueue
subQueue.SetMaxGopCount(3) // GOP time frame is set to prerecording (we'll add 2 gops to leave some room).
subQueue.SetMaxGopCount(1) // GOP time frame is set to 1 for motion detection and livestreaming.
subQueue.WriteHeader(videoSubStreams)
go rtspSubClient.Start(context.Background(), "sub", subQueue, configuration, communication)
go rtspSubClient.Start(ctx, "sub", subQueue, configuration, communication)
// Sub stream is connected and ready to go.
communication.SubStreamConnected = true
@@ -301,6 +350,9 @@ func RunAgent(configDirectory string, configuration *models.Configuration, commu
// If we reach this point, we have a working RTSP connection.
communication.CameraConnected = true
// Otel end span
span.End()
// !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
// This will go into a blocking state, once this channel is triggered
// the agent will cleanup and restart.
@@ -344,7 +396,7 @@ func RunAgent(configDirectory string, configuration *models.Configuration, commu
time.Sleep(time.Second * 3)
err = rtspClient.Close()
err = rtspClient.Close(ctxRunAgent)
if err != nil {
log.Log.Error("components.Kerberos.RunAgent(): error closing RTSP stream: " + err.Error())
time.Sleep(time.Second * 3)
@@ -356,7 +408,7 @@ func RunAgent(configDirectory string, configuration *models.Configuration, commu
communication.Queue = nil
if subStreamEnabled {
err = rtspSubClient.Close()
err = rtspSubClient.Close(ctxRunAgent)
if err != nil {
log.Log.Error("components.Kerberos.RunAgent(): error closing RTSP sub stream: " + err.Error())
time.Sleep(time.Second * 3)
@@ -367,7 +419,7 @@ func RunAgent(configDirectory string, configuration *models.Configuration, commu
communication.SubQueue = nil
}
err = rtspBackChannelClient.Close()
err = rtspBackChannelClient.Close(ctxRunAgent)
if err != nil {
log.Log.Error("components.Kerberos.RunAgent(): error closing RTSP backchannel stream: " + err.Error())
}
@@ -655,7 +707,7 @@ func MakeRecording(c *gin.Context, communication *models.Communication) {
// @Success 200
func GetSnapshotBase64(c *gin.Context, captureDevice *capture.Capture, configuration *models.Configuration, communication *models.Communication) {
// We'll try to get a snapshot from the camera.
base64Image := capture.Base64Image(captureDevice, communication)
base64Image := capture.Base64Image(captureDevice, communication, configuration)
if base64Image != "" {
communication.Image = base64Image
}
@@ -677,7 +729,8 @@ func GetSnapshotRaw(c *gin.Context, captureDevice *capture.Capture, configuratio
image := capture.JpegImage(captureDevice, communication)
// encode image to jpeg
bytes, _ := utils.ImageToBytes(&image)
imageResized, _ := utils.ResizeImage(&image, uint(configuration.Config.Capture.IPCamera.BaseWidth), uint(configuration.Config.Capture.IPCamera.BaseHeight))
bytes, _ := utils.ImageToBytes(imageResized)
// Return image/jpeg
c.Data(200, "image/jpeg", bytes)
@@ -692,7 +745,7 @@ func GetSnapshotRaw(c *gin.Context, captureDevice *capture.Capture, configuratio
// @Success 200
func GetConfig(c *gin.Context, captureDevice *capture.Capture, configuration *models.Configuration, communication *models.Communication) {
// We'll try to get a snapshot from the camera.
base64Image := capture.Base64Image(captureDevice, communication)
base64Image := capture.Base64Image(captureDevice, communication, configuration)
if base64Image != "" {
communication.Image = base64Image
}
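The `BaseWidth`/`BaseHeight` derivation added in `RunAgent` (and repeated for the substream) can be summarized as a pure function with three cases: only a base width set, both set, or neither set. This sketch reproduces the logic shown in the diff (the function name `baseDimensions` is illustrative):

```go
package main

import "fmt"

// baseDimensions derives the liveview dimensions: when only BaseWidth is
// configured, BaseHeight is scaled from the stream's aspect ratio; when
// both are set they are kept as-is; otherwise the stream size is used.
func baseDimensions(width, height, baseWidth, baseHeight int) (int, int) {
	switch {
	case baseWidth > 0 && baseHeight == 0:
		ratio := float64(baseWidth) / float64(width)
		return baseWidth, int(float64(height) * ratio)
	case baseWidth > 0 && baseHeight > 0:
		return baseWidth, baseHeight
	default:
		return width, height
	}
}

func main() {
	// A 1920x1080 stream with only BaseWidth=640 configured.
	w, h := baseDimensions(1280, 720, 640, 0)
	fmt.Println(w, h) // prints 640 360
}
```

These base dimensions feed both the resized snapshot/liveview images and the motion-region scaling in `ProcessMotion`.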

View File

@@ -21,6 +21,7 @@ func ProcessMotion(motionCursor *packets.QueueCursor, configuration *models.Conf
var isPixelChangeThresholdReached = false
var changesToReturn = 0
var motionRectangle models.MotionRectangle
pixelThreshold := config.Capture.PixelChangeThreshold
// Might not be set in the config file, so set it to 150
@@ -62,16 +63,34 @@ func ProcessMotion(motionCursor *packets.QueueCursor, configuration *models.Conf
}
}
// A user might have set the base width and height for the IPCamera.
// This means the polygon coordinates also refer to that base width and height (which might differ from the actual resolution of the packets
// received from the IPCamera). So we rescale the polygon coordinates from the base resolution to the actual one.
baseWidthRatio := 1.0
baseHeightRatio := 1.0
baseWidth := config.Capture.IPCamera.BaseWidth
baseHeight := config.Capture.IPCamera.BaseHeight
if baseWidth > 0 && baseHeight > 0 {
// We'll get the first image to calculate the ratio
img := imageArray[0]
if img != nil {
bounds := img.Bounds()
rows := bounds.Dy()
cols := bounds.Dx()
baseWidthRatio = float64(cols) / float64(baseWidth)
baseHeightRatio = float64(rows) / float64(baseHeight)
}
}
// Calculate mask
var polyObjects []geo.Polygon
if config.Region != nil {
for _, polygon := range config.Region.Polygon {
coords := polygon.Coordinates
poly := geo.Polygon{}
for _, c := range coords {
x := c.X
y := c.Y
x := c.X * baseWidthRatio
y := c.Y * baseHeightRatio
p := geo.NewPoint(x, y)
if !poly.Contains(p) {
poly.Add(p)
@@ -132,7 +151,7 @@ func ProcessMotion(motionCursor *packets.QueueCursor, configuration *models.Conf
if detectMotion {
// Remember additional information about the result of FindMotion
isPixelChangeThresholdReached, changesToReturn = FindMotion(imageArray, coordinatesToCheck, pixelThreshold)
isPixelChangeThresholdReached, changesToReturn, motionRectangle = FindMotion(imageArray, coordinatesToCheck, pixelThreshold)
if isPixelChangeThresholdReached {
// If offline mode is disabled, send a message to the hub
@@ -164,6 +183,7 @@ func ProcessMotion(motionCursor *packets.QueueCursor, configuration *models.Conf
dataToPass := models.MotionDataPartial{
Timestamp: time.Now().Unix(),
NumberOfChanges: changesToReturn,
Rectangle: motionRectangle,
}
communication.HandleMotion <- dataToPass //Save data to the channel
}
@@ -185,24 +205,58 @@ func ProcessMotion(motionCursor *packets.QueueCursor, configuration *models.Conf
log.Log.Debug("computervision.main.ProcessMotion(): stop the motion detection.")
}
func FindMotion(imageArray [3]*image.Gray, coordinatesToCheck []int, pixelChangeThreshold int) (thresholdReached bool, changesDetected int) {
func FindMotion(imageArray [3]*image.Gray, coordinatesToCheck []int, pixelChangeThreshold int) (thresholdReached bool, changesDetected int, motionRectangle models.MotionRectangle) {
image1 := imageArray[0]
image2 := imageArray[1]
image3 := imageArray[2]
threshold := 60
changes := AbsDiffBitwiseAndThreshold(image1, image2, image3, threshold, coordinatesToCheck)
return changes > pixelChangeThreshold, changes
changes, motionRectangle := AbsDiffBitwiseAndThreshold(image1, image2, image3, threshold, coordinatesToCheck)
return changes > pixelChangeThreshold, changes, motionRectangle
}
func AbsDiffBitwiseAndThreshold(img1 *image.Gray, img2 *image.Gray, img3 *image.Gray, threshold int, coordinatesToCheck []int) int {
func AbsDiffBitwiseAndThreshold(img1 *image.Gray, img2 *image.Gray, img3 *image.Gray, threshold int, coordinatesToCheck []int) (int, models.MotionRectangle) {
changes := 0
var pixelList [][]int
for i := 0; i < len(coordinatesToCheck); i++ {
pixel := coordinatesToCheck[i]
diff := int(img3.Pix[pixel]) - int(img1.Pix[pixel])
diff2 := int(img3.Pix[pixel]) - int(img2.Pix[pixel])
if (diff > threshold || diff < -threshold) && (diff2 > threshold || diff2 < -threshold) {
changes++
// Store the pixel coordinates where the change is detected
pixelList = append(pixelList, []int{pixel % img1.Bounds().Dx(), pixel / img1.Bounds().Dx()})
}
}
return changes
// Calculate rectangle of pixelList (startX, startY, endX, endY)
var motionRectangle models.MotionRectangle
if len(pixelList) > 0 {
startX := pixelList[0][0]
startY := pixelList[0][1]
endX := startX
endY := startY
for _, pixel := range pixelList {
if pixel[0] < startX {
startX = pixel[0]
}
if pixel[1] < startY {
startY = pixel[1]
}
if pixel[0] > endX {
endX = pixel[0]
}
if pixel[1] > endY {
endY = pixel[1]
}
}
log.Log.Debugf("Rectangle of changes detected: startX: %d, startY: %d, endX: %d, endY: %d", startX, startY, endX, endY)
motionRectangle = models.MotionRectangle{
X: startX,
Y: startY,
Width: endX - startX,
Height: endY - startY,
}
log.Log.Debugf("Motion rectangle: %+v", motionRectangle)
}
return changes, motionRectangle
}
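The bounding-rectangle logic in AbsDiffBitwiseAndThreshold can be exercised in isolation. Below is a minimal sketch: `boundingRect` is a hypothetical helper that extracts just the min/max scan from the function above, and `MotionRectangle` is a trimmed-down stand-in for the models type, not the agent's actual API.

```go
package main

import "fmt"

// MotionRectangle mirrors the shape of the agent's models.MotionRectangle.
type MotionRectangle struct {
	X, Y, Width, Height int
}

// boundingRect computes the smallest rectangle enclosing all changed pixels,
// following the same min/max scan used in AbsDiffBitwiseAndThreshold.
func boundingRect(pixelList [][]int) MotionRectangle {
	if len(pixelList) == 0 {
		return MotionRectangle{}
	}
	startX, startY := pixelList[0][0], pixelList[0][1]
	endX, endY := startX, startY
	for _, p := range pixelList {
		if p[0] < startX {
			startX = p[0]
		}
		if p[1] < startY {
			startY = p[1]
		}
		if p[0] > endX {
			endX = p[0]
		}
		if p[1] > endY {
			endY = p[1]
		}
	}
	return MotionRectangle{X: startX, Y: startY, Width: endX - startX, Height: endY - startY}
}

func main() {
	// Three changed pixels: the rectangle spans x 95..200, y 80..140.
	changed := [][]int{{120, 80}, {95, 140}, {200, 110}}
	fmt.Printf("%+v\n", boundingRect(changed)) // → {X:95 Y:80 Width:105 Height:60}
}
```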

View File

@@ -239,7 +239,15 @@ func OverrideWithEnvironmentVariables(configuration *models.Configuration) {
configuration.Config.Capture.IPCamera.SubRTSP = value
break
/* ONVIF connection settings */
/* Base width and height for the liveview and motion regions */
case "AGENT_CAPTURE_IPCAMERA_BASE_WIDTH":
configuration.Config.Capture.IPCamera.BaseWidth, _ = strconv.Atoi(value)
break
case "AGENT_CAPTURE_IPCAMERA_BASE_HEIGHT":
configuration.Config.Capture.IPCamera.BaseHeight, _ = strconv.Atoi(value)
break
/* ONVIF connection settings */
case "AGENT_CAPTURE_IPCAMERA_ONVIF":
configuration.Config.Capture.IPCamera.ONVIF = value
break
@@ -392,6 +400,11 @@ func OverrideWithEnvironmentVariables(configuration *models.Configuration) {
configuration.Config.MQTTPassword = value
break
/* MQTT chunking of low-resolution images into multiple messages */
case "AGENT_CAPTURE_LIVEVIEW_CHUNKING":
configuration.Config.Capture.LiveviewChunking = value
break
/* Real-time streaming of keyframes to an MQTT topic */
case "AGENT_REALTIME_PROCESSING":
configuration.Config.RealtimeProcessing = value
@@ -463,6 +476,20 @@ func OverrideWithEnvironmentVariables(configuration *models.Configuration) {
configuration.Config.KStorage.Directory = value
break
/* Retry policy and timeout */
case "AGENT_KERBEROSVAULT_MAX_RETRIES":
maxRetries, err := strconv.Atoi(value)
if err == nil {
configuration.Config.KStorage.MaxRetries = maxRetries
}
break
case "AGENT_KERBEROSVAULT_TIMEOUT":
timeout, err := strconv.Atoi(value)
if err == nil {
configuration.Config.KStorage.Timeout = timeout
}
break
/* When storing in a secondary Vault */
case "AGENT_KERBEROSVAULT_SECONDARY_URI":
configuration.Config.KStorageSecondary.URI = value
@@ -505,9 +532,26 @@ func OverrideWithEnvironmentVariables(configuration *models.Configuration) {
case "AGENT_ENCRYPTION_SYMMETRIC_KEY":
configuration.Config.Encryption.SymmetricKey = value
break
/* When signing is enabled */
case "AGENT_SIGNING":
configuration.Config.Signing.Enabled = value
break
case "AGENT_SIGNING_PRIVATE_KEY":
signingPrivateKey := strings.ReplaceAll(value, "\\n", "\n")
configuration.Config.Signing.PrivateKey = signingPrivateKey
break
}
}
}
// Signing is a new feature, so if empty we set default values.
if configuration.Config.Signing == nil || configuration.Config.Signing.PrivateKey == "" {
configuration.Config.Signing = &models.Signing{
Enabled: "true",
PrivateKey: "-----BEGIN PRIVATE KEY-----\nMIIJQgIBADANBgkqhkiG9w0BAQEFAASCCSwwggkoAgEAAoICAQDoSxjyw08lRxF4Yoqmcaewjq3XjB55dMy4tlN5MGLdr8aAPuNR9Mwh3jlh1bDpwQXNgZkHDV/q9bpdPGGi7SQo2xw+rDuo5Y1f3wdzz+iuCTPbzoGFalE+1PZlU5TEtUtlbt7MRc4pxTaLP3u0P3EtW3KnzcUarcJWZJYxzv7gqVNCA/47BN+1ptqjwz3LAlah5yaftEvVjkaANOsafUswbS4VT44XfSlbKgebORCKDuNgQiyhuV5gU+J0TOaqRWwwMAWV0UoScyJLfhHRBCrUwrCUTwqH9jfkB7pgRFsYoZJd4MKMeHJjFSum+QXCBqInSnwu8c2kJChiLMWqJ+mhpTdfUAmSkeUSStfbbcavIPbDABvMgzOcmYMIVXXe57twU0xdu3AqWLtc9kw1BkUgZblM9pSSpYrIDheEyMs2/hiLgXsIaM0nVQtqwrA7rbeEGuPblzA6hvHgwN9K6HaBqdlGSlpYZ0v3SWIMwmxRB+kIojlyuggm8Qa4mqL97GFDGl6gOBGlNUFTBUVEa3EaJ7NJpGobRGsh/9dXzcW4aYmT9WxlzTlIKksI1ro6KdRfuVWfEs4AnG8bVEJmofK8EUrueB9IdXlcJZB49xolnOZPFohtMe/0U7evQOQP3sZnX+KotCsE7OXJvL09oF58JKoqmK9lPp0+pFBU4g6NjQIDAQABAoICAA+RSWph1t+q5R3nxUxFTYMrhv5IjQe2mDxJpF3B409zolC9OHxgGUisobTY3pBqs0DtKbxUeH2A0ehUH/axEosWHcz3cmIbgxHE9kdlJ9B3Lmss6j/uw+PWutu1sgm5phaIFIvuNNRWhPB6yXUwU4sLRat1+Z9vTmIQiKdtLIrtJz/n2VDvrJxn1N+yAsE20fnrksFKyZuxVsJaZPiX/t5Yv1/z0LjFjVoL7GUA5/Si7csN4ftqEhUrkNr2BvcZlTyffrF4lZCXrtl76RNUaxhqIu3H0gFbV2UfBpuckkfAhNRpXJ4iFSxm4nQbk4ojV8+l21RFOBeDN2Z7Ocu6auP5MnzpopR66vmDCmPoid498VGgDzFQEVkOar8WAa4v9h85QgLKrth6FunmaWJUT6OggQD3yY58GSwp5+ARMETMBP2x6Eld+PGgqoJvPT1+l/e9gOw7/SJ+Wz6hRXZAm/eiXMppHtB7sfea5rscNanPjJkK9NvPM0MX9cq/iA6QjXuETkMbubjo+Cxk3ydZiIQmWQDAx/OgxTyHbeRCVhLPcAphX0clykCuHZpI9Mvvj643/LoE0mjTByWJXf/WuGJA8ElHkjSdokVJ7jumz8OZZHfq0+V7+la2opsObeQANHW5MLWrnHlRVzTGV0IRZDXh7h1ptUJ4ubdvw/GJ2NeTAoIBAQD0lXXdjYKWC4uZ4YlgydP8b1CGda9cBV5RcPt7q9Ya1R2E4ieYyohmzltopvdaOXdsTZzhtdzOzKF+2qNcbBKhBTleYZ8GN5RKbo7HwXWpzfCTjseKHOD/QPwvBKXzLVWNtXn1NrLR79Rv0wbkYF6DtoqpEPf5kMs4bx79yW+mz8FUgdEeMjKphx6Jd5RYlTUxS64K6bnK7gjHNCF2cwdxsh4B6EB649GKeNz4JXi+oQBmOcX5ncXnkJrbju+IjtCkQ40HINVNdX7XeEaaw6KGaImVjw61toPUuDaioYUojufayoyXaUJnDbHQ2tNekEpq5iwnenZCbUKWmSeRe7dLAoIBAQDzIscYujsrmPxiTj2prhG0v36NRNP99mShnnJGowiIs+UBS0EMdOmBFa2sC9uFs/VnreQNYPDJdfr7O5VK9kfbH/PSiiKJ+wVebfdAlWkJYH27JN2Kl2l/OsvRVelNvF3BWIYF46qzGxIM0axaz3T2ZAJ9SrUgeAYhak6uyM4fbexEWX
xDgPGu6C0jB6IAzmHJnnh+j5+4ZXqjVyUxBYtUsWXF/TXomVcT9jxj7aUmS2/Us0XTVOVNpALqqYcekrzsX/wX0OEi5HkivYXHcNaDHx3NuUf6KdYof5DwPUM76qe+5/kWlSIHP3M6rIFK3pYFUnkHn2E8jNWcO97Aio+HAoIBAA+bcff/TbPxbKkXIUMR3fsfx02tONFwbkJYKVQM9Q6lRsrx+4Dee7HDvUWCUgpp3FsG4NnuVvbDTBLiNMZzBwVLZgvFwvYMmePeBjJs/+sj/xQLamQ/z4O6S91cOJK589mlGPEy2lpXKYExQCFWnPFetp5vPMOqH62sOZgMQJmubDHOTt/UaDM1Mhenj8nPS6OnpqV/oKF4awr7Ip+CW5k/unZ4sZSl8PsbF06mZXwUngfn6+Av1y8dpSQZjONz6ZBx1w/7YmEc/EkXnbnGfhqBlTX7+P5TdTofvyzFjc+2vsjRYANRbjFRSGWBcTd5kaYcpfim8eDvQ+6EO2gnMt0CggEAH2ln1Y8B5AEQ4lZ/avOdP//ZhsDUrqPtnl/NHckkahzrwj4JumVEYbP+SxMBGoYEd4+kvgG/OhfvBBRPlm65G9tF8fZ8vdzbdba5UfO7rUV1GP+LS8OCErjy6imySaPDbR5Vul8Oh7NAor1YCidxUf/bvnovanF3QUvtvHEfCDp4YuA4yLPZBaLjaforePUw9w5tPNSravRZYs74dBvmQ1vj7S9ojpN5B5AxfyuNwaPPX+iFZec69MvywISEe3Ozysof1Kfc3lgsOkvIA9tVK32SqSh93xkWnQbWH+OaUxxe7bAko0FDMzKEXZk53wVg1nEwR8bUljEPy+6EOdXs8wKCAQEAsEOWYMY5m7HkeG2XTTvX7ECmmdGl/c4ZDVwzB4IPxqUG7XfLmtsON8YoKOEUpJoc4ANafLXzmU+esUGbH4Ph22IWgP9jzws7jxaN/Zoku64qrSjgEZFTRIpKyhFk/ImWbS9laBW4l+m0tqTTRqoE0QEJf/2uv/04q65zrA70X9z2+KTrAtqOiRQPWl/IxRe9U4OEeGL+oD+YlXKCDsnJ3rwUIOZgJx0HWZg7K35DKwqs1nVi56FBdljiTRKAjVLRedjgDCSfGS1yUZ3krHzpaPt1qgnT3rdtYcIdbYDr66V2/gEEaz6XMGHuTk/ewjzUJxq9UTVeXOCbkRPXgVJg1w==\n-----END PRIVATE KEY-----",
}
}
}
func SaveConfig(configDirectory string, config models.Config, configuration *models.Configuration, communication *models.Communication) error {
@@ -547,6 +591,10 @@ func StoreConfig(configDirectory string, config models.Config) error {
config.Encryption.PrivateKey = encryptionPrivateKey
}
// Reset the basewidth and baseheight
config.Capture.IPCamera.BaseWidth = 0
config.Capture.IPCamera.BaseHeight = 0
// Save into database
if os.Getenv("DEPLOYMENT") == "factory" || os.Getenv("MACHINERY_ENVIRONMENT") == "kubernetes" {
// Write to mongodb

View File

@@ -118,6 +118,16 @@ func (self *Logging) Info(sentence string) {
}
}
func (self *Logging) Infof(format string, args ...interface{}) {
switch self.Logger {
case "go-logging":
gologging.Infof(format, args...)
case "logrus":
logrus.Infof(format, args...)
default:
}
}
func (self *Logging) Warning(sentence string) {
switch self.Logger {
case "go-logging":
@@ -138,6 +148,16 @@ func (self *Logging) Debug(sentence string) {
}
}
func (self *Logging) Debugf(format string, args ...interface{}) {
switch self.Logger {
case "go-logging":
gologging.Debugf(format, args...)
case "logrus":
logrus.Debugf(format, args...)
default:
}
}
func (self *Logging) Error(sentence string) {
switch self.Logger {
case "go-logging":

View File

@@ -46,6 +46,7 @@ type Config struct {
HubSite string `json:"hub_site" bson:"hub_site"`
ConditionURI string `json:"condition_uri" bson:"condition_uri"`
Encryption *Encryption `json:"encryption,omitempty" bson:"encryption,omitempty"`
Signing *Signing `json:"signing,omitempty" bson:"signing,omitempty"`
RealtimeProcessing string `json:"realtimeprocessing,omitempty" bson:"realtimeprocessing,omitempty"`
RealtimeProcessingTopic string `json:"realtimeprocessing_topic" bson:"realtimeprocessing_topic"`
}
@@ -61,9 +62,11 @@ type Capture struct {
Snapshots string `json:"snapshots,omitempty"`
Motion string `json:"motion,omitempty"`
Liveview string `json:"liveview,omitempty"`
LiveviewChunking string `json:"liveview_chunking,omitempty" bson:"liveview_chunking,omitempty"`
Continuous string `json:"continuous,omitempty"`
PostRecording int64 `json:"postrecording"`
PreRecording int64 `json:"prerecording"`
GopSize int `json:"gopsize,omitempty" bson:"gopsize,omitempty"` // GOP size in seconds, used for pre-recording
MaxLengthRecording int64 `json:"maxlengthrecording"`
TranscodingWebRTC string `json:"transcodingwebrtc"`
TranscodingResolution int64 `json:"transcodingresolution"`
@@ -76,18 +79,28 @@ type Capture struct {
// IPCamera configuration, such as the RTSP url of the IPCamera and the FPS.
// Also includes ONVIF integration
type IPCamera struct {
RTSP string `json:"rtsp"`
Width int `json:"width"`
Height int `json:"height"`
FPS string `json:"fps"`
SubRTSP string `json:"sub_rtsp"`
SubWidth int `json:"sub_width"`
SubHeight int `json:"sub_height"`
SubFPS string `json:"sub_fps"`
ONVIF string `json:"onvif,omitempty" bson:"onvif"`
ONVIFXAddr string `json:"onvif_xaddr" bson:"onvif_xaddr"`
ONVIFUsername string `json:"onvif_username" bson:"onvif_username"`
ONVIFPassword string `json:"onvif_password" bson:"onvif_password"`
RTSP string `json:"rtsp"`
Width int `json:"width"`
Height int `json:"height"`
FPS string `json:"fps"`
SubRTSP string `json:"sub_rtsp"`
SubWidth int `json:"sub_width"`
SubHeight int `json:"sub_height"`
BaseWidth int `json:"base_width"`
BaseHeight int `json:"base_height"`
SubFPS string `json:"sub_fps"`
ONVIF string `json:"onvif,omitempty" bson:"onvif"`
ONVIFXAddr string `json:"onvif_xaddr" bson:"onvif_xaddr"`
ONVIFUsername string `json:"onvif_username" bson:"onvif_username"`
ONVIFPassword string `json:"onvif_password" bson:"onvif_password"`
SPSNALUs [][]byte `json:"sps_nalus,omitempty" bson:"sps_nalus,omitempty"`
PPSNALUs [][]byte `json:"pps_nalus,omitempty" bson:"pps_nalus,omitempty"`
VPSNALUs [][]byte `json:"vps_nalus,omitempty" bson:"vps_nalus,omitempty"`
SampleRate int `json:"sample_rate,omitempty" bson:"sample_rate,omitempty"`
Channels int `json:"channels,omitempty" bson:"channels,omitempty"`
}
// USBCamera configuration, such as the device path (/dev/video*)
@@ -159,6 +172,8 @@ type KStorage struct {
SecretAccessKey string `json:"secret_access_key,omitempty" bson:"secret_access_key,omitempty"`
Provider string `json:"provider,omitempty" bson:"provider,omitempty"`
Directory string `json:"directory,omitempty" bson:"directory,omitempty"`
MaxRetries int `json:"max_retries,omitempty" bson:"max_retries,omitempty"`
Timeout int `json:"timeout,omitempty" bson:"timeout,omitempty"`
}
// Dropbox integration
@@ -175,3 +190,9 @@ type Encryption struct {
PrivateKey string `json:"private_key" bson:"private_key"`
SymmetricKey string `json:"symmetric_key" bson:"symmetric_key"`
}
// Signing
type Signing struct {
Enabled string `json:"enabled" bson:"enabled"`
PrivateKey string `json:"private_key" bson:"private_key"`
}

View File

@@ -132,6 +132,7 @@ type Message struct {
// The payload structure which is used to send over
// and receive messages from the MQTT broker
type Payload struct {
Version string `json:"version"` // Version of the message, e.g. "1.0"
Action string `json:"action"`
DeviceId string `json:"device_id"`
Signature string `json:"signature"`

View File

@@ -1,8 +1,9 @@
package models
type MotionDataPartial struct {
Timestamp int64 `json:"timestamp" bson:"timestamp"`
NumberOfChanges int `json:"numberOfChanges" bson:"numberOfChanges"`
Timestamp int64 `json:"timestamp" bson:"timestamp"`
NumberOfChanges int `json:"numberOfChanges" bson:"numberOfChanges"`
Rectangle MotionRectangle `json:"rectangle" bson:"rectangle"`
}
type MotionDataFull struct {
@@ -14,3 +15,10 @@ type MotionDataFull struct {
NumberOfChanges int `json:"numberOfChanges" bson:"numberOfChanges"`
Token int `json:"token" bson:"token"`
}
type MotionRectangle struct {
X int `json:"x" bson:"x"`
Y int `json:"y" bson:"y"`
Width int `json:"width" bson:"width"`
Height int `json:"height" bson:"height"`
}

View File

@@ -17,5 +17,7 @@ type Packet struct {
CompositionTime int64 // packet presentation time minus decode time for H264 B-Frame
Time int64 // packet decode time
TimeLegacy time.Duration
CurrentTime int64 // current time in milliseconds (UNIX timestamp)
Data []byte // packet data
Gopsize int // size of the GOP
}

View File

@@ -45,6 +45,11 @@ func (self *Queue) SetMaxGopCount(n int) {
return
}
func (self *Queue) GetMaxGopCount() int {
n := self.maxgopcount
return n
}
func (self *Queue) WriteHeader(streams []Stream) error {
self.lock.Lock()

View File

@@ -1,6 +1,9 @@
package packets
type Stream struct {
// The ID of the stream.
Index int `json:"index" bson:"index"`
// The name of the stream.
Name string
@@ -39,4 +42,13 @@ type Stream struct {
// IsBackChannel is true if this stream is a back channel.
IsBackChannel bool
// SampleRate is the sample rate of the audio stream.
SampleRate int
// Channels is the number of audio channels.
Channels int
// GopSize is the size of the GOP (Group of Pictures).
GopSize int
}

View File

@@ -15,7 +15,7 @@ import (
func AddRoutes(r *gin.Engine, authMiddleware *jwt.GinJWTMiddleware, configDirectory string, configuration *models.Configuration, communication *models.Communication, captureDevice *capture.Capture) *gin.RouterGroup {
r.GET("/ws", func(c *gin.Context) {
websocket.WebsocketHandler(c, communication, captureDevice)
websocket.WebsocketHandler(c, configuration, communication, captureDevice)
})
// This is legacy should be removed in future! Now everything

View File

@@ -123,7 +123,6 @@ func ConfigureMQTT(configDirectory string, configuration *models.Configuration,
opts.SetClientID(mqttClientID)
log.Log.Info("routers.mqtt.main.ConfigureMQTT(): Set ClientID " + mqttClientID)
rand.Seed(time.Now().UnixNano())
webrtc.CandidateArrays = make(map[string](chan string))
opts.OnConnect = func(c mqtt.Client) {
// We managed to connect to the MQTT broker, hurray!
@@ -389,14 +388,6 @@ func HandleRequestConfig(mqttClient mqtt.Client, hubKey string, payload models.P
// Copy the config, as we don't want to share the encryption part.
deepCopy := configuration.Config
// We need a fix for the width and height if a substream.
// The ROI requires the width and height of the sub stream.
if configuration.Config.Capture.IPCamera.SubRTSP != "" &&
configuration.Config.Capture.IPCamera.SubRTSP != configuration.Config.Capture.IPCamera.RTSP {
deepCopy.Capture.IPCamera.Width = configuration.Config.Capture.IPCamera.SubWidth
deepCopy.Capture.IPCamera.Height = configuration.Config.Capture.IPCamera.SubHeight
}
var configMap map[string]interface{}
inrec, _ := json.Marshal(deepCopy)
json.Unmarshal(inrec, &configMap)

View File

@@ -49,7 +49,7 @@ var upgrader = websocket.Upgrader{
},
}
func WebsocketHandler(c *gin.Context, communication *models.Communication, captureDevice *capture.Capture) {
func WebsocketHandler(c *gin.Context, configuration *models.Configuration, communication *models.Communication, captureDevice *capture.Capture) {
w := c.Writer
r := c.Request
conn, err := upgrader.Upgrade(w, r, nil)
@@ -112,7 +112,7 @@ func WebsocketHandler(c *gin.Context, communication *models.Communication, captu
ctx, cancel := context.WithCancel(context.Background())
sockets[clientID].Cancels["stream-sd"] = cancel
go ForwardSDStream(ctx, clientID, sockets[clientID], communication, captureDevice)
go ForwardSDStream(ctx, clientID, sockets[clientID], configuration, communication, captureDevice)
}
}
}
@@ -131,7 +131,7 @@ func WebsocketHandler(c *gin.Context, communication *models.Communication, captu
}
}
func ForwardSDStream(ctx context.Context, clientID string, connection *Connection, communication *models.Communication, captureDevice *capture.Capture) {
func ForwardSDStream(ctx context.Context, clientID string, connection *Connection, configuration *models.Configuration, communication *models.Communication, captureDevice *capture.Capture) {
var queue *packets.Queue
var cursor *packets.QueueCursor
@@ -159,7 +159,10 @@ logreader:
var img image.YCbCr
img, err = (*rtspClient).DecodePacket(pkt)
if err == nil {
bytes, _ := utils.ImageToBytes(&img)
config := configuration.Config
// Resize the image to the base width and height
imageResized, _ := utils.ResizeImage(&img, uint(config.Capture.IPCamera.BaseWidth), uint(config.Capture.IPCamera.BaseHeight))
bytes, _ := utils.ImageToBytes(imageResized)
encodedImage = base64.StdEncoding.EncodeToString(bytes)
} else {
continue

View File

@@ -21,9 +21,14 @@ import (
"github.com/kerberos-io/agent/machinery/src/encryption"
"github.com/kerberos-io/agent/machinery/src/log"
"github.com/kerberos-io/agent/machinery/src/models"
"github.com/nfnt/resize"
)
const VERSION = "3.3.5"
// VERSION is the agent version. It defaults to "0.0.0" for local dev builds
// and is overridden at build time via:
// go build -ldflags "-X github.com/kerberos-io/agent/machinery/src/utils.VERSION=v1.2.3"
var VERSION = "0.0.0"
const letterBytes = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
@@ -401,9 +406,31 @@ func Decrypt(directoryOrFile string, symmetricKey []byte) {
}
}
func ImageToBytes(img image.Image) ([]byte, error) {
func ImageToBytes(img *image.Image) ([]byte, error) {
buffer := new(bytes.Buffer)
w := bufio.NewWriter(buffer)
err := jpeg.Encode(w, img, &jpeg.Options{Quality: 15})
err := jpeg.Encode(w, *img, &jpeg.Options{Quality: 35})
log.Log.Debug("ImageToBytes() - buffer size: " + strconv.Itoa(buffer.Len()))
return buffer.Bytes(), err
}
func ResizeImage(img image.Image, newWidth uint, newHeight uint) (*image.Image, error) {
if img == nil {
return nil, errors.New("image is nil")
}
// Resize to the requested width and height using Lanczos resampling.
// Passing 0 for either dimension preserves the aspect ratio.
m := resize.Resize(newWidth, newHeight, img, resize.Lanczos3)
return &m, nil
}
func ResizeHeightWithAspectRatio(newWidth int, width int, height int) (int, int) {
if newWidth <= 0 || width <= 0 || height <= 0 {
return width, height
}
// Calculate the new height based on the aspect ratio
newHeight := (newWidth * height) / width
// Return the new dimensions
return newWidth, newHeight
}
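The aspect-ratio helper is pure integer arithmetic, so its behavior is easy to pin down: scaling 1920x1080 to a width of 640 yields 640x360, and invalid input falls back to the original dimensions. The helper body below is copied from the function above so the sketch is self-contained.

```go
package main

import "fmt"

// ResizeHeightWithAspectRatio, copied from the helper above: derive a new
// height from the target width while keeping the original aspect ratio.
func ResizeHeightWithAspectRatio(newWidth int, width int, height int) (int, int) {
	if newWidth <= 0 || width <= 0 || height <= 0 {
		return width, height
	}
	return newWidth, (newWidth * height) / width
}

func main() {
	w, h := ResizeHeightWithAspectRatio(640, 1920, 1080)
	fmt.Println(w, h) // → 640 360

	// Invalid input falls back to the original dimensions.
	w, h = ResizeHeightWithAspectRatio(0, 1920, 1080)
	fmt.Println(w, h) // → 1920 1080
}
```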

machinery/src/video/mp4.go (new file, 1379 lines)

File diff suppressed because it is too large.

View File

@@ -0,0 +1,176 @@
package video
import (
"fmt"
"os"
"testing"
mp4ff "github.com/Eyevinn/mp4ff/mp4"
"github.com/kerberos-io/agent/machinery/src/models"
)
// TestMP4Duration creates an MP4 file simulating a 6-second video recording
// and verifies that the durations in all boxes match the sum of sample durations.
func TestMP4Duration(t *testing.T) {
tmpFile := "/tmp/test_duration.mp4"
defer os.Remove(tmpFile)
// Minimal SPS for H.264 (baseline, 640x480), provided as a raw NALU without an Annex B start code
sps := []byte{0x67, 0x42, 0xc0, 0x1e, 0xd9, 0x00, 0xa0, 0x47, 0xfe, 0xc8}
pps := []byte{0x68, 0xce, 0x38, 0x80}
mp4Video := NewMP4(tmpFile, [][]byte{sps}, [][]byte{pps}, nil, 10)
mp4Video.SetWidth(640)
mp4Video.SetHeight(480)
videoTrack := mp4Video.AddVideoTrack("H264")
// Simulate 6 seconds at 25fps (150 frames, keyframe every 50 frames = 2s)
// PTS in milliseconds (timescale=1000)
frameDuration := uint64(40) // 40ms per frame = 25fps
numFrames := 150
gopSize := 50
// Create a fake Annex B NAL unit (keyframe IDR = type 5, non-keyframe = type 1)
makeFrame := func(isKey bool) []byte {
nalType := byte(0x01) // non-IDR slice
if isKey {
nalType = 0x65 // IDR slice
}
// Start code (4 bytes) + NAL header + some data
frame := []byte{0x00, 0x00, 0x00, 0x01, nalType}
// Add some padding data
for i := 0; i < 100; i++ {
frame = append(frame, byte(i))
}
return frame
}
var expectedDuration uint64
for i := 0; i < numFrames; i++ {
pts := uint64(i) * frameDuration
isKeyframe := i%gopSize == 0
err := mp4Video.AddSampleToTrack(videoTrack, isKeyframe, makeFrame(isKeyframe), pts)
if err != nil {
t.Fatalf("AddSampleToTrack failed at frame %d: %v", i, err)
}
}
expectedDuration = uint64(numFrames) * frameDuration // Should be 6000ms (150 * 40)
// Close with config that has signing key to avoid nil panics
config := &models.Config{
Signing: &models.Signing{
PrivateKey: "",
},
}
mp4Video.Close(config)
// Log what the code computed
t.Logf("VideoTotalDuration: %d ms", mp4Video.VideoTotalDuration)
t.Logf("Expected duration: %d ms", expectedDuration)
t.Logf("Segments: %d", len(mp4Video.SegmentDurations))
var sumSegDur uint64
for i, d := range mp4Video.SegmentDurations {
t.Logf(" Segment %d: duration=%d ms", i, d)
sumSegDur += d
}
t.Logf("Sum of segment durations: %d ms", sumSegDur)
// Now read back the file and inspect the boxes
f, err := os.Open(tmpFile)
if err != nil {
t.Fatalf("Failed to open output file: %v", err)
}
defer f.Close()
fi, err := f.Stat()
if err != nil {
t.Fatalf("Failed to stat output file: %v", err)
}
parsedFile, err := mp4ff.DecodeFile(f)
if err != nil {
t.Fatalf("Failed to decode MP4: %v", err)
}
t.Logf("File size: %d bytes", fi.Size())
// Check moov box
if parsedFile.Moov == nil {
t.Fatal("No moov box found")
}
// Check mvhd duration
mvhd := parsedFile.Moov.Mvhd
t.Logf("mvhd.Duration: %d (timescale=%d) = %.2f seconds", mvhd.Duration, mvhd.Timescale, float64(mvhd.Duration)/float64(mvhd.Timescale))
t.Logf("mvhd.Rate: 0x%08x", mvhd.Rate)
t.Logf("mvhd.Volume: 0x%04x", mvhd.Volume)
// Check each trak
for i, trak := range parsedFile.Moov.Traks {
t.Logf("Track %d:", i)
t.Logf(" tkhd.Duration: %d", trak.Tkhd.Duration)
t.Logf(" mdhd.Duration: %d (timescale=%d) = %.2f seconds", trak.Mdia.Mdhd.Duration, trak.Mdia.Mdhd.Timescale, float64(trak.Mdia.Mdhd.Duration)/float64(trak.Mdia.Mdhd.Timescale))
}
// Check mvex/mehd
if parsedFile.Moov.Mvex != nil && parsedFile.Moov.Mvex.Mehd != nil {
t.Logf("mehd.FragmentDuration: %d", parsedFile.Moov.Mvex.Mehd.FragmentDuration)
}
// Sum up actual sample durations from trun boxes in all segments
var actualTrunDuration uint64
var sampleCount int
for _, seg := range parsedFile.Segments {
for _, frag := range seg.Fragments {
for _, traf := range frag.Moof.Trafs {
// Only count video track (track 1)
if traf.Tfhd.TrackID == 1 {
for _, trun := range traf.Truns {
for _, s := range trun.Samples {
actualTrunDuration += uint64(s.Dur)
sampleCount++
}
}
}
}
}
}
t.Logf("Actual trun sample count: %d", sampleCount)
t.Logf("Actual trun total duration: %d ms", actualTrunDuration)
// Check sidx
if parsedFile.Sidx != nil {
var sidxDuration uint64
for _, ref := range parsedFile.Sidx.SidxRefs {
sidxDuration += uint64(ref.SubSegmentDuration)
}
t.Logf("sidx total duration: %d ms", sidxDuration)
}
// VERIFY: All duration values should be consistent
// The expected duration for 150 frames at 40ms each:
// - The sample-buffering pattern means the LAST sample uses LastVideoSampleDTS as duration
// - So all 150 samples should produce 150 * 40ms = 6000ms total
// But due to the pending sample pattern, the actual trun durations might differ
fmt.Println()
fmt.Println("=== DURATION CONSISTENCY CHECK ===")
fmt.Printf("Expected (150 * 40ms): %d ms\n", expectedDuration)
fmt.Printf("mvhd.Duration: %d ms\n", mvhd.Duration)
fmt.Printf("tkhd.Duration: %d ms\n", parsedFile.Moov.Traks[0].Tkhd.Duration)
fmt.Printf("mdhd.Duration: %d ms\n", parsedFile.Moov.Traks[0].Mdia.Mdhd.Duration)
fmt.Printf("Actual trun durations sum: %d ms\n", actualTrunDuration)
fmt.Printf("VideoTotalDuration: %d ms\n", mp4Video.VideoTotalDuration)
fmt.Printf("Sum of SegmentDurations: %d ms\n", sumSegDur)
fmt.Println()
// The key assertion: header duration must equal trun sum
if mvhd.Duration != actualTrunDuration {
t.Errorf("MISMATCH: mvhd.Duration (%d) != actual trun sum (%d), diff = %d ms",
mvhd.Duration, actualTrunDuration, int64(mvhd.Duration)-int64(actualTrunDuration))
}
if parsedFile.Moov.Traks[0].Mdia.Mdhd.Duration != 0 {
t.Errorf("MISMATCH: mdhd.Duration should be 0 for fragmented MP4, got %d",
parsedFile.Moov.Traks[0].Mdia.Mdhd.Duration)
}
}

View File

@@ -1,6 +1,7 @@
package webrtc
import (
"context"
"encoding/base64"
"encoding/json"
"io"
@@ -22,13 +23,105 @@ import (
pionMedia "github.com/pion/webrtc/v4/pkg/media"
)
var (
CandidatesMutex sync.Mutex
CandidateArrays map[string](chan string)
peerConnectionCount int64
peerConnections map[string]*pionWebRTC.PeerConnection
const (
// Channel buffer sizes
candidateChannelBuffer = 100
rtcpBufferSize = 1500
// Timeouts and intervals
keepAliveTimeout = 15 * time.Second
defaultTimeout = 10 * time.Second
// Track identifiers
trackStreamID = "kerberos-stream"
)
// ConnectionManager manages WebRTC peer connections in a thread-safe manner
type ConnectionManager struct {
mu sync.RWMutex
candidateChannels map[string]chan string
peerConnections map[string]*peerConnectionWrapper
peerConnectionCount int64
}
// peerConnectionWrapper wraps a peer connection with additional metadata
type peerConnectionWrapper struct {
conn *pionWebRTC.PeerConnection
cancelCtx context.CancelFunc
done chan struct{}
closeOnce sync.Once
}
var globalConnectionManager = NewConnectionManager()
// NewConnectionManager creates a new connection manager
func NewConnectionManager() *ConnectionManager {
return &ConnectionManager{
candidateChannels: make(map[string]chan string),
peerConnections: make(map[string]*peerConnectionWrapper),
}
}
// GetOrCreateCandidateChannel gets or creates a candidate channel for a session
func (cm *ConnectionManager) GetOrCreateCandidateChannel(sessionKey string) chan string {
cm.mu.Lock()
defer cm.mu.Unlock()
if ch, exists := cm.candidateChannels[sessionKey]; exists {
return ch
}
ch := make(chan string, candidateChannelBuffer)
cm.candidateChannels[sessionKey] = ch
return ch
}
// CloseCandidateChannel safely closes and removes a candidate channel
func (cm *ConnectionManager) CloseCandidateChannel(sessionKey string) {
cm.mu.Lock()
defer cm.mu.Unlock()
if ch, exists := cm.candidateChannels[sessionKey]; exists {
close(ch)
delete(cm.candidateChannels, sessionKey)
}
}
// AddPeerConnection adds a peer connection to the manager
func (cm *ConnectionManager) AddPeerConnection(sessionID string, wrapper *peerConnectionWrapper) {
cm.mu.Lock()
defer cm.mu.Unlock()
cm.peerConnections[sessionID] = wrapper
}
// RemovePeerConnection removes a peer connection from the manager
func (cm *ConnectionManager) RemovePeerConnection(sessionID string) {
cm.mu.Lock()
defer cm.mu.Unlock()
if wrapper, exists := cm.peerConnections[sessionID]; exists {
if wrapper.cancelCtx != nil {
wrapper.cancelCtx()
}
delete(cm.peerConnections, sessionID)
}
}
// GetPeerConnectionCount returns the current count of active peer connections
func (cm *ConnectionManager) GetPeerConnectionCount() int64 {
return atomic.LoadInt64(&cm.peerConnectionCount)
}
// IncrementPeerCount atomically increments the peer connection count
func (cm *ConnectionManager) IncrementPeerCount() int64 {
return atomic.AddInt64(&cm.peerConnectionCount, 1)
}
// DecrementPeerCount atomically decrements the peer connection count
func (cm *ConnectionManager) DecrementPeerCount() int64 {
return atomic.AddInt64(&cm.peerConnectionCount, -1)
}
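The candidate-channel half of ConnectionManager can be demonstrated standalone. This sketch re-declares only the channel bookkeeping shown above (the peer-connection fields and pion types are omitted), so it compiles without the agent's dependencies:

```go
package main

import (
	"fmt"
	"sync"
)

const candidateChannelBuffer = 100

// ConnectionManager, reduced to the candidate-channel bookkeeping shown above.
type ConnectionManager struct {
	mu                sync.RWMutex
	candidateChannels map[string]chan string
}

func NewConnectionManager() *ConnectionManager {
	return &ConnectionManager{candidateChannels: make(map[string]chan string)}
}

// GetOrCreateCandidateChannel returns the existing channel for a session,
// or creates a buffered one on first use.
func (cm *ConnectionManager) GetOrCreateCandidateChannel(sessionKey string) chan string {
	cm.mu.Lock()
	defer cm.mu.Unlock()
	if ch, exists := cm.candidateChannels[sessionKey]; exists {
		return ch
	}
	ch := make(chan string, candidateChannelBuffer)
	cm.candidateChannels[sessionKey] = ch
	return ch
}

// CloseCandidateChannel closes and forgets the channel for a session.
func (cm *ConnectionManager) CloseCandidateChannel(sessionKey string) {
	cm.mu.Lock()
	defer cm.mu.Unlock()
	if ch, exists := cm.candidateChannels[sessionKey]; exists {
		close(ch)
		delete(cm.candidateChannels, sessionKey)
	}
}

func main() {
	cm := NewConnectionManager()
	ch := cm.GetOrCreateCandidateChannel("camera-1/session-a")
	ch <- "candidate:1 1 UDP 2122252543 10.0.0.5 54321 typ host"

	// A second lookup for the same session returns the same channel.
	same := cm.GetOrCreateCandidateChannel("camera-1/session-a")
	fmt.Println("same channel:", same == ch, "buffered:", len(same))

	cm.CloseCandidateChannel("camera-1/session-a")
	for c := range same { // drains buffered candidates, then exits on close
		fmt.Println("got:", c)
	}
}
```

Closing the channel after removal lets a ranging consumer drain any buffered candidates and then exit cleanly, which is the lifecycle RegisterCandidates and the session teardown rely on.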
type WebRTC struct {
Name string
StunServers []string
@@ -46,7 +139,7 @@ func CreateWebRTC(name string, stunServers []string, turnServers []string, turnS
TurnServers: turnServers,
TurnServersUsername: turnServersUsername,
TurnServersCredential: turnServersCredential,
Timer: time.NewTimer(time.Second * 10),
Timer: time.NewTimer(defaultTimeout),
}
}
@@ -68,19 +161,14 @@ func (w WebRTC) CreateOffer(sd []byte) pionWebRTC.SessionDescription {
}
func RegisterCandidates(key string, candidate models.ReceiveHDCandidatesPayload) {
// Set lock
CandidatesMutex.Lock()
_, ok := CandidateArrays[key]
if !ok {
CandidateArrays[key] = make(chan string, 100)
}
log.Log.Info("webrtc.main.HandleReceiveHDCandidates(): " + candidate.Candidate)
ch := globalConnectionManager.GetOrCreateCandidateChannel(key)
log.Log.Info("webrtc.main.RegisterCandidates(): " + candidate.Candidate)
select {
case CandidateArrays[key] <- candidate.Candidate:
case ch <- candidate.Candidate:
default:
log.Log.Info("webrtc.main.HandleReceiveHDCandidates(): channel is full.")
log.Log.Info("webrtc.main.RegisterCandidates(): channel is full, dropping candidate")
}
CandidatesMutex.Unlock()
}
func RegisterDefaultInterceptors(mediaEngine *pionWebRTC.MediaEngine, interceptorRegistry *interceptor.Registry) error {
@@ -107,12 +195,7 @@ func InitializeWebRTCConnection(configuration *models.Configuration, communicati
// We create a channel which will hold the candidates for this session.
sessionKey := config.Key + "/" + handshake.SessionID
CandidatesMutex.Lock()
_, ok := CandidateArrays[sessionKey]
if !ok {
CandidateArrays[sessionKey] = make(chan string, 100)
}
CandidatesMutex.Unlock()
candidateChannel := globalConnectionManager.GetOrCreateCandidateChannel(sessionKey)
// Set variables
hubKey := handshake.HubKey
@@ -178,81 +261,128 @@ func InitializeWebRTCConnection(configuration *models.Configuration, communicati
if err == nil && peerConnection != nil {
var videoSender *pionWebRTC.RTPSender = nil
if videoSender, err = peerConnection.AddTrack(videoTrack); err != nil {
log.Log.Error("webrtc.main.InitializeWebRTCConnection(): something went wrong while adding video track: " + err.Error())
// Create context for this connection
ctx, cancel := context.WithCancel(context.Background())
wrapper := &peerConnectionWrapper{
conn: peerConnection,
cancelCtx: cancel,
done: make(chan struct{}),
}
var videoSender *pionWebRTC.RTPSender = nil
if videoTrack != nil {
if videoSender, err = peerConnection.AddTrack(videoTrack); err != nil {
log.Log.Error("webrtc.main.InitializeWebRTCConnection(): error adding video track: " + err.Error())
cancel()
return
}
} else {
log.Log.Info("webrtc.main.InitializeWebRTCConnection(): video track is nil, skipping video")
}
// Read incoming RTCP packets
// Before these packets are returned they are processed by interceptors. For things
// like NACK this needs to be called.
go func() {
rtcpBuf := make([]byte, 1500)
for {
if _, _, rtcpErr := videoSender.Read(rtcpBuf); rtcpErr != nil {
return
if videoSender != nil {
go func() {
defer func() {
log.Log.Info("webrtc.main.InitializeWebRTCConnection(): video RTCP reader stopped")
}()
rtcpBuf := make([]byte, rtcpBufferSize)
for {
select {
case <-ctx.Done():
return
default:
if _, _, rtcpErr := videoSender.Read(rtcpBuf); rtcpErr != nil {
return
}
}
}
}
}()
}()
}
var audioSender *pionWebRTC.RTPSender = nil
if audioSender, err = peerConnection.AddTrack(audioTrack); err != nil {
log.Log.Error("webrtc.main.InitializeWebRTCConnection(): something went wrong while adding audio track: " + err.Error())
} // Read incoming RTCP packets
if audioTrack != nil {
if audioSender, err = peerConnection.AddTrack(audioTrack); err != nil {
log.Log.Error("webrtc.main.InitializeWebRTCConnection(): error adding audio track: " + err.Error())
cancel()
return
}
} else {
log.Log.Info("webrtc.main.InitializeWebRTCConnection(): audio track is nil, skipping audio")
}
// Read incoming RTCP packets
// Before these packets are returned they are processed by interceptors. For things
// like NACK this needs to be called.
go func() {
rtcpBuf := make([]byte, 1500)
for {
if _, _, rtcpErr := audioSender.Read(rtcpBuf); rtcpErr != nil {
return
if audioSender != nil {
go func() {
defer func() {
log.Log.Info("webrtc.main.InitializeWebRTCConnection(): audio RTCP reader stopped")
}()
rtcpBuf := make([]byte, rtcpBufferSize)
for {
select {
case <-ctx.Done():
return
default:
if _, _, rtcpErr := audioSender.Read(rtcpBuf); rtcpErr != nil {
return
}
}
}
}
}()
}()
}
peerConnection.OnConnectionStateChange(func(connectionState pionWebRTC.PeerConnectionState) {
if connectionState == pionWebRTC.PeerConnectionStateDisconnected || connectionState == pionWebRTC.PeerConnectionStateClosed {
// Set lock
CandidatesMutex.Lock()
atomic.AddInt64(&peerConnectionCount, -1)
_, ok := CandidateArrays[sessionKey]
if ok {
close(CandidateArrays[sessionKey])
delete(CandidateArrays, sessionKey)
}
// Not really needed.
//senders := peerConnection.GetSenders()
//for _, sender := range senders {
// if err := peerConnection.RemoveTrack(sender); err != nil {
// log.Log.Error("webrtc.main.InitializeWebRTCConnection(): something went wrong while removing track: " + err.Error())
// }
//}
if err := peerConnection.Close(); err != nil {
log.Log.Error("webrtc.main.InitializeWebRTCConnection(): something went wrong while closing peer connection: " + err.Error())
}
peerConnections[handshake.SessionID] = nil
delete(peerConnections, handshake.SessionID)
CandidatesMutex.Unlock()
} else if connectionState == pionWebRTC.PeerConnectionStateConnected {
CandidatesMutex.Lock()
atomic.AddInt64(&peerConnectionCount, 1)
CandidatesMutex.Unlock()
} else if connectionState == pionWebRTC.PeerConnectionStateFailed {
log.Log.Info("webrtc.main.InitializeWebRTCConnection(): ICEConnectionStateFailed")
}
log.Log.Info("webrtc.main.InitializeWebRTCConnection(): connection state changed to: " + connectionState.String())
log.Log.Info("webrtc.main.InitializeWebRTCConnection(): Number of peers connected (" + strconv.FormatInt(peerConnectionCount, 10) + ")")
switch connectionState {
case pionWebRTC.PeerConnectionStateDisconnected, pionWebRTC.PeerConnectionStateClosed:
wrapper.closeOnce.Do(func() {
count := globalConnectionManager.DecrementPeerCount()
log.Log.Info("webrtc.main.InitializeWebRTCConnection(): Peer disconnected. Active peers: " + strconv.FormatInt(count, 10))
// Clean up resources
globalConnectionManager.CloseCandidateChannel(sessionKey)
if err := peerConnection.Close(); err != nil {
log.Log.Error("webrtc.main.InitializeWebRTCConnection(): error closing peer connection: " + err.Error())
}
globalConnectionManager.RemovePeerConnection(handshake.SessionID)
close(wrapper.done)
})
case pionWebRTC.PeerConnectionStateConnected:
count := globalConnectionManager.IncrementPeerCount()
log.Log.Info("webrtc.main.InitializeWebRTCConnection(): Peer connected. Active peers: " + strconv.FormatInt(count, 10))
case pionWebRTC.PeerConnectionStateFailed:
log.Log.Info("webrtc.main.InitializeWebRTCConnection(): ICE connection failed")
}
})
go func() {
defer func() {
log.Log.Info("webrtc.main.InitializeWebRTCConnection(): candidate processor stopped for session: " + handshake.SessionID)
}()
// Iterate over the candidates and send them to the remote client
// Non-blocking channel
for candidate := range CandidateArrays[sessionKey] {
CandidatesMutex.Lock()
log.Log.Info(">>>> webrtc.main.InitializeWebRTCConnection(): Received candidate from channel: " + candidate)
if candidateErr := peerConnection.AddICECandidate(pionWebRTC.ICECandidateInit{Candidate: string(candidate)}); candidateErr != nil {
log.Log.Error("webrtc.main.InitializeWebRTCConnection(): something went wrong while adding candidate: " + candidateErr.Error())
for {
select {
case <-ctx.Done():
return
case candidate, ok := <-candidateChannel:
if !ok {
return
}
log.Log.Info("webrtc.main.InitializeWebRTCConnection(): Received candidate from channel: " + candidate)
if candidateErr := peerConnection.AddICECandidate(pionWebRTC.ICECandidateInit{Candidate: candidate}); candidateErr != nil {
log.Log.Error("webrtc.main.InitializeWebRTCConnection(): error adding candidate: " + candidateErr.Error())
}
}
CandidatesMutex.Unlock()
}
}()
@@ -270,21 +400,56 @@ func InitializeWebRTCConnection(configuration *models.Configuration, communicati
// When an ICE candidate is available send to the other peer using the signaling server (MQTT).
// The other peer will add this candidate by calling AddICECandidate
var hasRelayCandidates bool
peerConnection.OnICECandidate(func(candidate *pionWebRTC.ICECandidate) {
if candidate == nil {
log.Log.Info("webrtc.main.InitializeWebRTCConnection(): ICE gathering complete (candidate is nil)")
if !hasRelayCandidates {
log.Log.Error("webrtc.main.InitializeWebRTCConnection(): WARNING - No TURN (relay) candidates were gathered! TURN servers: " +
config.TURNURI + ", Username: " + config.TURNUsername + ", ForceTurn: " + config.ForceTurn)
}
return
}
// Log candidate details for debugging
candidateJSON := candidate.ToJSON()
candidateStr := candidateJSON.Candidate
// Determine candidate type from the candidate string
candidateType := "unknown"
if candidateJSON.Candidate != "" {
switch candidate.Typ {
case pionWebRTC.ICECandidateTypeRelay:
candidateType = "relay"
case pionWebRTC.ICECandidateTypeSrflx:
candidateType = "srflx"
case pionWebRTC.ICECandidateTypeHost:
candidateType = "host"
case pionWebRTC.ICECandidateTypePrflx:
candidateType = "prflx"
}
}
// Track if we received any relay (TURN) candidates
if candidateType == "relay" {
hasRelayCandidates = true
}
log.Log.Info("webrtc.main.InitializeWebRTCConnection(): ICE candidate received - Type: " + candidateType +
", Candidate: " + candidateStr)
// Create a config map
valueMap := make(map[string]interface{})
candateJSON := candidate.ToJSON()
candateBinary, err := json.Marshal(candateJSON)
candidateBinary, err := json.Marshal(candidateJSON)
if err == nil {
valueMap["candidate"] = string(candidateBinary)
valueMap["sdp"] = []byte(base64.StdEncoding.EncodeToString([]byte(answer.SDP)))
// SDP does not need to be sent.
//valueMap["sdp"] = []byte(base64.StdEncoding.EncodeToString([]byte(answer.SDP)))
valueMap["session_id"] = handshake.SessionID
log.Log.Info("webrtc.main.InitializeWebRTCConnection(): sending " + candidateType + " candidate to hub")
} else {
log.Log.Info("webrtc.main.InitializeWebRTCConnection(): something went wrong while marshalling candidate: " + err.Error())
log.Log.Error("webrtc.main.InitializeWebRTCConnection(): failed to marshal candidate: " + err.Error())
}
// We'll send the candidate to the hub
@@ -304,8 +469,8 @@ func InitializeWebRTCConnection(configuration *models.Configuration, communicati
}
})
// Create a channel which will be used to send candidates to the other peer
peerConnections[handshake.SessionID] = peerConnection
// Store peer connection in manager
globalConnectionManager.AddPeerConnection(handshake.SessionID, wrapper)
if err == nil {
// Create a config map
@@ -338,7 +503,11 @@ func InitializeWebRTCConnection(configuration *models.Configuration, communicati
func NewVideoTrack(streams []packets.Stream) *pionWebRTC.TrackLocalStaticSample {
mimeType := pionWebRTC.MimeTypeH264
outboundVideoTrack, _ := pionWebRTC.NewTrackLocalStaticSample(pionWebRTC.RTPCodecCapability{MimeType: mimeType}, "video", "pion124")
outboundVideoTrack, err := pionWebRTC.NewTrackLocalStaticSample(pionWebRTC.RTPCodecCapability{MimeType: mimeType}, "video", trackStreamID)
if err != nil {
log.Log.Error("webrtc.main.NewVideoTrack(): error creating video track: " + err.Error())
return nil
}
return outboundVideoTrack
}
@@ -353,155 +522,245 @@ func NewAudioTrack(streams []packets.Stream) *pionWebRTC.TrackLocalStaticSample
mimeType = pionWebRTC.MimeTypePCMA
}
}
outboundAudioTrack, _ := pionWebRTC.NewTrackLocalStaticSample(pionWebRTC.RTPCodecCapability{MimeType: mimeType}, "audio", "pion124")
if mimeType == "" {
log.Log.Error("webrtc.main.NewAudioTrack(): no supported audio codec found")
return nil
}
outboundAudioTrack, err := pionWebRTC.NewTrackLocalStaticSample(pionWebRTC.RTPCodecCapability{MimeType: mimeType}, "audio", trackStreamID)
if err != nil {
log.Log.Error("webrtc.main.NewAudioTrack(): error creating audio track: " + err.Error())
return nil
}
return outboundAudioTrack
}
// streamState holds state information for the streaming process
type streamState struct {
lastKeepAlive int64
peerCount int64
start bool
receivedKeyFrame bool
lastAudioSample *pionMedia.Sample
lastVideoSample *pionMedia.Sample
}
// codecSupport tracks which codecs are available in the stream
type codecSupport struct {
hasH264 bool
hasPCM_MULAW bool
hasAAC bool
hasOpus bool
}
// detectCodecs examines the stream to determine which codecs are available
func detectCodecs(rtspClient capture.RTSPClient) codecSupport {
support := codecSupport{}
streams, _ := rtspClient.GetStreams()
for _, stream := range streams {
switch stream.Name {
case "H264":
support.hasH264 = true
case "PCM_MULAW":
support.hasPCM_MULAW = true
case "AAC":
support.hasAAC = true
case "OPUS":
support.hasOpus = true
}
}
return support
}
// hasValidCodecs checks if at least one valid video or audio codec is present
func (cs codecSupport) hasValidCodecs() bool {
hasVideo := cs.hasH264
hasAudio := cs.hasPCM_MULAW || cs.hasAAC || cs.hasOpus
return hasVideo || hasAudio
}
// shouldContinueStreaming determines if streaming should continue based on keepalive and peer count
func shouldContinueStreaming(config models.Config, state *streamState) bool {
if config.Capture.ForwardWebRTC != "true" {
return true
}
now := time.Now().Unix()
hasTimedOut := (now - state.lastKeepAlive) > int64(keepAliveTimeout.Seconds())
hasNoPeers := state.peerCount == 0
return !hasTimedOut && !hasNoPeers
}
// updateStreamState updates keepalive and peer count from communication channels
func updateStreamState(communication *models.Communication, state *streamState) {
select {
case keepAliveStr := <-communication.HandleLiveHDKeepalive:
if val, err := strconv.ParseInt(keepAliveStr, 10, 64); err == nil {
state.lastKeepAlive = val
}
default:
}
select {
case peerCountStr := <-communication.HandleLiveHDPeers:
if val, err := strconv.ParseInt(peerCountStr, 10, 64); err == nil {
state.peerCount = val
}
default:
}
}
// writeFinalSamples writes any remaining buffered samples
func writeFinalSamples(state *streamState, videoTrack, audioTrack *pionWebRTC.TrackLocalStaticSample) {
if state.lastVideoSample != nil && videoTrack != nil {
if err := videoTrack.WriteSample(*state.lastVideoSample); err != nil && err != io.ErrClosedPipe {
log.Log.Error("webrtc.main.writeFinalSamples(): error writing final video sample: " + err.Error())
}
}
if state.lastAudioSample != nil && audioTrack != nil {
if err := audioTrack.WriteSample(*state.lastAudioSample); err != nil && err != io.ErrClosedPipe {
log.Log.Error("webrtc.main.writeFinalSamples(): error writing final audio sample: " + err.Error())
}
}
}
// processVideoPacket processes a video packet and writes samples to the track
func processVideoPacket(pkt packets.Packet, state *streamState, videoTrack *pionWebRTC.TrackLocalStaticSample, config models.Config) {
if videoTrack == nil {
return
}
// Start at the first keyframe
if pkt.IsKeyFrame {
state.start = true
}
if !state.start {
return
}
sample := pionMedia.Sample{Data: pkt.Data, PacketTimestamp: uint32(pkt.Time)}
if config.Capture.ForwardWebRTC == "true" {
// Remote forwarding not yet implemented
log.Log.Debug("webrtc.main.processVideoPacket(): remote forwarding not implemented")
return
}
if state.lastVideoSample != nil {
duration := sample.PacketTimestamp - state.lastVideoSample.PacketTimestamp
state.lastVideoSample.Duration = time.Duration(duration) * time.Millisecond
if err := videoTrack.WriteSample(*state.lastVideoSample); err != nil && err != io.ErrClosedPipe {
log.Log.Error("webrtc.main.processVideoPacket(): error writing video sample: " + err.Error())
}
}
state.lastVideoSample = &sample
}
// processAudioPacket processes an audio packet and writes samples to the track
func processAudioPacket(pkt packets.Packet, state *streamState, audioTrack *pionWebRTC.TrackLocalStaticSample, hasAAC bool) {
if audioTrack == nil {
return
}
if hasAAC {
// AAC transcoding not yet implemented
// TODO: Implement AAC to PCM_MULAW transcoding
return
}
sample := pionMedia.Sample{Data: pkt.Data, PacketTimestamp: uint32(pkt.Time)}
if state.lastAudioSample != nil {
duration := sample.PacketTimestamp - state.lastAudioSample.PacketTimestamp
state.lastAudioSample.Duration = time.Duration(duration) * time.Millisecond
if err := audioTrack.WriteSample(*state.lastAudioSample); err != nil && err != io.ErrClosedPipe {
log.Log.Error("webrtc.main.processAudioPacket(): error writing audio sample: " + err.Error())
}
}
state.lastAudioSample = &sample
}
func WriteToTrack(livestreamCursor *packets.QueueCursor, configuration *models.Configuration, communication *models.Communication, mqttClient mqtt.Client, videoTrack *pionWebRTC.TrackLocalStaticSample, audioTrack *pionWebRTC.TrackLocalStaticSample, rtspClient capture.RTSPClient) {
config := configuration.Config
// Make peerconnection map
peerConnections = make(map[string]*pionWebRTC.PeerConnection)
// Set the indexes for the video & audio streams
// Later when we read a packet we need to figure out which track to send it to.
hasH264 := false
hasPCM_MULAW := false
hasAAC := false
hasOpus := false
streams, _ := rtspClient.GetStreams()
for _, stream := range streams {
if stream.Name == "H264" {
hasH264 = true
} else if stream.Name == "PCM_MULAW" {
hasPCM_MULAW = true
} else if stream.Name == "AAC" {
hasAAC = true
} else if stream.Name == "OPUS" {
hasOpus = true
}
// Check if at least one track is available
if videoTrack == nil && audioTrack == nil {
log.Log.Error("webrtc.main.WriteToTrack(): both video and audio tracks are nil, cannot proceed")
return
}
if !hasH264 && !hasPCM_MULAW && !hasAAC && !hasOpus {
log.Log.Error("webrtc.main.WriteToTrack(): no valid video codec and audio codec found.")
} else {
if config.Capture.TranscodingWebRTC == "true" {
// Todo..
} else {
//log.Log.Info("webrtc.main.WriteToTrack(): not using a transcoder.")
}
// Detect available codecs
codecs := detectCodecs(rtspClient)
var cursorError error
var pkt packets.Packet
var previousTimeVideo int64
var previousTimeAudio int64
start := false
receivedKeyFrame := false
lastKeepAlive := "0"
peerCount := "0"
for cursorError == nil {
pkt, cursorError = livestreamCursor.ReadPacket()
//if config.Capture.ForwardWebRTC != "true" && peerConnectionCount == 0 {
// start = false
// receivedKeyFrame = false
// continue
//}
select {
case lastKeepAlive = <-communication.HandleLiveHDKeepalive:
default:
}
select {
case peerCount = <-communication.HandleLiveHDPeers:
default:
}
now := time.Now().Unix()
lastKeepAliveN, _ := strconv.ParseInt(lastKeepAlive, 10, 64)
hasTimedOut := (now - lastKeepAliveN) > 15 // if longer then no response in 15 sec.
hasNoPeers := peerCount == "0"
if config.Capture.ForwardWebRTC == "true" && (hasTimedOut || hasNoPeers) {
start = false
receivedKeyFrame = false
continue
}
if len(pkt.Data) == 0 || pkt.Data == nil {
receivedKeyFrame = false
continue
}
if !receivedKeyFrame {
if pkt.IsKeyFrame {
receivedKeyFrame = true
} else {
continue
}
}
//if config.Capture.TranscodingWebRTC == "true" {
// We will transcode the video
// TODO..
//}
if pkt.IsVideo {
// Calculate the difference
bufferDuration := pkt.Time - previousTimeVideo
previousTimeVideo = pkt.Time
// Start at the first keyframe
if pkt.IsKeyFrame {
start = true
}
if start {
bufferDurationCasted := time.Duration(bufferDuration) * time.Millisecond
sample := pionMedia.Sample{Data: pkt.Data, Duration: bufferDurationCasted, PacketTimestamp: uint32(pkt.Time)}
//sample = pionMedia.Sample{Data: pkt.Data, Duration: time.Second}
if config.Capture.ForwardWebRTC == "true" {
// We will send the video to a remote peer
// TODO..
} else {
if err := videoTrack.WriteSample(sample); err != nil && err != io.ErrClosedPipe {
log.Log.Error("webrtc.main.WriteToTrack(): something went wrong while writing sample: " + err.Error())
}
}
}
} else if pkt.IsAudio {
// @TODO: We need to check if the audio is PCM_MULAW or AAC
// If AAC we need to transcode it to PCM_MULAW
// If PCM_MULAW we can send it directly.
if hasAAC {
// We will transcode the audio
// TODO..
//d := fdkaac.NewAacDecoder()
continue
}
// Calculate the difference
bufferDuration := pkt.Time - previousTimeAudio
previousTimeAudio = pkt.Time
// We will send the audio
bufferDurationCasted := time.Duration(bufferDuration) * time.Millisecond
sample := pionMedia.Sample{Data: pkt.Data, Duration: bufferDurationCasted, PacketTimestamp: uint32(pkt.Time)}
//sample = pionMedia.Sample{Data: pkt.Data, Duration: time.Second}
if err := audioTrack.WriteSample(sample); err != nil && err != io.ErrClosedPipe {
log.Log.Error("webrtc.main.WriteToTrack(): something went wrong while writing sample: " + err.Error())
}
}
}
if !codecs.hasValidCodecs() {
log.Log.Error("webrtc.main.WriteToTrack(): no valid video or audio codec found")
return
}
peerConnectionCount = 0
log.Log.Info("webrtc.main.WriteToTrack(): stop writing to track.")
if config.Capture.TranscodingWebRTC == "true" {
log.Log.Info("webrtc.main.WriteToTrack(): transcoding enabled but not yet implemented")
}
// Initialize streaming state
state := &streamState{
lastKeepAlive: time.Now().Unix(),
peerCount: 0,
}
defer func() {
writeFinalSamples(state, videoTrack, audioTrack)
log.Log.Info("webrtc.main.WriteToTrack(): stopped writing to track")
}()
var pkt packets.Packet
var cursorError error
for cursorError == nil {
pkt, cursorError = livestreamCursor.ReadPacket()
if cursorError != nil {
break
}
// Update state from communication channels
updateStreamState(communication, state)
// Check if we should continue streaming
if !shouldContinueStreaming(config, state) {
state.start = false
state.receivedKeyFrame = false
continue
}
// Skip empty packets
if len(pkt.Data) == 0 {
state.receivedKeyFrame = false
continue
}
// Wait for first keyframe before processing
if !state.receivedKeyFrame {
if pkt.IsKeyFrame {
state.receivedKeyFrame = true
} else {
continue
}
}
// Process video or audio packets
if pkt.IsVideo {
processVideoPacket(pkt, state, videoTrack, config)
} else if pkt.IsAudio {
processAudioPacket(pkt, state, audioTrack, codecs.hasAAC)
}
}
}

View File

@@ -9,7 +9,7 @@
},
"navigation": {
"profile": "Profile",
"admin": "admin",
"admin": "Admin",
"management": "Management",
"dashboard": "Dashboard",
"recordings": "Recordings",
@@ -32,11 +32,11 @@
"latest_events": "Latest events",
"configure_connection": "Configure connection",
"no_events": "No events",
"no_events_description": "No recordings where found, make sure your Kerberos Agent is properly configured.",
"no_events_description": "No recordings were found, make sure your Agent is properly configured.",
"motion_detected": "Motion was detected",
"live_view": "Live view",
"loading_live_view": "Loading live view",
"loading_live_view_description": "Hold on we are loading your live view here. If you didn't configure your camera connection, update it on the settings pages.",
"loading_live_view_description": "Hold on, we are loading your live view here. If you didn't configure your camera connection, update it on the settings pages.",
"time": "Time",
"description": "Description",
"name": "Name"
@@ -59,32 +59,32 @@
"persistence": "Persistence"
},
"info": {
"kerberos_hub_demo": "Have a look at our Kerberos Hub demo environment, to see Kerberos Hub in action!",
"configuration_updated_success": "Your configuration have been updated successfully.",
"kerberos_hub_demo": "Have a look at our Hub demo environment, to see Hub in action!",
"configuration_updated_success": "Your configuration has been updated successfully.",
"configuration_updated_error": "Something went wrong while saving.",
"verify_hub": "Verifying your Kerberos Hub settings.",
"verify_hub_success": "Kerberos Hub settings are successfully verified.",
"verify_hub_error": "Something went wrong while verifying Kerberos Hub",
"verify_hub": "Verifying your Hub settings.",
"verify_hub_success": "Hub settings are successfully verified.",
"verify_hub_error": "Something went wrong while verifying Hub.",
"verify_persistence": "Verifying your persistence settings.",
"verify_persistence_success": "Persistence settings are successfully verified.",
"verify_persistence_error": "Something went wrong while verifying the persistence",
"verify_persistence_error": "Something went wrong while verifying the persistence.",
"verify_camera": "Verifying your camera settings.",
"verify_camera_success": "Camera settings are successfully verified.",
"verify_camera_error": "Something went wrong while verifying the camera settings",
"verify_camera_error": "Something went wrong while verifying the camera settings.",
"verify_onvif": "Verifying your ONVIF settings.",
"verify_onvif_success": "ONVIF settings are successfully verified.",
"verify_onvif_error": "Something went wrong while verifying the ONVIF settings"
"verify_onvif_error": "Something went wrong while verifying the ONVIF settings."
},
"overview": {
"general": "General",
"description_general": "General settings for your Kerberos Agent",
"description_general": "General settings for your Agent",
"key": "Key",
"camera_name": "Camera name",
"camera_friendly_name": "Friendly name",
"timezone": "Timezone",
"select_timezone": "Select a timezone",
"advanced_configuration": "Advanced configuration",
"description_advanced_configuration": "Detailed configuration options to enable or disable specific parts of the Kerberos Agent",
"description_advanced_configuration": "Detailed configuration options to enable or disable specific parts of the Agent",
"offline_mode": "Offline mode",
"description_offline_mode": "Disable all outgoing traffic",
"encryption": "Encryption",
@@ -101,9 +101,9 @@
"camera": "Camera",
"description_camera": "Camera settings are required to make a connection to your camera of choice.",
"only_h264": "Currently only H264/H265 RTSP streams are supported.",
"rtsp_url": "RTSP url",
"rtsp_url": "RTSP URL",
"rtsp_h264": "A H264/H265 RTSP connection to your camera.",
"sub_rtsp_url": "Sub RTSP url (used for livestreaming)",
"sub_rtsp_url": "Sub RTSP URL (used for livestreaming)",
"sub_rtsp_h264": "A secondary RTSP connection to the low resolution of your camera.",
"onvif": "ONVIF",
"description_onvif": "Credentials to communicate with ONVIF capabilities. These are used for PTZ or other capabilities provided by the camera.",
@@ -115,28 +115,28 @@
},
"recording": {
"recording": "Recording",
"description_recording": "Specify how you would like to make recordings. Having a continuous 24/7 setup or a motion based recording.",
"description_recording": "Specify how you would like to make recordings. Having a continuous 24/7 setup or a motion-based recording.",
"continuous_recording": "Continuous recording",
"description_continuous_recording": "Make 24/7 or motion based recordings.",
"max_duration": "max video duration (seconds)",
"description_continuous_recording": "Make 24/7 or motion-based recordings.",
"max_duration": "Max video duration (seconds)",
"description_max_duration": "The maximum duration of a recording.",
"pre_recording": "pre recording (key frames buffered)",
"pre_recording": "Pre recording (key frames buffered)",
"description_pre_recording": "Seconds before an event occurred.",
"post_recording": "post recording (seconds)",
"post_recording": "Post recording (seconds)",
"description_post_recording": "Seconds after an event occurred.",
"threshold": "Recording threshold (pixels)",
"description_threshold": "The number of pixels changed to record",
"description_threshold": "The number of pixels changed to record.",
"autoclean": "Auto clean",
"description_autoclean": "Specify if the Kerberos Agent can cleanup recordings when a specific storage capacity (MB) is reached. This will remove the oldest recordings when the capacity is reached.",
"description_autoclean": "Specify if the Agent can clean up recordings when a specific storage capacity (MB) is reached. This will remove the oldest recordings when the capacity is reached.",
"autoclean_enable": "Enable auto clean",
"autoclean_description_enable": "Remove oldest recording when capacity reached.",
"autoclean_max_directory_size": "Maximum directory size (MB)",
"autoclean_description_max_directory_size": "The maximum MB's of recordings stored.",
"autoclean_description_max_directory_size": "The maximum MBs of recordings stored.",
"fragmentedrecordings": "Fragmented recordings",
"description_fragmentedrecordings": "When recordings are fragmented they are suitable for an HLS stream. When turned on the MP4 container will look a bit different.",
"description_fragmentedrecordings": "When recordings are fragmented they are suitable for an HLS stream. When turned on, the MP4 container will look a bit different.",
"fragmentedrecordings_enable": "Enable fragmentation",
"fragmentedrecordings_description_enable": "Fragmented recordings are required for HLS.",
"fragmentedrecordings_duration": "fragment duration",
"fragmentedrecordings_duration": "Fragment duration",
"fragmentedrecordings_description_duration": "Duration of a single fragment."
},
"streaming": {
@@ -149,16 +149,16 @@
"force_turn": "Force TURN",
"force_turn_description": "Force TURN usage, even when STUN is available.",
"stun_turn_forward": "Forwarding and transcoding",
"stun_turn_description_forward": "Optimisations and enhancements for TURN/STUN communication.",
"stun_turn_description_forward": "Optimizations and enhancements for TURN/STUN communication.",
"stun_turn_webrtc": "Forwarding to WebRTC broker",
"stun_turn_description_webrtc": "Forward h264 stream through MQTT",
"stun_turn_description_webrtc": "Forward H264 stream through MQTT",
"stun_turn_transcode": "Transcode stream",
"stun_turn_description_transcode": "Convert stream to a lower resolution",
"stun_turn_downscale": "Downscale resolution (in % of original resolution)",
"mqtt": "MQTT",
"description_mqtt": "A MQTT broker is used to communicate from",
"description2_mqtt": "to the Kerberos Agent, to achieve for example livestreaming or ONVIF (PTZ) capabilities.",
"mqtt_brokeruri": "Broker Uri",
"description_mqtt": "An MQTT broker is used to communicate from",
"description2_mqtt": "to the Agent, to achieve, for example, livestreaming or ONVIF (PTZ) capabilities.",
"mqtt_brokeruri": "Broker URI",
"mqtt_username": "Username",
"mqtt_password": "Password",
"realtimeprocessing": "Realtime Processing",
@@ -180,57 +180,61 @@
"friday": "Friday",
"saturday": "Saturday",
"externalcondition": "External Condition",
"description_externalcondition": "Depending on an external webservice recording can be enabled or disabled.",
"description_externalcondition": "Depending on an external web service, recording can be enabled or disabled.",
"regionofinterest": "Region Of Interest",
"description_regionofinterest": "By defining one or more regions, motion will be tracked only in the regions you have defined."
},
"persistence": {
"kerberoshub": "Kerberos Hub",
"description_kerberoshub": "Kerberos Agents can send heartbeats to a central",
"description2_kerberoshub": "installation. Heartbeats and other relevant information are synced to Kerberos Hub to show realtime information about your video landscape.",
"kerberoshub": "Hub",
"description_kerberoshub": "Agents can send heartbeats to a central",
"description2_kerberoshub": "installation. Heartbeats and other relevant information are synced to Hub to show realtime information about your video landscape.",
"persistence": "Persistence",
"secondary_persistence": "Secondary Persistence",
"description_secondary_persistence": "Recordings will be sent to secondary persistence if the primary persistence is unavailable or fails. This can be useful for failover purposes.",
"saasoffering": "Kerberos Hub (SAAS offering)",
"saasoffering": "Hub (SaaS offering)",
"description_persistence": "Having the ability to store your recordings is the beginning of everything. You can choose between our",
"description2_persistence": ", or a 3rd party provider",
"select_persistence": "Select a persistence",
"kerberoshub_encryption": "Encryption",
"kerberoshub_encryption_description": "All traffic from/to Kerberos Hub will encrypted using AES-256.",
"kerberoshub_proxyurl": "Kerberos Hub Proxy URL",
"kerberoshub_encryption_description": "All traffic from/to Hub will be encrypted using AES-256.",
"kerberoshub_proxyurl": "Hub Proxy URL",
"kerberoshub_description_proxyurl": "The Proxy endpoint for uploading your recordings.",
"kerberoshub_apiurl": "Kerberos Hub API URL",
"kerberoshub_apiurl": "Hub API URL",
"kerberoshub_description_apiurl": "The API endpoint for uploading your recordings.",
"kerberoshub_publickey": "Public key",
"kerberoshub_description_publickey": "The public key granted to your Kerberos Hub account.",
"kerberoshub_description_publickey": "The public key granted to your Hub account.",
"kerberoshub_privatekey": "Private key",
"kerberoshub_description_privatekey": "The private key granted to your Kerberos Hub account.",
"kerberoshub_description_privatekey": "The private key granted to your Hub account.",
"kerberoshub_site": "Site",
"kerberoshub_description_site": "The site ID the Kerberos Agents are belonging to in Kerberos Hub.",
"kerberoshub_description_site": "The site ID the Agents belong to in Hub.",
"kerberoshub_region": "Region",
"kerberoshub_description_region": "The region we are storing our recordings in.",
"kerberoshub_bucket": "Bucket",
"kerberoshub_description_bucket": "The bucket we are storing our recordings in.",
"kerberoshub_username": "Username/Directory (should match Kerberos Hub username)",
"kerberoshub_description_username": "The username of your Kerberos Hub account.",
"kerberosvault_apiurl": "Kerberos Vault API URL",
"kerberosvault_description_apiurl": "The Kerberos Vault API",
"kerberoshub_username": "Username/Directory (should match Hub username)",
"kerberoshub_description_username": "The username of your Hub account.",
"kerberosvault_apiurl": "Vault API URL",
"kerberosvault_description_apiurl": "The Vault API",
"kerberosvault_provider": "Provider",
"kerberosvault_description_provider": "The provider to which your recordings will be send.",
"kerberosvault_directory": "Directory (should match Kerberos Hub username)",
"kerberosvault_description_directory": "Sub directory the recordings will be stored in your provider.",
"kerberosvault_description_provider": "The provider to which your recordings will be sent.",
"kerberosvault_directory": "Directory (should match Hub username)",
"kerberosvault_description_directory": "Subdirectory the recordings will be stored in your provider.",
"kerberosvault_accesskey": "Access key",
"kerberosvault_description_accesskey": "The access key of your Kerberos Vault account.",
"kerberosvault_description_accesskey": "The access key of your Vault account.",
"kerberosvault_secretkey": "Secret key",
"kerberosvault_description_secretkey": "The secret key of your Kerberos Vault account.",
"kerberosvault_description_secretkey": "The secret key of your Vault account.",
"kerberosvault_maxretries": "Max retries",
"kerberosvault_description_maxretries": "The maximum number of retries to upload a recording.",
"kerberosvault_timeout": "Timeout",
"kerberosvault_description_timeout": "If a timeout occurs, recordings will be sent directly to the secondary Vault.",
"dropbox_directory": "Directory",
"dropbox_description_directory": "The sub directory where the recordings will be stored in your Dropbox account.",
"dropbox_description_directory": "The subdirectory where the recordings will be stored in your Dropbox account.",
"dropbox_accesstoken": "Access token",
"dropbox_description_accesstoken": "The access token of your Dropbox account/app.",
"verify_connection": "Verify Connection",
"remove_after_upload": "Once recordings are uploaded to some persistence, you might want to remove them from the local Kerberos Agent.",
"remove_after_upload": "Once recordings are uploaded to some persistence, you might want to remove them from the local Agent.",
"remove_after_upload_description": "Remove recordings after they are uploaded successfully.",
"remove_after_upload_enabled": "Enabled delete on upload"
"remove_after_upload_enabled": "Enable delete on upload"
}
}
}
}


@@ -2536,6 +2536,43 @@ class Settings extends React.Component {
)
}
/>
+ <Input
+   noPadding
+   label={t(
+     'settings.persistence.kerberosvault_maxretries'
+   )}
+   placeholder={t(
+     'settings.persistence.kerberosvault_description_maxretries'
+   )}
+   value={
+     config.kstorage ? config.kstorage.max_retries : ''
+   }
+   onChange={(value) =>
+     this.onUpdateField(
+       'kstorage',
+       'max_retries',
+       value,
+       config.kstorage
+     )
+   }
+ />
+ <Input
+   noPadding
+   label={t('settings.persistence.kerberosvault_timeout')}
+   placeholder={t(
+     'settings.persistence.kerberosvault_description_timeout'
+   )}
+   value={config.kstorage ? config.kstorage.timeout : ''}
+   onChange={(value) =>
+     this.onUpdateField(
+       'kstorage',
+       'timeout',
+       value,
+       config.kstorage
+     )
+   }
+ />
</>
)}
{config.cloud === this.DROPBOX && (


@@ -1715,10 +1715,10 @@
"@jridgewell/resolve-uri" "^3.0.3"
"@jridgewell/sourcemap-codec" "^1.4.10"
"@kerberos-io/ui@^1.71.0":
version "1.71.0"
resolved "https://registry.yarnpkg.com/@kerberos-io/ui/-/ui-1.71.0.tgz#06914c94e8b0982068d2099acf8158917a511bfc"
integrity sha512-pHCTn/iQTcQEPoCK82eJHGRn6BgzW3wgV4C+mNqdKOtLTquxL+vh7molEgC66tl3DGf7HyjSNa8LuoxYbt9TEg==
"@kerberos-io/ui@^1.76.0":
version "1.77.0"
resolved "https://registry.yarnpkg.com/@kerberos-io/ui/-/ui-1.77.0.tgz#b748b2a9abf793ff2a9ba64ee41f84debc0ca9dc"
integrity sha512-CHh4jeLKwrYvJRL5PM3UEN4p2k1fqwMKgSF2U6IR4v0fE2FwPc/2Ry4zGk6pvLDFHbDpR9jUkHX+iNphvStoyQ==
dependencies:
"@emotion/react" "^11.10.4"
"@emotion/styled" "^11.10.4"