The lsb_release command is not always present, even in Ubuntu, whereas
the /etc/lsb-release file is. That file isn't present in all distros,
though, e.g. Arch doesn't have it by default.
+ install the libdbus-1-dev dependency - not needed for the CI, where
it is already installed at this point, but the script can also be used
to set up the environment outside GitHub CI
The BMD API supports 64 channels starting with version 12.0, so enable
this when building against an eligible SDK.
UG currently ships 11.6, so it won't be enabled by default, but the
user can supply their own version.
+ replaced unneeded checks in display_decklink_reconfigure_audio
with an assert (the checks are not needed - this should not happen
because the format is negotiated with `get_property(AUDIO_FORMAT)`
first)
For now, this breaks things when using OPUS, e.g. on a DeckLink 4K Extreme:
```
uv -t testcard:mode=Hi59 -s embedded -A OPUS -d decklink:sync -r embedded
```
Poorly synchronized input, especially when there is timing jitter (as
with sdl_mixer), will stop playing properly, but handling that isn't
really the purpose of this mode, so it is perhaps not a good idea to
complicate things with error-prone workarounds.
When probing for the internal format, multiple reconfigure messages may
be emitted (because multiple frames might have been processed, as needed
for inter-frame compressions). Thus we need to check and store our
internal format instead of forcing a reconfigure (which isn't needed,
since the video_desc of the stream hasn't changed).
The display should not be reconfigured when not needed; otherwise it
gets reconfigured twice even when receiving a single compressed stream
(because reconf is run twice here - once as a probe and then for real).
Therefore the display description is stored in decoder->display_desc and
compared with the actual desc. But this doesn't work with a compressed
stream, because display_desc.color_spec is set to the network desc and
rewritten to the correct uncompressed color spec only after the if. So
e.g.:
- stored desc is "1920x1080 @50.00i, codec UYVY"; **but** compared to:
- actual "1920x1080 @50.00i, codec MJPEG"
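A reduced illustration of why the comparison always fails here (types and fields simplified, not the actual UltraGrid structures): the stored copy already holds the decompressed codec while the freshly built one still carries the network codec, so equality never holds even though the stream is unchanged.

```cpp
// Simplified stand-ins for the real video_desc/codec_t types.
enum codec_t { UYVY, MJPEG };

struct video_desc {
    int     width, height;
    double  fps;
    codec_t color_spec;
};

bool desc_equal(const video_desc &a, const video_desc &b) {
    return a.width == b.width && a.height == b.height &&
           a.fps == b.fps && a.color_spec == b.color_spec;
}
// stored {1920,1080,50.0,UYVY} vs actual {1920,1080,50.0,MJPEG}
// differ only in color_spec, yet compare unequal -> needless reconfigure.
```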
Keep our own copy of the original audio TS, handled correctly across
wraparounds, in order not to fight with the video over the shared
variable - and reconfigure only if there is a change.
modernized video display, audio playback and vo postprocess APIs
The APIs were already recently updated, so modernize them further by
using bool where the return value is semantically a boolean. Using
TRUE/FALSE is inherently ambiguous because it is not obvious from the
prototype whether success is 0 or TRUE (1).
Replace the [no-]low-latency option with a synchronized option - the
behavior of the no-low-latency mode has changed anyway, so as a benefit
the naming is more obvious.
When the timestamp difference is too large, do not sync at all and
schedule continuously. This behavior was already present, but the
difference wasn't allowed to exceed 2000.
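The new policy can be sketched as follows (function name and threshold handling are illustrative, not the actual implementation): instead of clamping the difference at the limit, an out-of-range difference disables syncing entirely.

```cpp
#include <cstdlib>

// Decide whether to sync to the timestamp or to schedule continuously.
bool should_sync(long ts_diff, long max_diff) {
    return labs(ts_diff) <= max_diff; // beyond the limit: no sync at all
}
```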
In scheduled mode, we need to restart the stream after configuring
audio. The previous implementation was rather error-prone: if the audio
was configured after the video, it didn't work. The new approach
simplifies the workflow.
This also fixes a potential crash when DisableVideoOutput in reconfigure
was called after deleting the delegate structure.
When there is a missing frame, seq is incremented, but the actual time
base may have diverged.
Similarly for a dropped frame - one frame has elapsed, but the time base
didn't reflect that, because the frame was dropped.
Set accurate timestamps for 1001-fraction videos, where audio frame
sizes swing between different numbers of samples (i.e. 1601 and 1602
for 59.94i/29.97p video), so that the audio timestamp isn't exactly
aligned with the video timestamp.
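The swinging frame sizes follow from the fractional rate; a minimal sketch (not the actual UltraGrid code) of deriving the exact per-frame sample count for 48 kHz audio against 30000/1001 fps video: 48000 * 1001 / 30000 = 1601.6 samples per frame on average, so exact accumulation alternates between 1601 and 1602 with no drift.

```cpp
// Samples contained in frame n, computed from exact integer positions:
// floor((n+1)*num/den) - floor(n*num/den).
long long audio_frame_samples(long long n) {
    const long long num = 48000LL * 1001; // audio samples per 1001 seconds
    const long long den = 30000;          // video frames per 1001 seconds
    return (n + 1) * num / den - n * num / den;
}
// Frames 0,1,2,... yield 1601, 1602, 1601, 1602, 1602, ... samples.
```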
video only; audio support will be re-added later
Rewritten scheduled playback - instead of pushing the frames and
guessing the playback time as before, store the frames internally and
let the completion callback pull them when needed.
Send the same RTP timestamps for audio and video frames captured at the
same time.
The RTP specification allows this (although it doesn't require it -
according to the spec, the RTP timestamps of different streams are to be
related through their RTCP NTP correspondence). We slightly modify that
in that we use a 90 kHz clock resolution for both audio and video, so if
the starting point (random) is the same, the RTP timestamps will be
exactly the same for related data.
Implemented for vidcap DeckLink and testcard.
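The mapping above can be sketched as follows (function name hypothetical): with a shared 90 kHz clock and a shared random offset, two frames captured at the same instant get identical RTP timestamps regardless of whether they are audio or video.

```cpp
#include <cstdint>

// Same 90 kHz clock for both media; RTP timestamps wrap modulo 2^32.
uint32_t rtp_timestamp(double capture_time_sec, uint32_t random_start) {
    return random_start + (uint32_t)(capture_time_sec * 90000.0);
}
// An audio frame and a video frame with equal capture_time_sec (and the
// same random_start) therefore carry the exact same RTP timestamp.
```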