\-latomic is needed to prevent errors:
```
undefined reference to `__atomic_load_8'
```
This happens even if target_cpu is aarch64 when building on 32-bit
Raspberry Pi OS (which is still the default).
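A minimal sketch of the change, assuming linker flags are collected Makefile-style (the actual build files may differ):

```make
# link libatomic explicitly; 32-bit ARM has no native 64-bit atomics,
# so __atomic_load_8 and friends must be resolved from libatomic
LDFLAGS += -latomic
```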
\+ ignore a leading AUD NALU if present (and add own 4-byte start code
instead) - it could have caused problems when AUD+SPS+PPS was prepended
to a regular frame that is not an IDR frame
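The AUD-skipping logic can be sketched roughly as follows (a simplified illustration, not the actual UltraGrid code; `skip_leading_aud_h264` is a hypothetical helper):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// If the buffer starts with an Annex-B start code followed by an H.264
// AUD NAL unit (nal_unit_type == 9), return the offset just past that
// NALU so the caller can prepend its own 4-byte start code to the rest
// of the data; return 0 when there is nothing to skip.
inline std::size_t skip_leading_aud_h264(const uint8_t *buf, std::size_t len) {
    std::size_t off = 0;
    // accept both 4-byte (00 00 00 01) and 3-byte (00 00 01) start codes
    if (len >= 4 && buf[0] == 0 && buf[1] == 0 && buf[2] == 0 && buf[3] == 1) {
        off = 4;
    } else if (len >= 3 && buf[0] == 0 && buf[1] == 0 && buf[2] == 1) {
        off = 3;
    } else {
        return 0; // no start code -> nothing to skip
    }
    if (off >= len || (buf[off] & 0x1F) != 9) { // 9 == access unit delimiter
        return 0;
    }
    // the AUD payload is a single byte, so the next NALU begins at the
    // following start code
    for (std::size_t i = off + 1; i + 3 <= len; ++i) {
        if (buf[i] == 0 && buf[i + 1] == 0 &&
            (buf[i + 2] == 1 ||
             (i + 4 <= len && buf[i + 2] == 0 && buf[i + 3] == 1))) {
            return i; // offset of the start code of the next NALU
        }
    }
    return len; // the AUD was the only NALU
}
```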
The option solves a problem with streams that do not correctly prepend
video headers, namely SPS/PPS in an H.264 stream (currently the only
implemented case). Support for HEVC can be added later.
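A minimal sketch of the idea, assuming the SPS/PPS headers were cached earlier including their start codes (the function name and the simplified 4-byte start-code handling are illustrative only):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// If an encoded H.264 frame does not already begin with an SPS NALU
// (nal_unit_type == 7), prepend the cached SPS/PPS headers so that a
// decoder can start mid-stream.
inline std::vector<uint8_t> prepend_headers_if_missing(
        const std::vector<uint8_t> &frame,
        const std::vector<uint8_t> &sps_pps /* cached, incl. start codes */) {
    bool has_sps = frame.size() > 4 &&
                   frame[0] == 0 && frame[1] == 0 &&
                   frame[2] == 0 && frame[3] == 1 &&
                   (frame[4] & 0x1F) == 7; // 7 == SPS
    if (has_sps || sps_pps.empty()) {
        return frame; // nothing to do
    }
    std::vector<uint8_t> out = sps_pps;
    out.insert(out.end(), frame.begin(), frame.end());
    return out;
}
```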
This effectively reverts the recent commit d70e2fb3 (from 15th Aug).
The use of the parser seems to be a problem for the UG workflow in the
end - the point is that it may cache a packet until the arrival of the
next packet (commit 6e9a4142), so that commit added flushing with EOF
as a solution.
However, it seems to cause problems with a simple run (though not with
the H.264 parser):
```
uv -d gl -t testcard -c libavcodec:encoder=libx265
```
because the parser seems to be confused when parsing frames after EOF,
so for the subsequent frames it consumes 1 byte and produces 1 byte of
output. This is mostly harmless (it is actually '\0', part of the start
code), but it produces errors:
```
[lavc hevc @ 0x68216c0055c0] missing picture in access unit with size 1
```
A possible solution would be to re-create the parser for every frame
(sic!), but the overhead is unclear (and it would also apply to parsing
the frames anyway). Since piggy-backed frames should not occur since
commit c57f2fc5, it is perhaps best to remove this stuff altogether.
Actually, all options can be passed as a C string with av_opt_set(),
which then converts the value to the correct type. So use std::to_string
to convert non-C-strings to std::string.
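The approach can be illustrated like this (a sketch; `opt_to_string` is a hypothetical helper, not part of FFmpeg's API - the point is that one string-based setter such as av_opt_set() can replace per-type dispatch):

```cpp
#include <cassert>
#include <string>

// Convert any option value to a string so that a single string-based
// setter (which parses the string into the option's real type) can be
// used for all of them.
inline std::string opt_to_string(const std::string &v) { return v; }
inline std::string opt_to_string(const char *v) { return v; }
template <typename T>
inline std::string opt_to_string(T v) { return std::to_string(v); }
```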
Some codecs already have the default value in the description, e.g.
profile - _Set the profile (default main)_ - so it is unnecessary to
print the information a second time.
\+ do not print float default values with an "F" suffix because it can
be a bit misleading; indicate the type in braces instead
Fixes the failed run
<https://github.com/CESNET/UltraGrid/actions/runs/5925739298/job/16065753514>
At some point, Qt6 (Homebrew port _qt_) seems to be already installed,
but it doesn't bundle successfully. So enforce using Qt5 (the previous
symlink command didn't replace the `/usr/local/opt/qt` symlink if it was
already present).
\+ copy the link target instead of symlinking it to prevent:
Error: /usr/local/opt/qt@5 is not a valid keg
This implies driver version 520 on Linux.
That driver is not available for Kepler cards (the 1st generation
supporting NVENC), which are almost 10 years old and supported only
basic H.264 anyway.
This SDK version allows acceleration of AV1 on supported cards (GeForce
40 series - Ada Lovelace).
(see also previous 2 commits)
Fixed according to the [FFmpeg decode video example] - the parser needs
a final call with a zero-length buffer to flush the last frame.
\+ prefix eventual parser errors with an identifying prefix
[FFmpeg decode video example]:
https://github.com/FFmpeg/FFmpeg/blob/master/doc/examples/decode_video.c
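The caching behaviour and the need for a final flush call can be modelled with a toy parser (purely illustrative, not the FFmpeg API - it stands in for av_parser_parse2(), which may hold the current packet until the next one arrives):

```cpp
#include <cassert>
#include <vector>

// Toy model: the parser buffers each packet and only emits it when the
// next packet arrives. After the last packet, a call with empty input
// (buf_size == 0 in FFmpeg terms) flushes the pending frame; without it
// the stream would be delayed by one frame.
struct ToyParser {
    std::vector<int> pending; // data not yet emitted
    bool has_pending = false;

    // returns a completed frame, or an empty vector if none is ready
    std::vector<int> parse(const std::vector<int> &input) {
        if (input.empty()) {           // flush call
            std::vector<int> out;
            if (has_pending) {
                out = pending;
                has_pending = false;
            }
            return out;
        }
        std::vector<int> out;
        if (has_pending) {
            out = pending;             // previous packet completed now
        }
        pending = input;
        has_pending = true;
        return out;
    }
};
```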
This reverts commit d4be2d97b5.
The fix was not correct, because it just hid the root of the problem -
av_parser_parse2 needs a final call with buf_size == 0 to flush the last
frame. Otherwise it remains in the queue, effectively adding a delay of
one frame, because the particular frame gets flushed only when
processing the next frame.
See also:
<https://github.com/FFmpeg/FFmpeg/blob/master/doc/examples/decode_video.c>
removed skipping P frames after a missing P frame in VP8
This doesn't seem to be needed anymore, and removing it simplifies the
code a bit. Moreover, if frame drops were the cause, frames that would
previously have been skipped will now be presented, potentially reducing
twitching.
Updated the sync frame API to match the updated tile API as defined by
commit e9a407ad.
Note: neither of the 2 compressions using this API currently needs to
read additional frames.
There was a comment that the frame is no longer valid, which was a bit
misleading, because the tiles hold a reference to that frame but do not
modify it in any way.
Also removed assigning NULL altogether - it was not very functional,
since the frame will be released soon thereafter anyway, so it is
probably not needed.
Modified the API to fetch additional frames from the compression by
iteratively passing a NULL pointer (similarly to audio).
This is particularly useful when an inter-frame compression outputs 2
frames at once, which can occur when B-frames are enabled. It, however,
sometimes happens even with B-frames disabled, e.g. with the
h264/hevc_mf HW encoder on AMD (AMDh265Encoder; see commit d70e2fb3c).
Please note that the semantics of passing a NULL frame in this API
differ from the async API, where it works as a poison pill.
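The drain pattern can be sketched as follows (`MockCompress` and `compress_all` are illustrative stand-ins for the real compression module, not the actual UltraGrid API):

```cpp
#include <cassert>
#include <deque>
#include <vector>

// Mock compressor: pushing a frame may leave more than one finished
// frame ready, so the caller keeps calling compress() with nullptr
// until it returns "no frame" (-1 here).
struct MockCompress {
    std::deque<int> ready; // frames the encoder has finished

    int compress(const int *in) {
        if (in != nullptr) {
            // pretend this input produced two finished frames at once
            ready.push_back(*in);
            ready.push_back(*in + 1000);
        }
        if (ready.empty()) {
            return -1; // nothing more to fetch
        }
        int out = ready.front();
        ready.pop_front();
        return out;
    }
};

// the drain pattern used by the caller: first call passes the frame,
// subsequent calls pass nullptr until the module is exhausted
inline std::vector<int> compress_all(MockCompress &c, int frame) {
    std::vector<int> out;
    for (int r = c.compress(&frame); r != -1; r = c.compress(nullptr)) {
        out.push_back(r);
    }
    return out;
}
```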
If the encoder buffers frames for whatever reason (NVENC with delay,
hevc_mf sometimes batching 2 frames /returned frame counts
0,1,2,0,1,2.../), the output frame doesn't match the currently enqueued
one, and the metadata would be incorrect (the currently enqueued
metadata would be copied to a non-matching dequeued frame).
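One way to keep metadata matched to the right output frame is a FIFO filled at enqueue time, sketched below (a hypothetical helper, assuming the encoder preserves frame order; names are illustrative):

```cpp
#include <cassert>
#include <cstdint>
#include <deque>
#include <string>
#include <utility>

// Instead of copying metadata from the frame currently being enqueued
// to whatever frame comes out, record metadata per enqueued frame and
// pop the matching entry for each dequeued encoded frame.
struct MetadataQueue {
    std::deque<std::pair<int64_t, std::string>> q; // (pts, metadata)

    void on_enqueue(int64_t pts, std::string meta) {
        q.emplace_back(pts, std::move(meta));
    }

    // called once per dequeued encoded frame; the caller must have
    // enqueued at least as many frames as it dequeues
    std::string on_dequeue() {
        std::string meta = std::move(q.front().second);
        q.pop_front();
        return meta;
    }
};
```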
use AVParser for the received compressed data
This is particularly useful when the encoder produces frames glued
together - this shouldn't be the case most of the time, since UG
programmatically disables B-frames, but there can be some unhandled
encoders, notably the currently problematic _hevc_mf_ on AMD
(AMDh265Encoder).
The FFmpeg native H.264 and HEVC decoders are particularly sensitive to
being passed 2 encoded frames at once, breaking the picture with errors
like:
```
[lavc hevc @ 0x61c590004d80] Two slices reporting being the first in the same frame.
[lavc hevc @ 0x61c590004d80] Could not find ref with POC 7
```
or
```
[lavc h264 @ 0x6ee80c004d80] Frame num change from 3 to 4
[lavc h264 @ 0x6ee80c004d80] decode_slice_header error
```
After this fix, decoding is correct. Excess frames are discarded, but
decoding works and, more importantly, the user is informed what the
problem is.
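For illustration, a rough way to detect glued H.264 frames in a buffer (the actual fix relies on FFmpeg's AVParser; this heuristic is only a sketch): a new coded picture starts at a VCL NALU (types 1 and 5) whose slice header begins with first_mb_in_slice == 0, which under the ue(v) encoding means the first payload bit is set.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Count how many coded H.264 frames are glued together in one Annex-B
// buffer. Heuristic: count slice NALUs (types 1/5) whose first payload
// byte has the MSB set (ue(v)-coded first_mb_in_slice == 0).
inline int count_glued_frames_h264(const uint8_t *buf, std::size_t len) {
    int frames = 0;
    for (std::size_t i = 0; i + 4 < len; ++i) {
        std::size_t nal = 0;
        if (buf[i] == 0 && buf[i + 1] == 0 && buf[i + 2] == 1) {
            nal = i + 3; // 3-byte start code
        } else if (buf[i] == 0 && buf[i + 1] == 0 &&
                   buf[i + 2] == 0 && buf[i + 3] == 1) {
            nal = i + 4; // 4-byte start code
        } else {
            continue;
        }
        if (nal + 1 >= len) {
            break;
        }
        int type = buf[nal] & 0x1F;
        if ((type == 1 || type == 5) && (buf[nal + 1] & 0x80) != 0) {
            ++frames; // slice with first_mb_in_slice == 0 -> new picture
        }
        i = nal; // continue scanning past this start code
    }
    return frames;
}
```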
Moved reading eventual forced props to the beginning. Previously, it had
to wait until the first IDR, which is unnecessary, because the custom
NAL unit is appended to every frame.
Also, it is not needed to read it in all compressions, since it is
codec-specific; as implemented now, it is only applicable to H.264/HEVC.