The concept of fast/slow decoders was obsolete and no longer a supported
approach - the preferred way nowadays is to use the generic comparator
and optionally let the user select a policy. It was also error-prone to
decide whether to mark a conversion as slow or fast, and the order in
the array mattered as well.
Subsequently replaced get_fastest_decoder_from() with
get_best_decoder_from() calls - in the occurrences where it was used,
both calls will likely end up with the same result anyway
(the candidate set is usually something like RGB, RGBA and UYVY).
As a last-resort sort tie-break, the codec_t value was used, but the
sorting was actually descending. It is perhaps more natural to have it
ascending (although this is not so important).
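The ascending tie-break can be expressed as an ordinary strict-weak-ordering comparator; a hypothetical sketch (candidate, score and the codec values are illustrative, not the actual UltraGrid types):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Illustrative sketch of the tie-break described above: when two
// candidates compare equal on the primary criterion, order them by
// the numeric codec value ascending.
struct candidate {
    int score; // primary criterion, higher is better
    int codec; // stands in for the codec_t value
};

inline bool decoder_less(const candidate &a, const candidate &b) {
    if (a.score != b.score)
        return a.score > b.score; // primary: better score first
    return a.codec < b.codec;     // last resort: ascending codec value
}
```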
The handling of source::sr is MT-unsafe when the library is used from
within 2 threads. Typically, when running `uv -t testcard -d dummy`
(with -DDEBUG), check_database() is run from the sender thread while
process_rtcp_sr() runs from the receiver thread, leading to a crash on
line 589 using the old structure data that has been freed and
overwritten by some new data.
The above-mentioned crash doesn't usually occur, since check_database()
is run only if DEBUG is defined. However, in theory it may happen that
both threads accidentally run process_rtcp_sr() (although this has not
been observed). In that case either a double free or a leak could occur
if the runs of process_rtcp_sr() are interleaved in a wrong way.
Unfortunately, as the RTP library is inherently MT-unsafe, there can be
plenty of similar undiscovered problems.
(Note: to test the check_database() crash, running under GDB seems to
increase the likelihood that it crashes. This change, on the other hand,
seems to reduce that likelihood, but clearly doesn't eliminate it,
because the content of the struct is read while it may be freed by the
other thread.)
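A way to make such shared state safe is to guard both the replacement and every read with one lock; a minimal sketch (the names are illustrative, not the RTP library's actual API):

```cpp
#include <cassert>
#include <mutex>
#include <string>

// Hypothetical sketch: one lock protects a shared report pointer so a
// reader never dereferences data that the writer has already freed.
struct sender_report { std::string data; };

static std::mutex     sr_lock;              // protects current_sr
static sender_report *current_sr = nullptr;

// writer side (think process_rtcp_sr): swap in a fresh report
void store_sr(sender_report *fresh) {
    std::lock_guard<std::mutex> guard(sr_lock);
    delete current_sr; // no reader can hold the old pointer here
    current_sr = fresh;
}

// reader side (think check_database): copy the data out under the lock
std::string read_sr() {
    std::lock_guard<std::mutex> guard(sr_lock);
    return current_sr != nullptr ? current_sr->data : std::string();
}
```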
Begin decoding of an HEVC stream only after a frame starting with a VPS
NALU has been seen. This improves the behavior in that there is no
longer an initial flood of decoding errors until the first IDR frame is
received. This change is analogous to the one already present for H.264
(which uses the SPS NALU).
In case of problems, check whether it holds that the VPS NAL is always
first (this seems to be the case, but it is not certain whether it is
mandatory; the analogous presumption holds for H.264 and there has been
no counterexample so far).
It looks like with the current version of libx264 (164 r3095) the first
NALU in non-IDR frames is SEI, so the workaround was effectively
skipped. As verified, libx264, nvenc and QSV (on ALD-P) all produce SPS
first, so we can treat frames beginning with SEI as non-IDR.
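For reference, the NAL-type checks this heuristic relies on follow the standard H.264/HEVC NAL header layouts; a minimal sketch (start-code scanning and the surrounding decoder logic are omitted):

```cpp
#include <cassert>
#include <cstdint>

// H.264 (ITU-T H.264): nal_unit_type is the low 5 bits of the first
// NAL header byte; SEI is type 6, SPS is type 7.
enum { H264_NAL_SEI = 6, H264_NAL_SPS = 7 };
inline int h264_nal_type(uint8_t hdr) { return hdr & 0x1F; }

// HEVC (ITU-T H.265): nal_unit_type occupies bits 1..6 of the first
// NAL header byte; VPS is type 32, SPS is type 33.
enum { HEVC_NAL_VPS = 32, HEVC_NAL_SPS = 33 };
inline int hevc_nal_type(uint8_t hdr) { return (hdr >> 1) & 0x3F; }
```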
suppress the missing-field-initializers warning when assigning an
AVChannelLayout to AVCodecContext::ch_layout. This is C++-specific; the
construction is entirely fine in C (the remaining members are
zero-initialized).
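The semantics behind the warning can be demonstrated with a plain aggregate (a generic illustration, not the actual FFmpeg struct): both C and C++ zero/value-initialize the members not covered by the initializer list; C++ compilers merely emit -Wmissing-field-initializers for it.

```cpp
#include <cassert>

// Stand-in for a partially initialized aggregate such as
// AVChannelLayout; the fields are illustrative.
struct layout {
    int   order;
    int   nb_channels;
    void *map;
};

// Aggregate initialization with fewer initializers than members:
// triggers -Wmissing-field-initializers in C++, yet the remaining
// members are value-initialized (zeroed) just as in C.
inline layout make_partial() { return { 1 }; }
```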
Rewritten to C - it may seem a bit invasive, because the rewrite is
quite large. On the other hand, it cleans up the code a bit and also
removes some inefficiencies that had been there (now not possible
because of the absence of RAII).
Some devices cannot be used in one direction (input or output), or in
either. Those are usually hidden but are shown in verbose mode. In that
case, use a different color to highlight that they cannot be used.
do not assume that the given configuration string cannot be a NULL
pointer
- [cap] rather zero-initialize the struct (it doesn't seem to be a
problem now, but it is more convenient to have the value defined
somehow)
- [cap] removed a misleading comment (probably dating from the times
when PortAudio was not modularized and was part of the audio
conglomerate module)
The maximal number of channels was chosen instead of
DEFAULT_AUDIO_CAPTURE_CHANNELS. This was noticeable especially for the
PulseAudio plugin, which has 64 channels.
fixes commit 89747981
- use LOG_LEVEL_NOTICE - when using the default device, this information
may be quite important
- fixed spacing (missing spaces, because it was a bit tricky when
printed in multiple steps)
Use pixfmt_desc instead of codec_t for the internal compression
representation. This better represents e.g. 10-bit 4:4:4 YUV, which had
previously been deduced as Y416, so that e.g. for DeckLink, R12L was
chosen because the format was thought to be 16-bit, not 10-bit.
This fixes:
uv -t testcard:codec=R10k -c libavcodec:encoder=libx265:yuv -d dummy:codec=decklink
being detected internally as Y416 and configured as R12L. Now it is
internally Y444_10 and the DeckLink output will be configured to R10k.
Also removed the params "lavd-use-10bit" and "lavd-use-codec", which had
already been deprecated for some time and can be replaced by
"decoder-use-codec" if needed.
When using short getopt options, a user may make a mistake in a
parameter name, resulting in a non-numeric string being passed to stoi,
which leads to a crash on an uncaught exception.
+ remove try/catch from parse_port (no longer needed due to the above)
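The crash-free parse can be sketched with strtol-style validation instead of a bare stoi (parse_number is an illustrative helper, not the actual parse_port signature):

```cpp
#include <cassert>
#include <cstdlib>
#include <optional>
#include <string>

// Reject anything that is not a plain decimal number in range instead
// of letting std::stoi throw an uncaught std::invalid_argument.
std::optional<int> parse_number(const std::string &s) {
    if (s.empty()) {
        return std::nullopt;
    }
    char *end = nullptr;
    long val = std::strtol(s.c_str(), &end, 10);
    if (*end != '\0') {           // trailing garbage -> not a number
        return std::nullopt;
    }
    if (val < 0 || val > 65535) { // e.g. a port-range check
        return std::nullopt;
    }
    return static_cast<int>(val);
}
```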
AV_CODEC_FLAG_QSCALE is used to signal that a fixed qscale should be
used, and some codecs like the QSV ones require it to signal that CQP is
used.
refer to GH-298
The mjpeg encoders (FFmpeg embedded and QuickSync) don't respond to the
bitrate setting, so set cqp by default. This gives the user guidance
(via the log message) that the cqp param may be set.
Setting the constant quality is quite codec-specific, so do not provide
2 distinct options with similar semantics that are mutually
incompatible.
Instead, try to interpret the cqp parameter and set the codecs'
properties individually.
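The per-codec interpretation could look like a small dispatch on the encoder name (a hypothetical sketch; the option names are assumptions and would need checking against each encoder's actual AVOptions):

```cpp
#include <cassert>
#include <string>
#include <utility>

// Map the single user-facing cqp value to an encoder-specific option.
// The returned key/value pair is illustrative; real code would set it
// via av_opt_set() or the corresponding AVCodecContext fields.
std::pair<std::string, std::string>
cqp_to_option(const std::string &encoder, int cqp) {
    if (encoder == "libx264") {
        return { "qp", std::to_string(cqp) }; // constant-QP mode
    }
    // assumed fallback: QSV-like encoders take global_quality together
    // with AV_CODEC_FLAG_QSCALE
    return { "global_quality", std::to_string(cqp) };
}
```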