Compared to codec options set by UltraGrid implicitly, a failure to set a
user-provided option is treated as an error. But the log message didn't
indicate that the failed option setting was actually fatal, so use error
verbosity, which makes it more visually apparent.
While libsvtav1 is still slightly ahead on x86_64, AOM AV1 performs
significantly better (2x faster) on the M1 Mac.
The above holds for native builds; the x86_64 build running SVT AV1 on an
M1 Mac actually doesn't seem to run correctly at all: it produces just a
blank picture (green, which is what a zeroed YCbCr buffer looks like).
+ do not allocate it ahead in _init
For HuffYUV and FFV1 this caused a crash (perhaps deconfigure was run twice
for those codecs with extradata /they use a different path than usual
codecs without that data/, leaving the AVPacket pointer NULL after the
first run).
Unreferencing should not be necessary: we are not refcounting the packet,
and the FFmpeg example (decode_video.c), which works similarly, doesn't do
it anyway. Also, av_packet_free() should unreference it according to the
documentation.
When compressing very small video (16x16) with libx265, the first frame is
2690 B, which is more than W*H*4 (1024), leading to a crash on an assert.
Steps to reproduce the fixed problem:
uv -t testcard:size=16x16 -c libavcodec:encoder=libx265
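The idea of the fix can be sketched like this (the constant name and its
value are hypothetical, chosen only to illustrate clamping the estimate):

```c
#include <stddef.h>

/* Sketch: for tiny frames the compressed output can exceed W*H*4
 * (here 16*16*4 = 1024 B < 2690 B), so clamp the output-buffer size
 * estimate to a sane minimum. MIN_OUT_BUF is a hypothetical floor. */
enum { MIN_OUT_BUF = 16 * 1024 };

static size_t out_buf_len(int width, int height)
{
        size_t len = (size_t) width * height * 4;
        return len < MIN_OUT_BUF ? MIN_OUT_BUF : len;
}
```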
Set to DEFAULT_CQP (21) instead of DEFAULT_CQP_QSV (5000). The usable
range here seems to be in the tens, not in the thousands as for the other
constant, so set it accordingly so that detail remains satisfactory when
the quality (cqp) parameter is not given explicitly.
AV_CODEC_FLAG_QSCALE is used to signal that a fixed qscale should be used,
and some codecs like the QSV ones require it to signal that CQP is in use.
refer to GH-298
MJPEG encoders (FFmpeg's embedded one and QuickSync) don't respond to the
bitrate setting, so set cqp by default. This gives the user guidance (via
the log message) that the cqp parameter may be set.
Setting the constant quality is quite codec-specific, so do not provide
2 distinct options with similar semantics that are mutually incompatible.
Instead, try to interpret the cqp parameter and set the codecs' properties
individually.
Since the conversion policy is now "dsc" (depth-subs-cs) by default, it may
trigger more costly conversions, so print a hint about enforcing the old
behavior if the decode time window is not being kept and the pixel format
change takes at least 1/4 of the overall de/compress time.
This will allow the user more control over compression when requiring
different properties than the default sorting would select.
This is mostly to avoid the user specifying '--param lavc-use-codec', which
is not supported anyway.
Set params at the end of configure; otherwise subsequent
libavcodec_compress_tile() calls would assume the encoder is configured,
which is not true, and it would probably crash.
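The ordering concern can be illustrated with a minimal sketch (struct and
function names are hypothetical, not the actual UltraGrid state):

```c
#include <stdbool.h>

/* Sketch: mark the encoder as configured only after everything else has
 * succeeded, so a subsequent compress call cannot observe and act on a
 * half-built state. */
struct enc_state {
        bool configured;
        /* ... codec context, conversion setup, ... */
};

static bool configure(struct enc_state *s)
{
        s->configured = false;  /* not usable while (re)configuring */
        /* ... open codec, set up conversions; on failure return false
         * with `configured` still unset ... */
        s->configured = true;   /* set params/flag at the very end */
        return true;
}
```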
changed prototypes of some functions:
- to_lavc_vid_conv - accept (char *) instead of (struct video_frame)
- get_av_pixfmt_details - (enum AVPixelFormat) instead of int
+ make to_lavc_vid_conv.c partially C++ compatible (I first attempted to
include it as if it were libavcodec.cpp), so leave it (just in case)
Handle the conversion to a codec-supported input pixel format entirely in
to_lavc_vid_conv.
+ removed AVPixelFormats (both codec input and sws input) from the struct
(no longer needed)
+ cleanup - set the sws pointer to NULL (prevents a potential double free)
- free frame parts with av_frame_free
- remove very old compat guard (LIBAVCODEC_VERSION_MAJOR < 53)
- increment AVFrame::pts for the final AVFrame
(the state_video_compress_libav::in_in frame may not be passed to the
encoder)
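Since the frame actually handed to the encoder may be a converted copy
rather than the input frame, the pts has to be kept in the compress state
and stamped onto whichever AVFrame is finally sent. A trivial sketch of
that counter (names hypothetical):

```c
#include <stdint.h>

/* Sketch: monotonically increasing pts kept in the compress state,
 * assigned to the final AVFrame regardless of which buffer it is. */
struct pts_state {
        int64_t cur_pts;
};

static int64_t next_pts(struct pts_state *s)
{
        return s->cur_pts++;    /* assign current, advance for next frame */
}
```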