Fixed a crash with:
uv -t testcard:codec=R10k:size=3600x2160 -d dummy -c \
libavcodec:encoder=libx265:disable_intra_refresh --param \
force-lavd-decoder=hevc_qsv,decoder-use-codec=R10k
The problem was that a width of 3600 rounds up to 3648 pixels, for
which not enough space was allocated (even with MAX_PADDING, which
doesn't protect against such cases).
vc_get_size() should be used instead of vc_get_linesize() in all these
cases.
Do not attempt to initialize swscale if the user requests some
intermediate format explicitly with `--param lavd-use-codec` - silently
converting with swscale in that case is presumably not what the user
intended.
Allow using an AV conversion and, consequently, a UG pixfmt conversion.
This should avoid falling back to swscale conversion in most cases.
Moreover, it is not only a fallback but also a generalization that
reduces the need to write an AV conversion for plenty of UG pixfmts.
When inter-frame format decoding starts, there may be an initial burst
of errors that doesn't actually mean anything (at least from the point
of view of this warning message).
Do not use av_to_uv_conversions::native to select the best UltraGrid
codec_t matching an AVPixelFormat.
This will allow deploying policies (to keep color space or bit depth)
and doesn't require a developer to pick one <codec_t,AVPixelFormat>
pair as "privileged".
The structure is no longer part of the API and the indirection is
unneeded. On the other hand, it was previously not possible to query
its size.
+ use enum AVPixelFormat in the struct instead of int
+ removed unneeded headers from module header
The decoder may attempt a first pixel format (e.g. nv12) that won't be
used in the end because it changes its mind to e.g. p010. _But_ the
display may have already been reconfigured to v210, which would cause a
swscale conversion for nv12 (but not for p010); thus it would look like
a problem in the output, because the swscale information is shown by
default while the subsequent pixel format change isn't.
1. push AV_PIX_FMT_VAAPI to the back of the preferred formats list.
This changes the HEAD~5 behavior - before that it was always first.
After that it was subject to the sort - also sorted first, but for all
codecs, not only those named '.*vaapi.*', which is undesirable
because it currently uses the NV12 format exclusively.
Finally, this change puts AV_PIX_FMT_VAAPI at the back of the list.
It is not harmful, however, because the vaapi-named codecs support
just this pixel format. On the other hand, it may serve as a fallback
for e.g. hevc_qsv. In the future, if more SW formats are added, it
could be ordered like the other formats.
2. log a message that we are using vaapi (otherwise only
`Selected pixfmt: nv12` was displayed)
3. always set the frame pixfmt to state_video_compress_libavcodec::selected_pixfmt.
Either swscale is not performed, and then selected_pixfmt is already
AV_PIX_FMT_NV12; or, if swscale is performed, selected_pixfmt should be
its input format anyway.
Added conversions from R10k/R12L to Y416 and from Y416 to XV30, mainly
to support HEVC QuickSync. These conversions indirectly allow the
R10k/R12L->XV30 conversion.
As it is written now, passing the AVCodec parameter was unneeded, since
the function returns the list of all AV pixel formats that can be used,
regardless of the actual codec. Actually, it was only used to check for
`vaapi` in the codec name - this is not needed; we can add vaapi to the
list unconditionally, and if the codec doesn't support it, it is simply
skipped.
Fixes e.g. x2rgb10le being displayed as supported even when
blacklisted:
[lavc] Blacklisting x2rgb10le because there has been issues with this pixfmt and current encoder (hevc_qsv) , use '--param lavc-use-codec=x2rgb10le' to enforce.
[lavc] Codec supported pixel formats: nv12 p010le p012le yuyv422 y210le qsv vaapi d3d11 bgra x2rgb10le vuyx xv30le
[lavc] Supported pixel formats: gbrp12le rgb48le gbrp16le gbrp10le x2rgb10le rgb24 rgba gbrp bgr0 yuv444p12le yuv444p16le yuv444p10le yuv422p10le yuv420p10le
get_first_matching_pix_fmt() was slightly rewritten to use a regular
iterator.