+ added some direct pixfmt_conv.h includes (in an attempt to remove its
inclusion from video_codec.h, which in the end did not happen, but it is
still better to include it directly)
If the buffer is full, flush the output even if there is no NL at the
end and issue a warning (this should be handled: either it is an error or some
module produces unexpectedly long output).
Do not prefix messages that do not start on a new line with timestamps, e.g.:
$ uv -s embedded -t testcard -d file:n=/dev/null -V
[1698853041.393] [lavc] Stream #0:0[1698853041.393] : Video: rawvideo, 1 reference frame (UYVY / 0x59565955), uyvy422, 1920x1080 (0x0), q=2-31, 829440 kb/s[1698853041.393] , [1698853041.393] 25 tbn[1698853041.393]
Flush the output only on NL; until then, store it in an internal
thread-local buffer.
Locking was removed, as the static data are now thread-local.
This improves 808b3de3.
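The buffering described above could be sketched as follows (a minimal illustration; the name `log_write`, the fixed timestamp, and the `sink` output stand-in are hypothetical, not the actual UltraGrid API):

```c
#include <stdio.h>
#include <string.h>

enum { BUF_LEN = 128 };

static char sink[1024];                 /* stands in for the real output */
static _Thread_local char buf[BUF_LEN]; /* per-thread line buffer */
static _Thread_local size_t buf_used;
static _Thread_local int at_line_start = 1;

static void flush_buffer(void) {
    strncat(sink, buf, buf_used);       /* write out whatever is buffered */
    buf_used = 0;
}

/* Buffer the message and flush only on NL, so each *line* gets exactly
 * one timestamp prefix even if it was produced by several calls. */
void log_write(const char *msg) {
    for (const char *p = msg; *p != '\0'; ++p) {
        if (at_line_start) {            /* timestamp only at line start */
            buf_used = (size_t) snprintf(buf, sizeof buf, "[%d] ", 1698853041);
            at_line_start = 0;
        }
        if (buf_used == sizeof buf) {   /* full: flush even without NL */
            flush_buffer();
            fputs("warning: unexpectedly long output\n", stderr);
        }
        buf[buf_used++] = *p;
        if (*p == '\n') {               /* flush only on NL */
            flush_buffer();
            at_line_start = 1;
        }
    }
}
```

With this, the two partial writes in the example above would produce a single timestamped line instead of one timestamp per write.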
Set metadata on out_frame only and copy it to tmp_frame (to avoid repeating
every assignment and potentially forgetting something).
+ check tmp_frame for allocation failure
Some encoders use `colorspace` and `color_range` from AVFrame,
e.g. _hevc_videotoolbox_:
uv -t testcard -c libavcodec:encoder=hevc_videotoolbox
(defaults to bgra, because the other formats don't keep 4:2:2 subsampling;
supported at the time: videotoolbox_vld nv12 yuv420p bgra p010le).
The fixed error was producing this message:
```
[lavc hevc_videotoolbox @ 0x12fc04190] Could not get pixel format for color format 'bgra' range 'unknown'.
[lavc hevc_videotoolbox @ 0x12fc04190] Error: Cannot convert format 28 color_range 0: -22
```
For the following command, the deduced conversion is to 10-bit YUV:
$ uv -t testcard:c=RGB -c libavcodec:enc=libx264 -d gl
[to_lavc_vid_conv] converting RGB to yuv444p10le over R12L
(and over R10k for RGBA), which is correct, because we don't have any
8-bit YUV pixfmt keeping 4:4:4 subsampling.
But this is quite inefficient, because the conversions are more expensive
and we are needlessly compressing 10-bit YUV instead of 8-bit.
Thus (as we don't have any UG 8-bit YUV 4:4:4 pixfmt) the rgb_to_yuv444p
conversion was added.
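The added conversion presumably resembles the following floating-point sketch (illustrative only; the real rgb_to_yuv444p is integer-optimized and its exact coefficients are not shown here). It produces limited-range BT.709 YCbCr at full 4:4:4 resolution:

```c
#include <stddef.h>

static unsigned char clamp8(double v) {
    return (unsigned char) (v < 0 ? 0 : v > 255 ? 255 : v + 0.5);
}

/* Convert packed 8-bit RGB to planar 8-bit YCbCr 4:4:4,
 * BT.709 matrix, limited ("TV") range. */
void rgb_to_yuv444_bt709(const unsigned char *rgb, unsigned char *y,
                         unsigned char *cb, unsigned char *cr, size_t pixels)
{
    for (size_t i = 0; i < pixels; ++i) {
        double r = rgb[3 * i], g = rgb[3 * i + 1], b = rgb[3 * i + 2];
        double luma = 0.2126 * r + 0.7152 * g + 0.0722 * b; /* BT.709 luma */
        y[i]  = clamp8(16.0 + luma * (219.0 / 255.0));  /* Y in 16..235 */
        cb[i] = clamp8(128.0 + (b - luma) / 1.8556 * (224.0 / 255.0));
        cr[i] = clamp8(128.0 + (r - luma) / 1.5748 * (224.0 / 255.0));
    }
}
```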
Check whether video subsampling is 4:2:0 from sw_pix_fmt rather than iterating
over the received pix_fmts. This is simpler and more effective, since the SW
format is set by the get_format() callback to the nominal SW format (if any).
Set AVHWFramesContext::sw_format to first of av_hwframe_transfer_get_formats().
This is consistent with how MPV does it. Fixes NV12 being transmitted
even though AVHWFramesContext::sw_format was set to yuv420p, causing chroma
channel corruption (because the NV12 data was misinterpreted as the
latter), occurring on AMD cards; steps to reproduce:
```
uv -t testcard -c lavc:enc=libx264:safe -d gl --param use-hw-accel=vaapi
```
See also:
<66e30e7f2f>
Advertise conversion to HW-accelerated codecs (e.g. HW_VDPAU, RPI4_8)
only if the probe (which now works the same way as regular init since
HEAD^) would initialize to an accelerated codec.
This prevents situations such as when `--param use-hw-accel=vaapi -d
gl` is used, in which case HW_VDPAU was selected as the display codec,
although not intended.
Make compressions other than Opus work, taking a sample format other
than S16 (interleaved).
Also accept S16P and FLTP (needed for AAC, MP3, Vorbis).
+ also process stereo input (currently only mono is accepted)
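For illustration, the interleaved-S16 to planar-float (FLTP) part could look like the following simplified stereo-only sketch (the function name is made up, not the actual code):

```c
#include <stdint.h>
#include <stddef.h>

/* De-interleave S16 stereo into per-channel float planes (the FLTP
 * layout expected e.g. by the AAC encoder), normalized to -1.0..1.0. */
void s16_interleaved_to_fltp(const int16_t *in, float *out_left,
                             float *out_right, size_t frames)
{
    for (size_t i = 0; i < frames; ++i) {
        out_left[i]  = in[2 * i]     / 32768.0f;  /* even samples: left  */
        out_right[i] = in[2 * i + 1] / 32768.0f;  /* odd samples:  right */
    }
}
```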
Write uncompressed output only if the user explicitly specifies the NUT
container, to avoid unexpected results when `-d file` is writing an
overwhelming amount of data.
When included prior to (MinGW-w64) windows.h, it causes a compilation
failure, because "R" is used as a parameter name in the transitively included
avx512fp16intrin.h.
Do not require the file to be included before stdlib.h, nor stdlib.h to be
used with __STDC_WANT_LIB_EXT1__ = 1. The bounds-checking API is currently
not implemented anywhere anyway, and we may use the system's native secure
qsort implementation (qsort_r on *NIX, the MS variant qsort_s).
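Where neither variant is available, a context-taking qsort can be emulated on top of plain qsort() with a thread-local trampoline; a minimal sketch (the names qsort_ctx/cmp_int_dir are made up for illustration, not the actual UG code):

```c
#include <stdlib.h>

static _Thread_local int (*g_cmp)(const void *, const void *, void *);
static _Thread_local void *g_ctx;

static int trampoline(const void *a, const void *b)
{
    return g_cmp(a, b, g_ctx);          /* forward the stashed context */
}

/* qsort with an extra context argument, built on plain qsort();
 * thread-local globals keep it safe without locking */
void qsort_ctx(void *base, size_t nmemb, size_t size,
               int (*cmp)(const void *, const void *, void *), void *ctx)
{
    g_cmp = cmp;
    g_ctx = ctx;
    qsort(base, nmemb, size, trampoline);
}

/* example comparator: context selects direction (+1 asc, -1 desc) */
int cmp_int_dir(const void *a, const void *b, void *ctx)
{
    int dir = *(const int *) ctx;
    return dir * (*(const int *) a - *(const int *) b);
}
```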
Removed some old FFmpeg compat functions that cannot be used, since UG
won't compile with that version anyway (even newer compat was already
removed).
+ moved some function definitions from the header to the implementation file
(they were perhaps needlessly in the header, and it worsens readability)
This was an unnecessary compat macro, since we always build with a compiler
that understands __attribute__, except for the AJA module on MSW, which
uses the MSVC compiler.
As a last-resort sort tie-break, the codec_t value was used, but the sorting
was actually descending. It is perhaps more natural to have it ascending
(although not that important).
Fixed wrong ordering in rare cases, e.g.:
1. R10k->UYVY->yuv444p
2. R10k->UYVY->yuv422p
For both conversion chains, the same property (10-bit 4:2:2) is selected. But
as a tie-break, R10k is compared with the resulting AVPixelFormat's
properties, so #1 wins because it seemingly keeps 4:4:4 subsampling, not
reflecting the degradation in the chain.
To fix it, use for the second comparison the minimum of the src pixfmt
desc and the conversion chain's desc (both are the same, as mentioned in the
previous paragraph).
+ improved debugging (in debug2, print comparison results & print
the properties of the UV->UV-only conversions)
This will allow the user more control over compression when requiring
different properties than the default sorting would select.
This is mostly to avoid the user specifying '--param lavc-use-codec', which
is not supported anyway.
configure av_to_uv_conversion and swscale only once if needed
On the one hand, this has performance advantages. More importantly,
from_lavc_vid_conv now generates a warning on depth/subsampling
degradation. If that is the case, without this change the output would be
spammed with that warning for every decoded frame.
Steps to reproduce the fixed behavior:
uv -t testcard:codec=R10k -d dummy:codec=v210 -c libavcodec
Improved/fixed the AV pixfmt comparison algorithm (get_available_pix_fmts).
If UV->UV->AV is involved, the lower bound of the properties (bit depth,
subsampling) is used for the comparison. This prevents e.g. the
conversion chain v210->UYVY->10b from being incorrectly treated as 10-bit
(because there is an 8-bit format in the chain).
Removed the .id member from struct pixfmt_desc; the comparator
compare_pixfmt is now usually not used directly as a comparator itself but is
called from within another comparator. If 2 pixel formats have the same
properties, the caller should rather decide by itself whether there is some
other metric to consider, or just compare by identity in the end.
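The lower-bound logic might be sketched as follows (struct and field names are illustrative, not the actual pixfmt_desc layout):

```c
#include <stddef.h>

/* simplified property descriptor: bit depth + subsampling (e.g. 422) */
struct pf_desc {
    int depth;
    int subsampling;
};

/* The properties of a conversion chain are the element-wise minimum over
 * its members, so v210 (10b) -> UYVY (8b) -> 10-bit AV format is
 * correctly rated as an 8-bit chain. */
struct pf_desc chain_lower_bound(const struct pf_desc *chain, size_t n)
{
    struct pf_desc r = chain[0];
    for (size_t i = 1; i < n; ++i) {
        if (chain[i].depth < r.depth)
            r.depth = chain[i].depth;
        if (chain[i].subsampling < r.subsampling)
            r.subsampling = chain[i].subsampling;
    }
    return r;
}
```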
- renamed in_frame - it is now actually out_frame from our perspective
- same_linesizes - exit early + doxy documentation
- other documentation improvements
changed prototypes of some functions:
- to_lavc_vid_conv - accept (char *) instead of (struct video_frame)
- get_av_pixfmt_details - (enum AVPixelFormat) instead of int
+ made to_lavc_vid_conv.c partially C++ compatible (I first attempted to
include it as if it were libavcodec.cpp), so leave it (just in case)
Handle conversion to a codec-supported input pixel format entirely in
to_lavc_vid_conv.
+ removed AVPixelFormats (both codec input and sws input) from the struct
(no longer needed)
+ cleanup - set the sws pointer to NULL (prevents a potential double free)
YUV is always limited-range BT.709, RGB full-range. Thus it doesn't need
to be in the conversion table for every single pixfmt. Also, the UG pixfmt was
actually useless in the prototype.
FFmpeg's Y210 and Y212 can be directly mapped to UG Y216 (they just have
the lower bits unused). This shouldn't, however, be given directly to
uv_to_av_pixfmts, because the mapping isn't 1:1. If it were, the conversion
Y216->Y210 would, e.g. for an encoder, be considered lossless, but it
actually reduces the depth from 16 to 10 bits. So just dummy conversions
were added for them.
Also removed the RG48 and RGB dummy conversions to the corresponding AV pixel
formats. This is the opposite of the case mentioned in the previous paragraph:
the correspondence is 1:1. Also, the conversion/memcpy is dispatched
directly in av_to_uv_convert().
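A tiny sketch of why the Y216 -> Y210 direction cannot be lossless: Y210 keeps the 10-bit sample in the high bits of each 16-bit word, so the 6 low bits of a Y216 component are dropped (the function name is illustrative):

```c
#include <stdint.h>

/* Reduce a 16-bit component to the Y210 representation: the 10-bit value
 * lives in the high bits, the low 6 bits are unused (zeroed). */
uint16_t y216_component_to_y210(uint16_t v)
{
    return v & 0xFFC0;  /* 16 -> 10 bit: the low 6 bits are lost */
}
```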