Use UYVY as a fallback intermediate if no eligible pixfmt is reported.
This could happen e.g. if no conversion fulfills the constraints (e.g.
currently R10k to anything with regard to 4:2:0 subsampling).
For some reason, this sometimes fails, resulting in the following error:
[DeckLink capture] set_display_mode_properties: out of memory
+ release_bmd_api_str: NOOP if nullptr is passed (which can now be the
case since the code is more permissive)
In the second iteration of the for cycle, $NAME was derived from something
like "/usr/lib/libva.so.2:/usr/lib/libva-drm.so.2" (s/libva/libva-x11),
which obviously didn't exist as a file.
Fixes:
$ <AppImage> --list-modules | grep -A 2 'Errors:'
Errors:
ultragrid_acompress_libavcodec.so
./squashfs-root2/usr/lib/libva-x11.so.2: undefined symbol: va_fool_postp
Added a rate limiter that occasionally allows excessive frames.
It permits using 1.5x the frame time for a frame 2x bigger than the moving
average, provided that 4 normal frames (using 0.75x the frame time) were
emitted in between.
This mode is now default (for video, audio doesn't use rate limiter).
The parsing was done early, before the dynamic modules that export their
own parameters were loaded.
Now the parsing is done in 2 steps -- first scan only for already known
parameters (the output buffer settings are needed for preinit), ignoring
"help".
The second step is the full parse (in main), when all modules are loaded
(later in common_preinit()).
Fixes #237.
The regression was introduced by 1dc89920.
It is not fully conformant, but the wiki description is rather vague and
it is e.g. impossible to represent a "superblack" color if we assume 0 as
the base black.
- show usage with "help"
- print an error if setvbuf fails (but do not return an error -- there may
be some platform problem, and since the setting is always applied, failing
would unconditionally prevent UltraGrid from running)
- if a usage error (or help) occurs, exit UltraGrid
setvbuf() should be called on a stream prior to any other operation on the
stream. Previously it was called too late -- even after the configuration
summary had been printed to stdout.
The value 0 is mentioned in man setvbuf(3) and indeed glibc
implements setlinebuf as:
_IO_setvbuf (stream, NULL, 1, 0);
However, apart from the mention in the manual page, this extension doesn't
seem to be documented anywhere else (it is not clear whether it is valid
with _IOFBF), and it is not widespread beyond glibc, since [1] forbids it.
Also, using _IOLBF+0 behaves the same way as _IOLBF+BUFSIZ (it buffers
BUFSIZ bytes).
[1] https://www.ibm.com/docs/en/i/7.1?topic=functions-setvbuf-control-buffering
Replace the original factor of 8 with 2.5. As a drawback, the resulting
stream is 13% smaller than the requested bitrate, but the maximal frame is
indeed at most 2.5x bigger than the average (measured on the NewZealand
short).
This used to cause some artifacts when smaller than 8, but that doesn't
seem to be true anymore. Anyway, this commit also adds the param
`lavc-rc-buffer-size-factor` in case there is a need to increase the
factor.
Impact on PSNR (NZ@20 Mbps) -- previous: avg. 52.27, new: 49.95 (but with
a smaller actual bitrate; using the same bitrate yields 51.36).
Do not iconify when the window is fullscreen and loses focus -- this is
particularly annoying when using 2 displays (but can have a rationale in a
single-display setup when the platform keeps full-screen windows always on
top).
Although in theory it may be beneficial to limit NAL unit sizes to
approximately the size of a packet, and indeed it was in synthetic tests,
real-world use seems to exhibit the opposite - potential artifacts when
exceeding 32 slices (some huge picture). Also, the resiliency was somehow
worse.
The user may still pass the parameter explicitly to see whether the
behavior improves or worsens: "-t libavcodec:encoder=libx264:slice-max-size=1200"
(The main motivation was to make the video capture visually more apparent
when having cap+disp+audio. This is now solved by the generic indicator, so
use that instead.)
Point clog to cout instead of the default cerr. A unit of log is almost
always a line, so it is pointless to flush it after each write (stderr is
set not to buffer by default /see previous commit/) -- that would make
every '<<' operator a flush point (or, in C code, each of the multiple
console_out calls).
Set output buffering to "line" for stdout and "no" for stderr. This is
usually the case, but not always (e.g. MSYS, GUI console), so make it
explicit to be deterministic.