It is not fully conformant, but the wiki description is rather vague
and it is e.g. impossible to represent a "superblack" color if we
assume 0 as the base black.
- show usage with "help"
- print an error if setvbuf fails (but do not return an error -- there
may be some platform problem, but the setting is always performed, so
returning an error would unconditionally prevent UltraGrid from running)
- if a usage error occurs (or help is requested), exit UltraGrid
setvbuf() should be called on a stream prior to any operation on the
stream. Previously it was called too late -- even after the
configuration summary had been printed to stdout.
The value 0 is mentioned in the setvbuf(3) manual page, and indeed
glibc implements setlinebuf() as:
_IO_setvbuf (stream, NULL, 1, 0);
However, apart from that mention in the manual page, this extension
does not seem to be documented anywhere else (it is not clear whether
it is valid with _IOFBF), and it is not widespread outside glibc,
since [1] forbids it. Also, using _IOLBF with size 0 behaves the same
way as _IOLBF with BUFSIZ (buffers BUFSIZ bytes).
[1] https://www.ibm.com/docs/en/i/7.1?topic=functions-setvbuf-control-buffering
Replace the original factor of 8 with 2.5. As a drawback, the
resulting stream is 13% smaller than the requested bitrate, but the
maximal frame is indeed at most 2.5x bigger than the average (measured
on the NewZealand short).
This used to cause some artifacts when the factor was smaller than 8,
but that no longer seems to be the case. Anyway, this commit also adds
the parameter `lavc-rc-buffer-size-factor` in case the factor needs to
be increased.
Impact on PSNR (NZ @ 20 Mbps) -- previous: avg. 52.27, new: 49.95 (but
with a smaller actual bitrate; using the same bitrate yields 51.36).
Do not iconify when the window is fullscreen and loses focus -- this
is particularly annoying when using 2 displays (but it can have a
rationale in a single-display setup when the platform keeps full-screen
windows always on top).
Although in theory it may be beneficial to limit NAL unit sizes to
approximately the size of a packet, and indeed it was in synthetic
tests, real-world use seems to exhibit the opposite -- potential
artifacts when exceeding 32 slices (some huge pictures). Also, the
resiliency was somehow worse.
The user may still pass the parameter explicitly to see whether the
behavior improves or worsens: "-t libavcodec:encoder=libx264:slice-max-size=1200"
(The main motivation was to make the video capture visually more
apparent when having cap+disp+audio. This is now solved by the generic
indicator, so use that.)
Point clog to cout instead of the default cerr. A unit of log is
almost always a line, so it is pointless to flush it after each write
(stderr is set not to buffer by default /see previous commit/) --
those flushes would make every '<<' operator a flush point (or, in C
code, multiple calls of console_out).
Set output buffering to line-buffered for stdout and unbuffered for
stderr. This is usually the case, but not always (e.g. MSYS, GUI
console), so make it explicit to be deterministic.
If the window has approximately the same or higher resolution than the
screen, reverting from fullscreen caused the window to fail to regain
its windowed state, falling back to full-screen.
To avoid that, glfwSetWindowMonitor() needs to be called with {x,y}pos
set to GLFW_DONT_CARE instead of 0.
The control flow was a bit incorrect -- pause should be handled before
the current_frame update.
Anyway, the whole thing is still a bit ugly (namely the pop_frame
logic plus the locking). Delaying the frame pop was a kind of
optimization -- not sure whether it is worth it.
This slightly modifies commit c25a362c, which introduced a regression
when sending & receiving video but no audio -- the TX port was set to
the base (or +2) while RX was kept at 0, resulting in port inequality,
and thus Jumbo frames were not chosen.
The list of modes may sometimes be excessive, especially when there
are multiple devices. But the user may usually wish to select the size
manually with V4L2, so keep the modes visible by default but allow
hiding them (e.g. to obtain solely the list of devices).
Verify that the captured format matches the format explicitly
requested by the user.
+ print capture format properties only once (was duplicated)
+ removed needless VIDIOC_G_FMT -- VIDIOC_S_FMT already adjusts the
format, so this is unneeded and may be confusing