FFmpeg always returns bt470bg, even with GPUJPEG SPIFF JPEGs (only the
color_range can be either tv or pc, depending on whether the sender sets
the range as JPEG or MPEG - we always set MPEG for YCbCr).
Also alter the JPEG message.
The same assignment is done later in the get_av_pixfmt_details() call
(this is a relic from the time when the function was implemented
differently - 23ca8f37).
It can be used in place of other network-related headers, not just for
htonl and family.
+ compat for fd_t and INVALID_SOCKET (which had previously been in config_*.h)
GCC seems to need it (now). Note that #pragma unroll on the inner loop
doesn't seem to work.
This seems to slightly improve performance on AMD Ryzen 9 7900X;
elsewhere it seems to be the same. Combining 2 or 4 items in one
iteration gives similar performance, so picking 2; 8 is significantly worse.
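The unroll-factor choice above can be illustrated with a minimal sketch; the function and its purpose are hypothetical, not the actual conversion code - only the "2 items per iteration" structure is taken from the text.

```c
#include <stddef.h>
#include <stdint.h>

// hypothetical illustration: two elements combined per loop iteration,
// using two independent accumulators; the odd element is handled after
// the main loop
static uint64_t sum_u32(const uint32_t *v, size_t n)
{
    uint64_t a = 0, b = 0;
    size_t i = 0;
    for (; i + 2 <= n; i += 2) { // 2 items per iteration
        a += v[i];
        b += v[i + 1];
    }
    if (i < n) { // odd remainder
        a += v[i];
    }
    return a + b;
}
```

Two accumulators break the dependency chain between iterations, which is usually where the speedup of a small unroll factor comes from.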
Start the rewrite with coefficients not hard-coded in the macro. For the
beginning, the new implementation is used in pixfmt_conv.o. According to
the performance evaluation, it has no impact on performance
(`tools/convert benchmark`).
The [document] referenced in the header is far from strict in this
respect. The values that were claimed were those of the Nominal Video
Range. At the same time, the Preferred Min./Max. is significantly
lower/higher (16-235 vs 5-246 for 8 bit). This value can be understood
as a "soft" limit, while the Total Video Signal Range (1-254) is a hard
limit. Some decoders (FFmpeg HEVC) overshoot the nominal values anyway.
[document]:
https://tech.ebu.ch/docs/r/r103.pdf
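The three 8-bit ranges from the text can be summarized in a small sketch; the constant and function names are illustrative, not the actual codebase macros.

```c
// 8-bit luma ranges per the referenced EBU R 103 discussion above
// (names hypothetical):
enum {
    NOMINAL_MIN   = 16, NOMINAL_MAX   = 235, // Nominal Video Range ("soft")
    PREFERRED_MIN =  5, PREFERRED_MAX = 246, // Preferred Min./Max.
    TOTAL_MIN     =  1, TOTAL_MAX     = 254  // Total Video Signal Range (hard)
};

// clamp only to the hard limit - decoders (e.g. FFmpeg HEVC) may
// legitimately overshoot the nominal values
static int clamp_total_range(int v)
{
    return v < TOTAL_MIN ? TOTAL_MIN : v > TOTAL_MAX ? TOTAL_MAX : v;
}
```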
Mainly, the depth is now included in Y_ and CBCR_LIMIT - the previously
used denominator 255.0 matched only 8 bits.
Add (subtract) an epsilon of 0.5 when converting to integer to round the
value correctly.
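The rounding fix amounts to the following (a sketch; the function name is illustrative): a plain cast truncates toward zero, so adding 0.5 for positive values and subtracting it for negative ones yields round-to-nearest.

```c
// round-to-nearest via a 0.5 epsilon before the truncating cast
// (a plain (int)x would turn 1.7 into 1 and -1.7 into -1)
static int round_to_int(double x)
{
    return (int)(x >= 0 ? x + 0.5 : x - 0.5);
}
```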
- print the buffer size with a more human-readable SI prefix. The value
is repeated twice later, anyway.
- print the correct sysctl item on macOS (the text said
net.core.rmem_max, although the command later correctly used
net.inet.udp.recvspace)
Also use positional parameters for printf (more readable here). Early
return for _WIN32.
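A sketch of the positional-parameter idea (the function name and the exact message are illustrative): `%n$` lets one argument be referenced several times without passing it repeatedly. Note that `%n$` is POSIX, not ISO C, and is unavailable in the Windows CRT, which fits the early return for _WIN32.

```c
#include <stdio.h>

// %1$s refers to the first argument wherever it appears, so the sysctl
// name is passed only once even though it occurs twice in the output
static void format_recv_hint(char *buf, size_t len)
{
    snprintf(buf, len, "sysctl -w %1$s=%2$d # raises %1$s",
             "net.inet.udp.recvspace", 4194304);
}
```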
WORDS_BIGENDIAN is defined by config.h
Use __BYTE_ORDER__ defined by GNU compilers (POSIX 2024 further defines
the endian.h header, but it is not yet present in macOS /15/).
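The replacement boils down to the following pattern (a sketch, with a runtime cross-check added for illustration): `__BYTE_ORDER__` and `__ORDER_BIG_ENDIAN__` are predefined by GCC and Clang, so no autoconf-generated WORDS_BIGENDIAN is needed.

```c
// compile-time endianness via GNU-compiler predefined macros
static int is_big_endian(void)
{
#if defined __BYTE_ORDER__ && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
    return 1;
#else
    return 0;
#endif
}

// runtime check of the same property, for cross-verification
static int is_big_endian_runtime(void)
{
    const unsigned one = 1;
    return *(const unsigned char *)&one == 0;
}
```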
Improve MSG() in the way that LOG() is - check the log_level first and,
if the message would not be printed, just skip it. Previously the
eventual arguments were evaluated and log_msg() was also called
(although it exited immediately).
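A minimal sketch of the short-circuit idea, with hypothetical names (the real MSG()/log_msg() differ): testing the level inside the macro means that for a filtered-out message, the arguments are never evaluated and no function call is made.

```c
#include <stdarg.h>
#include <stdio.h>

static int log_level = 2; // current verbosity (illustrative)
static int eval_count;    // counts evaluations of expensive()

// stand-in for an argument with a side effect / nontrivial cost
static int expensive(void) { return ++eval_count; }

static void log_msg(int level, const char *fmt, ...)
{
    (void)level;
    va_list ap;
    va_start(ap, fmt);
    vfprintf(stderr, fmt, ap);
    va_end(ap);
}

// the level check lives in the macro, so suppressed messages cost only
// one integer comparison
#define MSG(level, ...) \
    do { if ((level) <= log_level) log_msg((level), __VA_ARGS__); } while (0)
```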
Instead of just aborting on an assert for `-t testcard:c=DVS10`, print
at least a user-friendly message.
+ in debug verbosity, print the same message in get_decoder_from_to()
if returning NULL (to be used in similar cases when the returned NULL is
poorly handled)
not through config*.h
+ use __BYTE_ORDER__ (defined by GNU compilers) instead of
WORDS_BIGENDIAN. POSIX 2024 standardizes endian.h, but it is not yet
present on macOS.
Added unintentionally with IWYU, but older versions of libavcodec
didn't have the header (and had it included directly in
libavcodec/avcodec.h, which is included later).
Until now, the dynamically selected port pair was printed only in
verbose mode. But this knowledge should also be useful to confirm the
ports where it is not otherwise clear, e.g. in server mode, or for the
receiver to see that 5004 is indeed bound.
It doesn't seem that R10k worked for anyone before 18th Sep (commit
3c9e2602), because bit shuffling is needed (r10k_to_sdl2()).
Also, the negative meaning (=no), which was added just in last week's
commit, can be replaced with `--param decoder-use-codec='!R10k'`.
But set the padding to the maximum of AV_INPUT_BUFFER_PADDING_SIZE
(if the header is included) and MAX_PADDING (both are currently 64).
Use __has_include() instead of HAVE_LIBAVCODEC_H.
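The described setup can be sketched as follows; PADDING is an illustrative name, not necessarily the actual macro. `__has_include()` (standardized in C23, long supported by GCC and Clang) replaces the autoconf-generated HAVE_LIBAVCODEC_H check.

```c
enum { MAX_PADDING = 64 }; // the module's own padding constant

// pull in AV_INPUT_BUFFER_PADDING_SIZE only if the header exists,
// without needing a configure-time check
#ifdef __has_include
#  if __has_include(<libavcodec/avcodec.h>)
#    include <libavcodec/avcodec.h>
#  endif
#endif

#ifdef AV_INPUT_BUFFER_PADDING_SIZE
// take the larger of the two (both are currently 64)
#  define PADDING (AV_INPUT_BUFFER_PADDING_SIZE > MAX_PADDING \
                   ? AV_INPUT_BUFFER_PADDING_SIZE : MAX_PADDING)
#else
#  define PADDING MAX_PADDING
#endif
```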
Create an inner loop with a fixed number of iterations (16). This allows
the compiler to unroll the inner loop and vectorize it (16 iterations of
4 bytes each is 512 bits, allowing the use of up to 512-bit instructions).
The eventual remainder (%16 != 0) is computed per pixel as it used to be.
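The loop structure described above looks roughly like this; the operation (inverting 4-byte pixels) is illustrative, not the actual conversion.

```c
#include <stddef.h>
#include <stdint.h>

// outer loop advances in blocks of 16 pixels; the inner loop has a
// fixed trip count, so the compiler can fully unroll and vectorize it
// (16 x 4-byte pixels = 512 bits)
static void invert_rgba(uint32_t *dst, const uint32_t *src, size_t n)
{
    size_t i = 0;
    for (; i + 16 <= n; i += 16) {
        for (size_t j = 0; j < 16; ++j) { // fixed 16 iterations
            dst[i + j] = ~src[i + j];
        }
    }
    for (; i < n; ++i) { // remainder (n % 16 != 0), per pixel
        dst[i] = ~src[i];
    }
}
```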