These have seemingly been no-ops for a long time (we already called
avcodec_free_context(), which also closes the codec), but with
FFmpeg 7.0, they emit explicit deprecation warnings.
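For reference, the change boils down to dropping the close call
(generic sketch with a stand-in variable name, not the exact code):

    // Deprecated in FFmpeg 7.0:
    //   avcodec_close(ctx);
    //
    // avcodec_free_context() already closes the codec, so this
    // one call is all that is needed:
    avcodec_free_context(&ctx);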
Clang rightfully pointed out that VLAs are a non-standard extension.
We can do just fine with three small heap allocations per software-decoded
JPEG (the main path is VA-API anyway).
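Roughly the change in question, assuming one buffer per plane
(the names here are illustrative, not the actual ones):

    // Before: non-standard VLAs, e.g. uint8_t y_buf[width * height];
    // After: three small heap allocations instead.
    std::unique_ptr<uint8_t[]> y(new uint8_t[width * height]);
    std::unique_ptr<uint8_t[]> cb(new uint8_t[chroma_width * chroma_height]);
    std::unique_ptr<uint8_t[]> cr(new uint8_t[chroma_width * chroma_height]);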
If a frame needs more than FRAME_SIZE bytes, simply reallocate it;
we're allowed to sleep in this path.
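Schematically, with hypothetical member names:

    if (frame->capacity < needed_bytes) {
            // Growing the buffer may take a while, but sleeping
            // is fine in this path.
            frame->data.reset(new uint8_t[needed_bytes]);
            frame->capacity = needed_bytes;
    }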
This finally allows us to accept 2160p streams (assuming your
GPU is fast enough, of course), at the cost of having to make
an OpenGL context for each decoder thread (because allocating
frames in GPU memory requires creating a PBO and related GL state).
Unfortunately, we still cannot decode 2160p DeckLink frames
because there's no OpenGL context there, but it's a good start.
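The per-thread setup is roughly this (generic OpenGL sketch; the
real code is more involved):

    // Needs a current GL context on this thread.
    GLuint pbo;
    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, frame_size, nullptr, GL_STREAM_DRAW);
    uint8_t *ptr = (uint8_t *)glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
    // ... decode into *ptr, then glUnmapBuffer() and upload ...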
Decouple the frame creation (which is now a static function)
from the act of pushing it on the freelist. This will make it easier
to resize the frame dynamically later.
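Sketch of the split (names are illustrative):

    static Frame *create_frame(size_t size);  // allocation only
    void release_frame(Frame *frame);         // pushes onto the freelist

    // Startup can then do e.g.:
    for (int i = 0; i < NUM_FRAMES; ++i) {
            release_frame(create_frame(FRAME_SIZE));
    }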
Requiring click-to-play for videos doesn't make sense in the
CEF context; we don't even support interactivity, and most pages
are going to be from trusted sources anyway. (Besides, any muted
videos already autoplay just fine, and our CEF source is always
muted.)
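This amounts to flipping the usual Chromium switch when CEF sets up
its command line, something like (sketch):

    void OnBeforeCommandLineProcessing(
            const CefString &process_type,
            CefRefPtr<CefCommandLine> command_line) override
    {
            // Allow all videos to autoplay, gesture or not.
            command_line->AppendSwitchWithValue(
                    "autoplay-policy", "no-user-gesture-required");
    }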
Use AVFrame::duration instead of AVFrame::pkt_duration.
This field was introduced in FFmpeg 5.2, and the old one was
promptly deprecated; the difference seems to be effectively nil?
Use an #ifdef so that we retain bookworm (FFmpeg 5.1) compatibility.
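The check looks something like this (the exact libavutil version
cutoff is from memory, so verify against doc/APIchanges):

    #if LIBAVUTIL_VERSION_INT >= AV_VERSION_INT(57, 30, 100)
            int64_t duration = frame->duration;      // FFmpeg 5.2 and newer
    #else
            int64_t duration = frame->pkt_duration;  // bookworm's FFmpeg 5.1
    #endif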
Work around a bogus compilation warning (it claimed the generated
copy constructor of PendingDecode could be using fade_alpha
uninitialized during push_back).
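One way to quiet that kind of warning is an explicit default member
initializer (sketch; the type of fade_alpha is assumed here):

    struct PendingDecode {
            // ...
            float fade_alpha = 0.0f;  // silences the bogus
                                      // maybe-uninitialized warning
    };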
It's a shame that we now need to call malloc for each and every
allocation of a common 104-byte struct, but evidently, that's the way
FFmpeg wants it. So move to heap allocation everywhere, silencing
a barrage of deprecation warnings during build.
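Assuming the struct in question is AVPacket (the size fits, and its
stack use is exactly what FFmpeg deprecated), the pattern is:

    // Before: AVPacket pkt; on the stack (but sizeof(AVPacket)
    // is no longer part of the public ABI).
    AVPacket *pkt = av_packet_alloc();
    // ... fill and use pkt ...
    av_packet_free(&pkt);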
When muxing in the background, write the header in the background, too.
This is especially important with SRT output, which can hang pretty much
forever on connect. Note that we still buffer forever (which we probably
shouldn't), and we don't exit cleanly if SRT is not connected.
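Schematically, and glossing over the buffering and error handling
(write_queued_packets is a stand-in name):

    mux_thread = std::thread([this] {
            // May block for a long time on SRT connect.
            avformat_write_header(avctx, nullptr);
            write_queued_packets();
    });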
This is useful for push, and for bad networks (e.g. 4G).
You can in theory push to another Nageru instance, but the most
logical targets would be either Cubemap (which unfortunately needs
to run FFmpeg to demux) or something like YouTube, which is now
working on SRT ingest.
Note that for YouTube SRT ingest to work, someone from YouTube needs to
set a special flag on your account for now.
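For reference, an SRT output is just another FFmpeg URL (host and
port here are made up):

    AVFormatContext *avctx = nullptr;
    avformat_alloc_output_context2(&avctx, nullptr, "mpegts",
                                   "srt://example.com:9710?mode=caller");
    avio_open(&avctx->pb, avctx->url, AVIO_FLAG_WRITE);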
Specify font family explicitly for the stream timecode.
Seemingly the default is now a font where not all digits are equally wide,
and that makes the text jump constantly back and forth. Noto Sans should
be pretty widely supported, and if not, we'll probably fall back to
whatever we had before.
Improve selection of software formats on hwaccel fallback.
We assumed the first format was a software format, whereas in practice,
it would now seemingly be vdpau or cuda, causing multiple trips
through the selection function.
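The fix is essentially to skip anything marked as a hardware format
instead of trusting the ordering (sketch):

    static AVPixelFormat get_software_format(const AVPixelFormat *fmts)
    {
            for (const AVPixelFormat *fmt = fmts; *fmt != AV_PIX_FMT_NONE; ++fmt) {
                    const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(*fmt);
                    if (!(desc->flags & AV_PIX_FMT_FLAG_HWACCEL)) {
                            return *fmt;  // first true software format
                    }
            }
            return AV_PIX_FMT_NONE;
    }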
Fix crashes when the master clock goes faster than 60 Hz.
This could happen particularly when using a video as the master card;
e.g. Larix Broadcaster sometimes sends SRT with 60.06 fps or similar.
We solve it by completely rewriting how DTS is calculated when doing
Quick Sync encoding, which ended up being much simpler than what we
had before (and probably a lot more common).
This essentially removes the meaning of MAX_FPS; we could now easily
do e.g. 144 fps if needed.
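Very roughly, the new scheme just offsets DTS a fixed number of
frames behind PTS (hypothetical sketch; the constant has to cover
B-frame reordering):

    int64_t dts = pts - reorder_delay_frames * frame_duration;

rather than ticking a separate 60 Hz-based clock.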
CBR isn't really ready yet; it requires low-delay mode, which limits
SVT-AV1 to three cores and also is pretty bad for quality in general.
(Also, its CBR isn't really CBR yet; see SVT-AV1 bug 1959.)
Anything less seemingly triggers a bug in the ALSA PulseAudio
plugin when used against PipeWire (audio just never starts playing).
It would be nice to have less latency here in general, but with the
current design based on large resampling queues, this buffer is not
going to be the dominant source of latency anyway.
This was mostly so that people could sharpen the input if they wanted to
(even though unsharp mask is not the best sharpener). BlurEffect was added
mainly because it felt wrong that one could only use a compound effect and not
the underlying one.