This is useful for machines that don't have Quick Sync,
but where you want to have an archival copy on disk
in higher quality than what you streamed out.
This is useful primarily if you want Kaeru to rewrap the stream into
Metacube (for Cubemap) and do nothing else with it. Only H.264
is supported for now, since everything else assumes that.
Currently, we only really support --http-mux=mpegts; other muxes seem
to have issues.
This is seemingly especially important when we have input format autodetect
and PAL input rates; it gives us one 59.94 fps frame, then a delay as the
card autodetects and resyncs (might be as much as 30–50 ms; not entirely sure),
and then a steady stream of 50 fps frames. This then causes us to overestimate
the jitter by a lot until we get more than 1000 frames and can reject that
very first event as the outlier it is.
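As a rough illustration of the effect (a hypothetical, simplified sketch, not Nageru's actual jitter estimator): if the estimate is taken as a high percentile of the observed inter-frame intervals, a single huge startup interval dominates the result until enough normal samples have arrived to push it outside the percentile.

    #include <algorithm>
    #include <vector>

    // Hypothetical percentile-based jitter estimate; estimate_jitter() and the
    // 99.9th-percentile cutoff are illustrative, not taken from the Nageru source.
    double estimate_jitter(std::vector<double> intervals_s)
    {
        if (intervals_s.empty()) return 0.0;
        std::sort(intervals_s.begin(), intervals_s.end());

        // With a 99.9th-percentile cutoff, one 30-50 ms startup outlier stays in
        // the estimate until more than 1000 regular frames have been seen.
        size_t idx = static_cast<size_t>(intervals_s.size() * 0.999);
        if (idx >= intervals_s.size()) idx = intervals_s.size() - 1;
        return intervals_s[idx];
    }
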
Newer CEF hard-codes that icudtl.dat _must_ be in the same directory
as libcef.so, but the release tarballs have a different structure,
so this fails. This should be fine on installs, but it won't work
for running Nageru from the build directory. Search for the library
in the local directory instead, where we have a symlink. This makes
the build rpath point to ., which makes sure icudtl.dat is picked up
from the other symlink.
This is more relevant now that having multiple SRT cameras can lead to
decoding lots of videos at the same time. It would be possible to support
other mechanisms (e.g. VDPAU) in FFmpeg depending on what FFmpeg is
built against, but it's a bit cumbersome to do, so this is VA-API only for now.
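For reference, a minimal sketch of how VA-API decoding can be requested through FFmpeg's hwdevice API (assumed usage for illustration, not the actual Nageru code):

    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavutil/hwcontext.h>
    }

    // Create a VA-API hardware device; nullptr means the default DRM render node.
    AVBufferRef *create_vaapi_device()
    {
        AVBufferRef *hw_device_ctx = nullptr;
        if (av_hwdevice_ctx_create(&hw_device_ctx, AV_HWDEVICE_TYPE_VAAPI,
                                   nullptr, nullptr, 0) < 0) {
            return nullptr;  // No VA-API; the caller falls back to software decoding.
        }
        return hw_device_ctx;
    }

    // Open a decoder with the VA-API device attached; decoded frames then come out
    // as AV_PIX_FMT_VAAPI surfaces instead of being decoded on the CPU.
    AVCodecContext *open_vaapi_decoder(const AVCodecParameters *par, AVBufferRef *hw_device_ctx)
    {
        const AVCodec *codec = avcodec_find_decoder(par->codec_id);
        if (codec == nullptr) return nullptr;
        AVCodecContext *ctx = avcodec_alloc_context3(codec);
        avcodec_parameters_to_context(ctx, par);
        ctx->hw_device_ctx = av_buffer_ref(hw_device_ctx);
        if (avcodec_open2(ctx, codec, nullptr) < 0) {
            avcodec_free_context(&ctx);
            return nullptr;
        }
        return ctx;
    }
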
When hot-unplugging capture cards, actually allow making them inactive.
This was an oversight; removed cards would always be replaced by fake ones.
However, this exposed a few tricky issues that needed to be dealt with,
such as the master card being able to go away.
5 fps is pretty crappy, but evidently, 100 ms happens with SRT cards
all the time even when nothing is wrong. Perhaps a situation with
B-frames at 30 fps? I haven't really checked.
This is useful both for discovering the feature, and also should you be
on a hostile network where someone suddenly connects to you when you
don't want them to. Existing connections will remain just fine.
This is a cleanup that should really have been done when we added
hotplug in the first place, but it's becoming even more relevant
now that SRT “cards” are supported.
Basically, empty slots can now be filled with nothing instead of
fake capture cards (which generate frames and take a little bit
of CPU time); we only instantiate fake capture cards if the slot is
below some certain minimum index or has been used by the theme.
(Cards that are unused are now “inactive” and will generally not
show up in the UI.) This means that the --num-cards parameter is now
largely irrelevant; it is only there to guarantee a minimum
number of fake cards, for testing. Most users will be happy just using the
default of 2. There's also a new --max-num-cards in the unlikely case
that you want to leave some cards unused, e.g. for other applications
on the same machine.
This also unifies handling of regular capture cards, FFmpeg “cards”
and CEF “cards”; they are indexed pretty much the same way.
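The slot-filling rule described above boils down to something like this (hypothetical helper, names invented for illustration):

    #include <set>

    // A slot gets a fake capture card only if it is below the --num-cards minimum
    // or the theme has asked for it; otherwise it stays inactive (no frames, no CPU).
    bool should_have_fake_card(unsigned card_index, unsigned min_num_cards,
                               const std::set<unsigned> &cards_used_by_theme)
    {
        return card_index < min_num_cards ||
               cards_used_by_theme.count(card_index) != 0;
    }
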
We initialized VA-API and enumerated configs etc. in three different
ways for H.264 encoding (Nageru), MJPEG encoding (Nageru) and MJPEG
decoding (Futatabi). Unify them into one shared function, to reduce
the amount of duplication.
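The shared bring-up could look roughly like this (a sketch under assumed naming; the real helper may differ):

    #include <fcntl.h>
    #include <unistd.h>
    #include <va/va.h>
    #include <va/va_drm.h>
    #include <memory>
    #include <string>

    // One shared VA-API bring-up for all three users (H.264 encode, MJPEG encode,
    // MJPEG decode): open the DRM node, get a display, initialize it.
    struct VADisplayHandle {
        int drm_fd = -1;
        VADisplay va_dpy = nullptr;
        bool initialized = false;
        ~VADisplayHandle()
        {
            if (initialized) vaTerminate(va_dpy);
            if (drm_fd != -1) close(drm_fd);
        }
    };

    std::unique_ptr<VADisplayHandle> try_open_va(const std::string &device_path)
    {
        auto handle = std::make_unique<VADisplayHandle>();
        handle->drm_fd = open(device_path.c_str(), O_RDWR);
        if (handle->drm_fd == -1) return nullptr;
        handle->va_dpy = vaGetDisplayDRM(handle->drm_fd);
        if (handle->va_dpy == nullptr) return nullptr;
        int major, minor;
        if (vaInitialize(handle->va_dpy, &major, &minor) != VA_STATUS_SUCCESS) return nullptr;
        handle->initialized = true;
        return handle;
    }
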
This enables MJPEG encoding for video sources, but in practice,
only SRT cameras are enabled. There's VA-API support for 4:2:0 only,
by means of converting to NV12; if you have 4:2:2 or 4:4:4, you will
fall back to libjpeg, which should handle it gracefully.
(If I actually had an SRT source doing 4:2:2, I might have added
more support, but it seems pretty narrow.)
Futatabi seemingly has some problems handling these files, but that
should be fixable.
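The encode-path decision described above amounts to roughly this (hypothetical helper, for illustration only):

    // 4:2:0 can go through VA-API after conversion to NV12; anything else
    // (4:2:2, 4:4:4) takes the libjpeg software path.
    enum class MJPEGPath { VAAPI_NV12, LIBJPEG };

    MJPEGPath choose_mjpeg_path(unsigned chroma_h_subsampling, unsigned chroma_v_subsampling)
    {
        if (chroma_h_subsampling == 2 && chroma_v_subsampling == 2) {
            return MJPEGPath::VAAPI_NV12;
        }
        return MJPEGPath::LIBJPEG;
    }
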
Pass in the right VA-API context when encoding MJPEG.
The context ID was typoed, and we used the config ID instead.
This seemingly worked with the older Intel drivers (they didn't care
about the context ID), but not with the newer ones.
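The mixup is easy to make because both IDs are plain integers; schematically (not the actual encoder code, and with error checking omitted):

    #include <va/va.h>

    VAStatus begin_mjpeg_encode(VADisplay va_dpy, VASurfaceID surface, int width, int height)
    {
        // vaCreateConfig() and vaCreateContext() both return numeric IDs, so passing
        // config_id where a context ID is expected compiles fine but is wrong.
        VAConfigID config_id;
        vaCreateConfig(va_dpy, VAProfileJPEGBaseline, VAEntrypointEncPicture,
                       nullptr, 0, &config_id);

        VAContextID context_id;
        vaCreateContext(va_dpy, config_id, width, height, VA_PROGRESSIVE,
                        &surface, 1, &context_id);

        // Correct: encode against the context. Older Intel drivers happened to
        // accept the config ID here; newer ones do not.
        return vaBeginPicture(va_dpy, context_id, surface);
    }
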
When disconnecting a fake card and replacing it with an SRT card
(or, theoretically, vice versa), we'd delete the old frames assuming
the new pixel format, which would cause us to delete garbage data
and eventually (seemingly?) use deleted texture numbers, causing
GL errors and thus crashes.
FFmpeg can support SRT in VideoInput, but this goes beyond that;
the number of SRT inputs can be dynamic (they can fill any input
card slot), and they generally behave much more like regular input
cards than video inputs. SRT input is on by default (port 9710)
but can be disabled at runtime.
Due to licensing issues (e.g. Debian does not currently have a
suitable libsrt, as its libsrt links to OpenSSL), it is possible
to build without it.
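A minimal sketch of the listening side with libsrt (assumed usage; the actual integration with the card slots is more involved):

    #include <srt/srt.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <cstring>

    // Listen for incoming SRT callers on the default port (9710). Each accepted
    // connection would then be wired up to a free input card slot.
    SRTSOCKET listen_for_srt(int port)
    {
        srt_startup();
        SRTSOCKET sock = srt_create_socket();

        sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(port);
        addr.sin_addr.s_addr = INADDR_ANY;

        srt_bind(sock, reinterpret_cast<sockaddr *>(&addr), sizeof(addr));
        srt_listen(sock, /*backlog=*/10);
        return sock;  // srt_accept() on this socket yields one socket per camera.
    }
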
This is video-only (no audio) and unsynchronized. It's intended mainly
for a way to pipe Nageru output into a videoconferencing application,
in these Covid-19 days.
Rename add_auto_white_balance() to add_white_balance().
“auto” white balance sounds like we are doing some kind of analysis
to find the correct white balance, which we're not, so rename before
release to reduce the chance of confusion.
When deciding which signal to connect, delay the mapping to cards.
This fixes an issue where scenes that have display() called once at start
and then never again wouldn't get updated properly when mappings
changed in the UI.
Stretch the ease length to get back into the right cadence.
Unless the speed change is very small, we can stretch the ease a bit
(from the default 200 ms into anything in the [0,2] second range)
such that we conveniently hit an original frame. This means that if
we go into a speed such as 100% or 200%, we've got a very high
likelihood of going into a locked cadence, with the associated
quality and performance benefits.
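A simplified sketch of the idea, assuming a linear speed ramp during the ease (the actual easing curve and bookkeeping may differ):

    #include <cmath>

    // Pick an ease length in [0, 2] seconds, as close to the default 0.2 s as
    // possible, such that the ease ends exactly on a source ("original") frame.
    double stretch_ease_length(double source_pos, double old_speed, double new_speed,
                               double source_frame_duration, double default_length = 0.2)
    {
        double avg_speed = 0.5 * (old_speed + new_speed);  // linear ramp assumption
        if (avg_speed <= 0.0) return default_length;

        // Where the default-length ease would end, snapped to the nearest frame.
        double end_pos = source_pos + default_length * avg_speed;
        double snapped = std::round(end_pos / source_frame_duration) * source_frame_duration;

        double stretched = (snapped - source_pos) / avg_speed;
        return (stretched >= 0.0 && stretched <= 2.0) ? stretched : default_length;
    }
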
Whenever the speed changes, ease into it over the next 200 ms.
This causes a slight delay compared to the operator's wishes,
but that should hardly be visible, and it allows for somewhat
better behavior when we get very abrupt changes from the controller
(or the lock button suddenly is pressed) -- it's essentially
more of an interpolation. Even more importantly, it will allow us
to make a little trick to increase performance in the next patch
that would be somewhat more jerky without it.
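In its simplest form, the easing is just an interpolation between the old and new speed over the ease window (a sketch; Futatabi's actual curve may not be linear):

    // Interpolate from the old speed to the new one over ease_length seconds
    // (200 ms by default) instead of switching instantly.
    double eased_speed(double old_speed, double new_speed,
                       double t_since_change, double ease_length = 0.2)
    {
        if (t_since_change >= ease_length) return new_speed;
        double x = t_since_change / ease_length;  // 0..1 through the ease
        return old_speed + (new_speed - old_speed) * x;
    }
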
Change Futatabi frames to be cached as textures instead of in system memory.
The JPEGs are now decoded into PBO bounce buffers, which saves a lot of CPU
time (the copying is asynchronous and done by the GPU -- plus we avoid
an extra copy into a staging buffer).
Similarly, keeping the cache in textures allows the driver (if it wants!)
to keep it in VRAM, saving repeated uploading if the same frame is used
multiple times.
CPU usage is down from 1.05 to 0.60 cores on my machine, when not playing.
More importantly, the 99th-percentile player queue status is very much
better.
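Schematically, the upload path looks something like this (a simplified sketch; texture formats, buffer reuse and synchronization in the real code differ):

    #include <epoxy/gl.h>
    #include <cstdint>
    #include <functional>

    // Decode one JPEG plane straight into a mapped PBO (the bounce buffer), then
    // let the driver copy from the PBO into the texture asynchronously.
    GLuint decode_plane_into_texture(int width, int height,
                                     const std::function<void(uint8_t *dest)> &decode_plane)
    {
        GLuint pbo, tex;

        glGenBuffers(1, &pbo);
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
        glBufferData(GL_PIXEL_UNPACK_BUFFER, width * height, nullptr, GL_STREAM_DRAW);

        // The decoder (e.g. libjpeg) writes its output directly here, so there is
        // no separate staging copy on the CPU.
        uint8_t *mapped = static_cast<uint8_t *>(
            glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY));
        decode_plane(mapped);
        glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);

        // With a PBO bound, the data argument is an offset into it; the copy into
        // the texture is done by the GPU/driver, and the texture can stay in VRAM.
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height, 0,
                     GL_RED, GL_UNSIGNED_BYTE, nullptr);

        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
        glDeleteBuffers(1, &pbo);
        return tex;
    }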