This enables MJPEG encoding for video sources, but in practice,
only SRT cameras are enabled. There's VA-API support for 4:2:0 only,
by means of converting to NV12; if you have 4:2:2 or 4:4:4, you will
fall back to libjpeg, which should handle that gracefully.
(If I actually had an SRT source doing 4:2:2, I might have added
more support, but it seems pretty narrow.)
Futatabi seemingly has some problems handling these files, but that
should be fixable.
Pass in the right VAAPI context when encoding MJPEG.
The context ID was typoed, and we used the config ID instead.
This seemingly worked with the older Intel drivers (they didn't care
about the context ID), but not with the newer ones.
When disconnecting a fake card and replacing it with an SRT card
(or, theoretically, vice versa), we'd delete the old frames assuming
the old pixel format, which would cause us to delete garbage data
and eventually (seemingly?) use deleted texture numbers, causing
GL errors and thus crashes.
FFmpeg can support SRT in VideoInput, but this goes beyond that;
the number of SRT inputs can be dynamic (they can fill any input
card slot), and they generally behave much more like regular input
cards than video inputs. SRT input is on by default (port 9710)
but can be disabled at runtime.
Due to licensing issues (e.g. Debian does not currently have a
suitable libsrt, as its libsrt links to OpenSSL), it is possible
to build without it.
This is video-only (no audio) and unsynchronized. It's intended mainly
as a way to pipe Nageru output into a videoconferencing application,
in these Covid-19 days.
Rename add_auto_white_balance() to add_white_balance().
“auto” white balance sounds like we are doing some kind of analysis
to find the correct white balance, which we're not, so rename before
release to reduce the chance of confusion.
When deciding which signal to connect, delay the mapping to cards.
This fixes an issue where scenes that get display() once at start
and then never afterwards wouldn't get updated properly when mappings
changed in the UI.
Stretch the ease length to get back into the right cadence.
Unless the speed change is very small, we can stretch the ease a bit
(from the default 200 ms into anything in the [0,2] second range)
such that we conveniently hit an original frame. This means that if
we go into a speed such as 100% or 200%, we've got a very high
likelihood of going into a locked cadence, with the associated
quality and performance benefits.
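As a rough illustration of the idea (a hypothetical Python sketch; the actual implementation is C++ and the details differ), stretching the ease so that it ends exactly on an original frame boundary could look like:

```python
def stretch_ease_length(default_ease_sec, frame_interval_sec, start_pts_sec,
                        min_ease_sec=0.0, max_ease_sec=2.0):
    """Pick an ease length close to the default such that the ease ends
    exactly on an original frame boundary (a multiple of the frame interval).
    All names and the rounding strategy here are illustrative assumptions."""
    desired_end = start_pts_sec + default_ease_sec
    # Snap the end point of the ease to the nearest original frame.
    snapped_end = round(desired_end / frame_interval_sec) * frame_interval_sec
    ease = snapped_end - start_pts_sec
    if min_ease_sec <= ease <= max_ease_sec:
        return ease
    return default_ease_sec  # Stretch would be too large; keep the default.
```

With a 50 fps source (20 ms frame interval), the 200 ms default usually only needs a small adjustment to land on a frame boundary.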
Whenever the speed changes, ease into it over the next 200 ms.
This causes a slight delay compared to the operator's wishes,
but that should hardly be visible, and it allows for somewhat
better behavior when we get very abrupt changes from the controller
(or the lock button suddenly is pressed) -- it's essentially
more of an interpolation. Even more importantly, it will allow us
to pull off a little trick to increase performance in the next patch
that would be somewhat more jerky without it.
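The ease itself is just an interpolation between the old and the new speed over the 200 ms window; a minimal sketch (assuming a linear ramp, which may not match the actual curve used):

```python
def eased_speed(old_speed, new_speed, t_since_change_sec, ease_length_sec=0.2):
    """Interpolate from the old to the new playback speed over the ease
    window; after the window, use the new speed directly. Linear ramp
    is an assumption for illustration."""
    if t_since_change_sec >= ease_length_sec:
        return new_speed
    frac = t_since_change_sec / ease_length_sec
    return old_speed + (new_speed - old_speed) * frac
```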
Change Futatabi frames to be cached as textures instead of in system memory.
The JPEGs are now decoded into PBO bounce buffers, which saves a lot of CPU
time (copying is asynchronous, and done by the GPU -- plus we save a copy
into a staging buffer).
Similarly, keeping the cache in textures allows the driver (if it wants!)
to keep it in VRAM, saving repeated uploading if the same frame is used
multiple times.
CPU usage is down from 1.05 to 0.60 cores on my machine, when not playing.
More importantly, the 99th-percentile player queue status is much
better.
Fix MJPEG white balance information when VA-API is in use.
The JPEG headers were cached (per resolution) and never invalidated,
causing wrong/outdated information to be sent.
I can't really remember why I wanted to cache these, but I suppose
I had a reason, so I'm a bit reluctant to just kill the cache.
Hopefully, the white balance won't change often enough, and these
objects are so small, that we won't need invalidation; we can just
let it grow until program exit.
Fix a deadlock in Futatabi when using MIDI devices.
If something in the UI wanted to update a light on the MIDI device
at the exact same time the operator pressed a button on said device,
we could get a deadlock. The problem was that MIDIDevice::handle_event()
would lock the MIDIDevice and then go to tell MIDIMapper that the
note was coming (which needs a lock on the MIDIMapper), while in the
other thread, MIDIMapper::refresh_lights() would first lock the MIDIMapper
and then call MIDIDevice::update_lights() with the new set of lights
(which needs a lock on the MIDIDevice). This is a classic lock ordering
issue.
The solution is to simply make MIDIDevice::handle_event() not lock the
MIDIDevice for anything that calls MIDIMapper; it doesn't actually modify
any state in MIDIDevice, except when we hotplug new devices (and that never
calls MIDIMapper).
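The corrected locking discipline, sketched in Python (class and method names mirror the commit text; the real code is C++, and the bodies here are simplified stand-ins):

```python
import threading

class MIDIMapper:
    def __init__(self):
        self.lock = threading.Lock()
        self.device = None
        self.received = []

    def note_on_received(self, note):
        with self.lock:  # Mapper lock protects mapper state.
            self.received.append(note)

    def refresh_lights(self):
        with self.lock:
            # Holding the mapper lock across update_lights() is now safe,
            # because handle_event() no longer takes the device lock before
            # calling into the mapper, so there is a single lock order:
            # mapper -> device.
            self.device.update_lights({1, 2})

class MIDIDevice:
    def __init__(self, mapper):
        self.lock = threading.Lock()
        self.mapper = mapper
        self.lights = set()

    def handle_event(self, note):
        # The fix: do NOT take self.lock here. This path does not touch
        # MIDIDevice state, and taking the lock would invert the order
        # against refresh_lights() -> update_lights().
        self.mapper.note_on_received(note)

    def update_lights(self, lights):
        with self.lock:  # Device lock protects device state only.
            self.lights = set(lights)
```

Running handle_event() and refresh_lights() concurrently no longer deadlocks, since only one thread ever holds both locks at once.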
Make it so that auto white balance is stored per (physical) card, not per signal.
This fixes an issue where auto white balance would be completely off if
any signals were remapped out of the default. It means you no longer can
duplicate a signal and have different white balance on it, but that seems
narrow enough that one could use manual white balance for that (I can't
really imagine what the use case would be).
In Futatabi, make it possible to set custom source labels.
An operator was consistently confused by having 3 on a left camera
and 4 on a right camera, and wanted L3 and R4 labels, so here goes :-)
There's no auto-import from Nageru at this point; it needs to be set
using --source-label NUM:LABEL (or -l NUM:LABEL) from the command line.
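A minimal sketch of parsing such an argument (hypothetical helper; the actual option handling in Futatabi may differ):

```python
def parse_source_label(arg):
    """Parse a --source-label NUM:LABEL argument into (card_index, label).
    Splits on the first colon only, so labels may themselves contain colons."""
    num, sep, label = arg.partition(':')
    if not sep or not num.isdigit():
        raise ValueError('invalid source label spec: %r' % arg)
    return int(num), label
```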
Currently, it can get (not set) number of buses and channel names,
get/set fader volume, and get/set mute. This should already be fairly
useful for most applications.
Make it possible to siphon out a single MJPEG stream.
The URL for this is /feeds/N.mp4 (although the .mp4 suffix will be ignored
in practice), where N is the card index (starting from zero). You are allowed to
use a feed that is not part of the “regular” multicam MJPEG input.
This can be used for remote debugging, recording of a given card with wget,
streaming a specific card as an aux, and probably other things.
When there are two different video streams involved, they will often
have different white points, so we can't just echo the Exif data back;
we need to apply it when converting, and then get back a result in
standard sRGB. It's not entirely correct since we still run with
crushed whites/black, but it's good enough.
This completes Nageru/Futatabi white balance round trip support.
Support auto white balance (i.e., not controlled by the theme).
This is added as a regular (optional) white balance effect, but it automatically
intercepts set_wb() calls and stores them locally. (You can still set neutral_color
manually, but it's probably sort of pointless.)
This simplifies the themes somewhat and allows in some cases for fewer different
scene instantiations (since image inputs don't get white balance), but more
importantly, it will allow exporting the white balance setting in the MJPEG
export.
Fix some jerkiness when playing back with no interpolation.
When hitting a frame exactly, we'd choose [pts_of_previous_frame, pts]
as our candidate range, and always pick the lower. This would make us
jerk around a lot, which would be a particular problem with audio.
Fix by taking a two-pronged approach: First, if we hit a frame exactly,
just send [pts,pts]. Second, activate snapping even when interpolation
is off.
We still have a problem when playing back audio in the case of dropped
video frames (we'll output the same audio twice), but better audio
handling is somewhat outside the scope of this commit.
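The two-pronged candidate selection can be sketched like this (hypothetical Python; the pts values and container types are illustrative):

```python
def candidate_pts_range(pts, original_ptses):
    """Return the candidate range of original frames for an output timestamp.
    If we hit an original frame exactly, return [pts, pts] so playback does
    not jerk back to the previous frame; otherwise, return the surrounding
    pair, which snapping can then choose between."""
    if pts in original_ptses:
        return (pts, pts)
    prev = max((p for p in original_ptses if p < pts), default=None)
    nxt = min((p for p in original_ptses if p > pts), default=None)
    return (prev, nxt)
```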
Support disabling optional effects if a given other effect is _enabled_.
There are some restrictions (see the comments), but this is generally
useful if two effects are mutually exclusive, e.g., an overlay that can
be at one of many different points in the chain.
Make it possible for the theme to override the status line.
This is done by declaring a function format_status_line, which receives
the text that would normally be there (disk space left), as well as the
length of the current recording file in seconds. It can then return
whatever it would like.
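The dispatch could look roughly like this (a Python sketch; the real themes are Lua, so this only illustrates the override logic, not the actual binding):

```python
def status_line(theme, disk_space_text, file_length_seconds):
    """If the theme defines format_status_line, let it override the default
    status line; otherwise, fall back to the stock disk-space text."""
    fn = getattr(theme, 'format_status_line', None)
    if callable(fn):
        return fn(disk_space_text, file_length_seconds)
    return disk_space_text
```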
My own code, but inspired by a C++ patch by Alex Thomazo in the Breizhcamp
repository (which did it by hardcoding a different status line in C++).
Make it possible to call set_channel_name() for live and preview.
The use case for this is if you want to copy the channel name to
preview or similar. Does not affect the legacy channel_name()
callback (it is still guaranteed never to get 0 or 1).
Probably doesn't affect the analyzer; I haven't tested.
Add disable_if_always_disabled() to Block objects.
This allows the theme to specify that a given effect only makes sense
if another effect is enabled; e.g. a crop that only makes sense if
immediately followed by a resize. This can cut down the number of
instantiations in some cases.
Also change so that 0 is no longer always the canonical choice;
if disabling a block is a possibility, that is. In situations with
things disabling each other transitively, this could reduce the
number of instantiations further.