When deciding which signal to connect, delay the mapping to cards.
This fixes an issue where scenes that had display() called once at start
and then never again would not be updated properly when mappings
changed in the UI.
Stretch the ease length to get back into the right cadence.
Unless the speed change is very small, we can stretch the ease a bit
(from the default 200 ms into anything in the [0,2] second range)
such that we conveniently hit an original frame. This means that if
we go into a speed such as 100% or 200%, we've got a very high
likelihood of going into a locked cadence, with the associated
quality and performance benefits.
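As a rough sketch of the idea (the constants match the commit, but the linear-ramp assumption and all names here are mine, not necessarily what the actual code does): with a linear ease from the old speed to the new one, the source position advances by the average speed times the ease length, so we can stretch the 200 ms default to the nearest length that makes that advance a whole number of original frames.

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical sketch: pick an ease length near the 200 ms default so that
// the source position advances by a whole number of original frames.
// Assumes a linear speed ramp, so the average speed is the midpoint.
double choose_ease_length_sec(double old_speed, double new_speed,
                              double frame_duration_sec)
{
    const double default_ease = 0.2;  // 200 ms
    double avg_speed = 0.5 * (old_speed + new_speed);

    // Source-time distance covered by the default ease length.
    double distance = avg_speed * default_ease;

    // Round to the nearest whole number of source frames (at least one).
    double frames = std::max(1.0, std::round(distance / frame_duration_sec));

    // Stretch (or shrink) the ease so we land exactly on a frame,
    // but stay within the [0, 2] second range; if clamping kicks in,
    // the cadence lock is simply not achieved.
    double ease = frames * frame_duration_sec / avg_speed;
    return std::clamp(ease, 0.0, 2.0);
}
```

For example, going from 100% to 150% speed with 50 fps source material (20 ms frames) would stretch the ease from 200 ms to 208 ms, so that exactly 13 source frames pass during the ease.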
Whenever the speed changes, ease into it over the next 200 ms.
This causes a slight delay compared to the operator's wishes,
but that should hardly be visible, and it allows for somewhat
better behavior when we get very abrupt changes from the controller
(or the lock button suddenly is pressed) -- it's essentially
more of an interpolation. Even more importantly, it will allow us
to make a little trick to increase performance in the next patch
that would be somewhat more jerky without it.
Change Futatabi frames to be cached as textures instead of in system memory.
The JPEGs are now decoded into PBO bounce buffers, which saves a lot of CPU
time (copying is asynchronous, and done by the GPU -- plus we save a copy
into a staging buffer).
Similarly, keeping the cache in textures allows the driver (if it wants!)
to keep it in VRAM, saving repeated uploading if the same frame is used
multiple times.
CPU usage is down from 1.05 to 0.60 cores on my machine, when not playing.
More importantly, the 99th-percentile player queue status is dramatically
better.
Fix MJPEG white balance information when VA-API is in use.
The JPEG headers were cached (per resolution) and never invalidated,
causing wrong/outdated information to be sent.
I can't really remember why I wanted to cache these, but I suppose
I had a reason, so I'm a bit reluctant to just kill the cache.
Hopefully, the white balance won't change often enough, and these
objects are so small, that we won't need invalidation; we can just
let it grow until program exit.
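The grow-only approach amounts to making white balance part of the cache key, so stale entries can never be served; a minimal sketch (all names hypothetical, not the actual Nageru code):

```cpp
#include <map>
#include <string>
#include <tuple>
#include <utility>

// Hypothetical sketch of the fix: instead of invalidating cached JPEG
// headers when white balance changes, make white balance part of the
// cache key. Entries are small, so the map simply grows until exit.
struct WhiteBalance {
    int r1000, g1000, b1000;  // gains scaled by 1000, to keep the key integral
    bool operator<(const WhiteBalance &o) const {
        return std::tie(r1000, g1000, b1000) < std::tie(o.r1000, o.g1000, o.b1000);
    }
};
using HeaderKey = std::pair<std::pair<int, int>, WhiteBalance>;  // (resolution, wb)

std::map<HeaderKey, std::string> header_cache;

const std::string &get_jpeg_header(int width, int height, WhiteBalance wb)
{
    HeaderKey key{{width, height}, wb};
    auto it = header_cache.find(key);
    if (it == header_cache.end()) {
        // In the real code, this would serialize an actual JPEG header
        // containing the given white balance in its Exif data.
        it = header_cache.emplace(key, "serialized header placeholder").first;
    }
    return it->second;
}
```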
Fix a deadlock in Futatabi when using MIDI devices.
If something in the UI wanted to update a light on the MIDI device
at the exact same time the operator pressed a button on said device,
we could get a deadlock. The problem was that MIDIDevice::handle_event()
would lock the MIDIDevice and then go to tell MIDIMapper that the
note was coming (which needs a lock on the MIDIMapper), while in the
other thread, MIDIMapper::refresh_lights() would first lock the MIDIMapper
and then call MIDIDevice::update_lights() with the new set of lights
(which needs a lock on the MIDIDevice). This is a classic lock ordering
issue.
The solution is to simply make MIDIDevice::handle_event() not lock the
MIDIDevice for anything that calls MIDIMapper; it doesn't actually modify
any state in MIDIDevice, except when we hotplug new devices (and that never
calls MIDIMapper).
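A minimal sketch of the pattern (class and method names follow the commit, but the bodies are invented for illustration): refresh_lights() holds the mapper lock while taking the device lock, so handle_event() must never hold the device lock while calling into the mapper.

```cpp
#include <cstddef>
#include <mutex>
#include <vector>

class MIDIMapper;

class MIDIDevice {
public:
    void handle_event(MIDIMapper &mapper, int note);
    void update_lights(const std::vector<int> &new_lights) {
        std::lock_guard<std::mutex> lock(mu);  // device lock
        lights = new_lights;
    }
    size_t num_lights() {
        std::lock_guard<std::mutex> lock(mu);
        return lights.size();
    }
private:
    std::mutex mu;
    std::vector<int> lights;
};

class MIDIMapper {
public:
    void note_on_received(int note) {
        std::lock_guard<std::mutex> lock(mu);  // mapper lock
        last_note = note;
    }
    void refresh_lights(MIDIDevice &device) {
        std::lock_guard<std::mutex> lock(mu);  // mapper lock held...
        device.update_lights({last_note});     // ...while device lock is taken
    }
private:
    std::mutex mu;
    int last_note = -1;
};

void MIDIDevice::handle_event(MIDIMapper &mapper, int note)
{
    // The fix: no device lock taken on this path. handle_event() does not
    // modify MIDIDevice state here, so calling into the mapper can never
    // deadlock against refresh_lights() -> update_lights().
    mapper.note_on_received(note);
}
```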
Make it so that auto white balance is stored per (physical) card, not per signal.
This fixes an issue where auto white balance would be completely off if
any signals were remapped out of the default. It means you can no longer
duplicate a signal and have different white balance on it, but that seems
narrow enough that one could use manual white balance for that (I can't
really imagine what the use case would be).
In Futatabi, make it possible to set custom source labels.
An operator was consistently confused by having 3 on a left camera
and 4 on a right camera, and wanted L3 and R4 labels, so here goes :-)
There's no auto-import from Nageru at this point; it needs to be set
using --source-label NUM:LABEL (or -l NUM:LABEL) from the command line.
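Parsing the NUM:LABEL argument could look roughly like this (a hypothetical helper, not the actual Futatabi code):

```cpp
#include <map>
#include <stdexcept>
#include <string>

// Hypothetical sketch: parse a "NUM:LABEL" argument, as given to
// --source-label / -l, into a card-index -> label map.
void parse_source_label(const std::string &arg,
                        std::map<unsigned, std::string> *labels)
{
    size_t colon = arg.find(':');
    if (colon == std::string::npos) {
        throw std::invalid_argument("--source-label must be on the form NUM:LABEL");
    }
    unsigned card_idx = std::stoul(arg.substr(0, colon));  // throws if not a number
    (*labels)[card_idx] = arg.substr(colon + 1);
}
```

For the operator in question, this would be -l 3:L3 -l 4:R4.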
Currently, it can get (not set) number of buses and channel names,
get/set fader volume, and get/set mute. This should already be fairly
useful for most applications.
Make it possible to siphon out a single MJPEG stream.
The URL for this is /feeds/N.mp4 (although the .mp4 suffix will be ignored
in practice), where N is the card index (starting from zero). You are allowed to
use a feed that is not part of the “regular” multicam MJPEG input.
This can be used for remote debugging, recording of a given card with wget,
streaming a specific card as an aux, and probably other things.
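Extracting the card index from such a URL could be sketched like this (hypothetical helper, not the actual request-routing code):

```cpp
#include <cstdio>
#include <string>

// Hypothetical sketch: extract the card index N from a /feeds/N.mp4 URL.
// Returns -1 if the path is not a feed URL. Anything after the number
// (such as the .mp4 suffix) is ignored, matching the described behavior.
int parse_feed_url(const std::string &path)
{
    const std::string prefix = "/feeds/";
    if (path.compare(0, prefix.size(), prefix) != 0) return -1;
    int card_idx = -1;
    if (sscanf(path.c_str() + prefix.size(), "%d", &card_idx) != 1) return -1;
    return card_idx;
}
```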
When there are two different video streams involved, they will often
have different white points, so we can't just echo the Exif data back;
we need to apply it when converting, and then get back a result in
standard sRGB. It's not entirely correct since we still run with
crushed whites/black, but it's good enough.
This completes Nageru/Futatabi white balance round trip support.
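Conceptually, applying the white balance amounts to dividing out the stream's neutral color so that it maps to neutral gray in the output; a heavily simplified per-channel version (the real pipeline runs on the GPU, in linear light, and names here are mine):

```cpp
// Heavily simplified sketch of applying white balance during conversion:
// divide each channel by the stream's neutral color, so the result comes
// out in standard sRGB with gray where the source had its white point.
// The real code cannot fully recover crushed whites/blacks, as noted.
struct RGB { float r, g, b; };

RGB apply_white_balance(RGB in, RGB neutral)
{
    auto clamp01 = [](float x) { return x < 0.0f ? 0.0f : (x > 1.0f ? 1.0f : x); };
    return RGB{ clamp01(in.r / neutral.r),
                clamp01(in.g / neutral.g),
                clamp01(in.b / neutral.b) };
}
```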
Support auto white balance (i.e., not controlled by the theme).
This is added as a regular (optional) white balance effect, but it automatically
intercepts set_wb() calls and stores the value locally. (You can still set
neutral_color manually, but it's probably sort of pointless.)
This simplifies the themes somewhat and allows in some cases for fewer different
scene instantiations (since image inputs don't get white balance), but more
importantly, it will allow exporting the white balance setting in the MJPEG
export.
Fix some jerkiness when playing back with no interpolation.
When hitting a frame exactly, we'd choose [pts_of_previous_frame, pts]
as our candidate range, and always pick the lower. This would make us
jerk around a lot, which would be a particular problem with audio.
Fix by taking a two-pronged approach: First, if we hit a frame exactly,
just send [pts,pts]. Second, activate snapping even when interpolation
is off.
We still have a problem when playing back audio in the case of dropped
video frames (we'll output the same audio twice), but better audio
handling is somewhat outside the scope of this commit.
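The two-pronged fix can be sketched as follows (simplified, hypothetical signatures; the real player deals with fractional positions and queue state):

```cpp
#include <cstdint>
#include <utility>

// Hypothetical sketch of the candidate-range fix: if the desired pts hits
// a source frame exactly, return [pts, pts] so there is no ambiguity about
// which frame to show; otherwise, return the surrounding frames.
std::pair<int64_t, int64_t> candidate_range(int64_t desired_pts,
                                            int64_t prev_frame_pts,
                                            int64_t next_frame_pts)
{
    if (desired_pts == prev_frame_pts || desired_pts == next_frame_pts) {
        return {desired_pts, desired_pts};
    }
    return {prev_frame_pts, next_frame_pts};
}

// Snapping, now active even with interpolation off: pick whichever end of
// the range is closer, instead of always picking the lower end (the old
// behavior, which caused the jerkiness).
int64_t snap(int64_t desired_pts, std::pair<int64_t, int64_t> range)
{
    return (desired_pts - range.first <= range.second - desired_pts)
        ? range.first : range.second;
}
```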
Support disabling optional effects if a given other effect is _enabled_.
There are some restrictions (see the comments), but this is generally
useful if two effects are mutually exclusive, e.g., an overlay that can
be at one of many different points in the chain.
Make it possible for the theme to override the status line.
This is done by declaring a function format_status_line, which receives
the text that would normally be there (disk space left), as well as the
length of the current recording file in seconds. It can then return
whatever it would like.
My own code, but inspired by a C++ patch by Alex Thomazo in the Breizhcamp
repository (which did it by hardcoding a different status line in C++).
Make it possible to call set_channel_name() for live and preview.
The use case for this is if you want to copy the channel name to
preview or similar. Does not affect the legacy channel_name()
callback (it is still guaranteed never to get 0 or 1).
Probably doesn't affect the analyzer; I haven't tested.
Add disable_if_always_disabled() to Block objects.
This allows the theme to specify that a given effect only makes sense
if another effect is enabled; e.g. a crop that only makes sense if
immediately followed by a resize. This can cut down the number of
instantiations in some cases.
Also change things so that 0 is no longer always the canonical choice
if disabling a block is a possibility. In situations with
things disabling each other transitively, this could reduce the
number of instantiations further.
This allows you to prune away entire sections of the chain; the typical
case is if you have an OverlayEffect(a, b) and want to disable that.
In the disabled versions of the chain, the OverlayEffect will be replaced
with an IdentityEffect that passes through only a, leaving the entire
subgraph under b uninstantiated.
For complex themes, building the multitude of chains one might need
has become very bothersome, with tricky Lua scripting and non-typesafe
multidimensional tables.
To alleviate this somewhat, we introduce a concept called Scenes.
A Scene is pretty much an EffectChain with a better name and significantly
more functionality. In particular, scenes don't consist of single Effects;
they consist of blocks, which can hold any number of alternatives for
Effects. On finalize, we will instantiate all possible variants of
EffectChains behind the scenes, like the Lua code used to have to do itself,
but this is transparent to the theme.
In particular, this means that inputs are much more flexible. Instead of
having to make separate chains for regular inputs, deinterlaced inputs,
video inputs and CEF inputs, you now just make an input, and can connect
any type to it at runtime (or “display”, as it's now called). Output is also
flexible; by default, any scene will get both Y'CbCr and RGBA versions
compiled. (In both cases, you can make non-flexible versions to reduce
the number of different instantiations. This can be a good idea in
complex chains.)
This also does away with the concept of the prepare function for a chain;
any effect settings are snapshotted when you return from get_scene() (the new
name for get_chain(), obviously), so you don't need to worry about capturing
anything or get threading issues like you used to.
All existing themes will continue to work unmodified for the time being,
but it is strongly recommended to migrate from EffectChain to Scene.