**NOTE**: Nageru 1.9.0 made significant improvements to themes
and how scenes work. If you use an older version, you may want
to look at `the 1.8.6 documentation <https://nageru.sesse.net/doc-1.8.6/>`_;
themes written for older versions still work without modification in
1.9.0, but are not documented here, and you are advised to change
to the new interfaces, as they are equally powerful and much simpler.

In Nageru, most of the business logic around how your stream
ends up looking is governed by the **theme**, much like how a
theme works on a blog or a CMS. Most importantly, the theme
governs the look and feel through the different scenes and
transitions between them, such that an event gets a consistent
visual language even if the operators differ. Instead of requiring the user
to modify Nageru's C++ core, themes are written in
`Lua <https://www.lua.org/>`_, a lightweight scripting language.

Themes contain a lot of logic, and writing one can seem a bit
daunting at first. However, most events will be happy just tweaking
one of the themes included with Nageru, and the operator (if different
from the visual designer) will not need to worry at all.

Nageru ships with two themes, a default full-featured two-camera
setup with side-by-side for e.g. conferences, and a minimal one
that is easier for new users to understand.


Introduction to scenes
----------------------

Anything that's shown on the stream, or on one of the preview displays,
is created by a **Movit chain**, instantiated by a Nageru **scene**.
`Movit <https://movit.sesse.net/>`_
is a library for high-quality, high-performance video filters,
and Nageru's themes can use a simplified version of Movit's API where
most of the low-level details are abstracted away.

Every frame, the theme chooses a **scene** and a set of parameters for it,
based on what it thinks the picture should look like. Every scene
consists of a set of *inputs* (which can be either live video streams
or static pictures) and then a set of operators or *effects* that combine
or modify them. Movit compiles these down to a set of shaders
that run at high speed on the GPU; the theme doesn't see a pixel,
and thus, Lua's performance (even though good for its language class,
especially if you use `LuaJIT <http://luajit.org/>`_) will not matter much.

The simplest possible scene takes in only an input and sends it on
to the display (the output of the last added node is always sent to
the screen, and in this case, that would be the input)::

  local scene = Scene.new(16, 9)  -- Aspect ratio.
  local input = scene:add_input()
  input:display(0)  -- First input card. Can be changed whenever you want.

The live scene is always processed in full resolution (typically 720p)
and then scaled down for the GUI. Preview scenes are rendered in exactly
the resolution required, although of course, intermediate steps could be
larger.


Setting parameters, and the get_scene entry point
-------------------------------------------------

Many effects support parameters that can vary per-frame. Imagine,
for instance, a theme where you want to support two inputs and fading between
them. This means you will need a scene that takes two inputs and
produces a mix of them; Movit's *MixEffect* is exactly what you want here::

  local scene = Scene.new(16, 9)
  local input0 = scene:add_input()
  local input1 = scene:add_input()
  local mix_effect = scene:add_effect(MixEffect.new(), input0, input1)

Every frame, Nageru will call your **get_scene** function, which has
the following signature::

  function get_scene(num, t, width, height, signals)

“width” and “height” are what you'd expect (the output resolution).
t contains the current stream time in seconds. “num” contains 0
for the live view, 1 for the preview view, and 2, 3, 4, … for each
of the individual stream previews. “signals” contains a bit of
information about each input signal (see :ref:`signal-info`).

In return, get_scene should return a scene. However, before you do that,
you can set the parameters **strength_first** and **strength_second**
on it; for instance like this::

  function get_scene(num, t, width, height, signals)
    -- Assume num is 0 here; you will need to handle the other cases.
    local fade_progress = 0.0
    if t >= 1.0 and t <= 2.0 then  -- Between 1 and 2 seconds; do the fade.
      fade_progress = t - 1.0
    end

    mix_effect:set_float("strength_first", 1.0 - fade_progress)
    mix_effect:set_float("strength_second", fade_progress)
    return scene
  end

Note that in the case where fade_progress is 0.0 or 1.0 (you are just
showing one of the inputs), you are wasting GPU power by using the
fade scene; you should just return a simpler one-input scene instead.

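
A minimal sketch of that idea, assuming a one-input scene (here called
simple_scene) and the MixEffect scene from above have both been built at
startup; the state variable fade_start is illustrative, not part of
Nageru's API::

  -- Return the cheap single-input scene except while a fade is running.
  -- fade_start is set by the theme when a fade begins.
  function get_scene(num, t, width, height, signals)
    if fade_start ~= nil and t >= fade_start and t < fade_start + 1.0 then
      local fade_progress = t - fade_start
      mix_effect:set_float("strength_first", 1.0 - fade_progress)
      mix_effect:set_float("strength_second", fade_progress)
      return scene
    end
    return simple_scene  -- No fade in progress; avoid the MixEffect entirely.
  end
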
The get_scene function is the backbone of every Nageru theme.
As we shall see, however, it may end up dealing with a fair bit
of complexity as the theme grows.


Scene variants and effect alternatives
--------------------------------------

Setting up and finalizing a scene is relatively fast, but it still
takes a measurable amount of CPU time, since it needs to create an OpenGL
shader and have it optimized by the driver; 50–100 ms is not uncommon.
Given that 60 fps means each frame is 16.7 ms, you cannot create new scenes in
get_scene; every scene you could be using must be created at program start,
when your theme is initialized.

For any nontrivial theme, there are a lot of possible scenes. Let's
return to the case of the MixEffect scene from the previous section.
Now let us assume that we need to deal with signals that come in at
1080p instead of the native 720p. In this case, we will want a high-quality
scaler before mixing; *ResampleEffect* provides one::

  local scene = Scene.new(16, 9)

  local input0 = scene:add_input()
  local input0_scaled = scene:add_optional_effect(ResampleEffect.new())  -- Implicitly uses input0.
  input0_scaled:set_int("width", 1280)  -- Could also be set in get_scene().
  input0_scaled:set_int("height", 720)
  input0_scaled:enable()  -- Enable or disable as needed.

  local input1 = scene:add_input()
  local input1_scaled = ...  -- Similarly here and the rest.
  input1_scaled:enable_if(some_variable)  -- Convenience form for enable() or disable() depending on some_variable.

  -- The rest is unchanged.

Clearly, there are four options here; both inputs could be unscaled,
input0 could be scaled but not input1, input1 could be scaled but not input0,
or both could be scaled. That means four scenes. However, you don't need to
care about this; behind the scenes (no pun intended), Nageru will make all
four versions for you and choose the right one as you call enable() or
disable() on each effect.

Beyond simple on/off switches, an effect can have many *alternatives*,
by passing in an array of effects. For instance, it is usually pointless to use
the high-quality resampling provided by ResampleEffect for the on-screen
outputs; we can use *ResizeEffect* (a simpler scaling algorithm provided
directly by the GPU) instead. The scaling is set up like this::

  local input0 = scene:add_input()
  local input0_scaled = scene:add_effect({ResampleEffect.new(), ResizeEffect.new()})  -- Implicitly uses input0.
  input0_scaled:set_int("width", 1280)  -- Just like before.
  input0_scaled:set_int("height", 720)

  -- Pick one in get_scene() like this:
  input0_scaled:choose(ResizeEffect)

  -- Or by numerical index:
  input0_scaled:choose(1)  -- Chooses ResizeEffect.

Note that add_effect returns its input for convenience. All alternatives must
have the same number of inputs, with an exception for IdentityEffect, which can
coexist with an effect requiring any number of inputs (if selected, the IdentityEffect
just passes its first input unchanged).

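
One common use of alternatives is to pick the scaler based on which output
is being rendered. A sketch, assuming the input0_scaled effect with the two
alternatives from above::

  -- Spend GPU time on high-quality scaling only for the live output;
  -- the previews get the cheap GPU scaler.
  function get_scene(num, t, width, height, signals)
    if num == 0 then
      input0_scaled:choose(ResampleEffect)
    else
      input0_scaled:choose(ResizeEffect)
    end
    return scene
  end
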
add_optional_effect() is actually just a wrapper around add_effect() with
IdentityEffect as the other alternative, and disable() is a convenience version of
choose(IdentityEffect).

In fact, more versions are created than you might immediately expect.
In particular, the output format for the live output and all previews is
different (Y'CbCr versus RGBA), which is also handled transparently for you.
Also, the inputs could be interlaced, or they could be images, or videos (see
:ref:`images` and :doc:`video`), creating many more options. Again, you
generally don't need to care about this; Movit will make sure each and every one
of those generated scenes runs optimally on your GPU. However, if the
combinatorial explosion increases startup time beyond what you are comfortable
with, see :ref:`locking`.


Transitions
-----------

As we have seen, the theme is king in determining what is shown
on screen. Ultimately, however, it wants to delegate that power
to the operator. The abstraction the theme presents to the user
is in the form of **transitions**. Every frame, Nageru calls the
following Lua entry point::

  function get_transitions(t)

(t is again the stream time, but it is provided only for convenience;
not all themes would want to use it.) get_transitions must return an array of
(currently exactly) three strings, of which any can be blank. These three
strings are used as labels on one button each, and whenever the operator clicks
one of them, Nageru calls this function in the theme::

  function transition_clicked(num, t)

where “num” is 0, 1 or 2, and t is again the stream time.

It is expected that the theme will use this and its internal state
to provide the abstraction (or perhaps illusion) of transitions to
the user. For instance, a theme will know that the live stream is
currently showing input 0 and the preview stream is showing input 1.
In this case, it can use two of the buttons to offer “Cut” or “Fade”
transitions to the user. If the user clicks the cut button, the theme
can simply swap the live and preview inputs, which will take immediate
effect on the next frame. However, if the user clicks the fade button,
state will need to be set up so that the next time get_scene() runs,
it will return the scene with the MixEffect, until it determines
the transition is over and changes back to showing only one input
(presumably the new one).

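
The cut/fade logic described above can be sketched like this (the state
variables and labels are illustrative only, not part of Nageru's API)::

  local live_input, preview_input = 0, 1  -- Theme state.
  local fade_start = nil

  function get_transitions(t)
    return {"Cut", "", "Fade"}
  end

  function transition_clicked(num, t)
    if num == 0 then
      -- Cut: swap live and preview; takes effect on the next frame.
      live_input, preview_input = preview_input, live_input
    elseif num == 2 then
      -- Fade: remember when it started, so that get_scene() can
      -- return the MixEffect scene until the fade is over.
      fade_start = t
    end
  end
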

Channels
--------

In addition to the live and preview outputs, a theme can declare
as many individual **channels** as it wants. These are shown at the
bottom of the screen, and are intended for the operator to see
what they can put up on the preview (in a sense, a preview of the
preview).

The number of channels is determined by calling this function
once at the start of the program::

  Nageru.set_num_channels(2)

0 is allowed, but doesn't make a lot of sense. Live and preview come in
addition to this.

Each channel will have a label on it; you set it by calling::

  Nageru.set_channel_name(2, "Side-by-side")

Here, the channel is 2, 3, 4, etc.—by default, 0 is called “Live” and
1 is called “Preview”, and you probably don't need to change this.

Each channel has its own scene, starting from number 2 for the first one
(since 0 is live and 1 is preview). The simplest form is simply a direct copy
of an input, and most themes will include one such channel for each input.
(Below, we will see that there are more types of channels, however.)
Since the mapping between the channel UI element and inputs is so typical,
Nageru allows the theme to simply declare that a channel corresponds to
a given signal::

  Nageru.set_channel_signal(2, 0)
  Nageru.set_channel_signal(3, 1)

Here, channels 2 and 3 (the first two) correspond directly to inputs
0 and 1, respectively. The others don't, and default to -1. The effect on the
UI is that the user can right-click on the channel and configure the input
that way; in fact, this is currently the only way to configure them.

Furthermore, channels can have a color, which is governed by Nageru calling
a function in your theme::

  function channel_color(channel)

The theme should return a CSS color (e.g. “#ff0000”, or “cyan”) for each
channel when asked; it can vary from frame to frame. A typical use is to mark
the currently playing input as red, or the preview as green.

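
For instance (a sketch; live_channel and preview_channel are assumed theme
state, not part of Nageru's API)::

  function channel_color(channel)
    if channel == live_channel then
      return "red"     -- Currently on the live output.
    elseif channel == preview_channel then
      return "green"   -- Currently on the preview.
    else
      return "gray"
    end
  end
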
And finally, there are two entry points related to white balance::

  Nageru.set_supports_wb(2, true)
  function set_wb(channel, red, green, blue)

If the first function is called with a true value (at the start of the theme),
the channel will get a “Set WB” button next to it, which will activate a color
picker. When the user picks a color (ostensibly with a gray point), the second
function will be called (with the RGB values in linear light—not sRGB!),
and the theme can then use it to adjust the white balance for that channel.
The typical way to do this is to have a *WhiteBalanceEffect* on each input
and set its “neutral_color” parameter using the “set_vec3” function.

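
A sketch of that approach, assuming the theme keeps one WhiteBalanceEffect
per channel in a table built during scene setup (the wb_effects table is
illustrative)::

  local wb_effects = {}  -- Filled in during setup, one effect per channel.

  function set_wb(channel, red, green, blue)
    local effect = wb_effects[channel]
    if effect ~= nil then
      -- red, green and blue are in linear light.
      effect:set_vec3("neutral_color", red, green, blue)
    end
  end
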

More complicated channels: Composites
-------------------------------------

Direct inputs are not the only kind of channels possible; again, any scene
can be output. The most common case is different kinds of **composites**,
typically showing side-by-side or something similar. The typical UI presented
to the user in this case is that you create a channel that consists of the
finished setup; you use ResampleEffect (or ResizeEffect for preview scenes),
*PaddingEffect* (to place the rectangles on the screen, one of them with a
transparent border) and then *OverlayEffect* (to get both on the screen at
the same time). Optionally, you can have a background image at the bottom,
and perhaps a logo at the top. This allows the operator to select a pre-made
composite, and then transition to and from it from a single camera view (or even
between different composites) as needed.

Transitions involving composites tend to be the most complicated parts of the theme
logic, but also make for the most distinct parts of your visual look.

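
As an illustration, one input of a side-by-side scene might be set up roughly
like this; the sizes and positions are made up for the example, not taken
from any shipped theme::

  local scene = Scene.new(16, 9)
  local bg = scene:add_input(ImageInput.new("bg.jpeg"))  -- Background image.

  local input0 = scene:add_input()
  local input0_scaled = scene:add_effect(ResampleEffect.new(), input0)
  input0_scaled:set_int("width", 848)
  input0_scaled:set_int("height", 477)

  -- Place the scaled input at a given position on a transparent 720p canvas.
  local input0_padded = scene:add_effect(PaddingEffect.new(), input0_scaled)
  input0_padded:set_int("width", 1280)
  input0_padded:set_int("height", 720)
  input0_padded:set_float("left", 30.0)
  input0_padded:set_float("top", 120.0)

  -- Composite the padded input on top of the background; a second input
  -- would get its own resample/padding chain and another OverlayEffect.
  scene:add_effect(OverlayEffect.new(), bg, input0_padded)
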

.. _images:

Image inputs
------------

In addition to video inputs, Nageru supports static **image inputs**.
These work pretty much the same way as live video inputs. Recall that
you chose what input to display like this::

  input:display(0)

Image inputs are instead created by instantiating *ImageInput* and
displaying that::

  bg = ImageInput.new("bg.jpeg")  -- Once, at the start of the program.
  input:display(bg)  -- In get_scene().

All image types supported by FFmpeg are supported; if you give it a video,
only the first frame is used. The file is checked once every second,
so if you update the file on-disk, it will be available in Nageru without
a restart. (If the file contains an error, the update will be ignored.)
This allows you to e.g. have simple message overlays that you can change
without restarting Nageru.


.. _locking:

Locking down alternatives
-------------------------

In some cases, Nageru may be building in alternatives to a scene that you
don't really need, resulting in combinatorial explosion. (If the number of
instances is getting high, you will get a warning when finalizing the scene.)
For instance, in some cases, you know that a given transition scene will never
be used for previews, just live. In this case, you can replace the call to
scene:finalize() with::

  scene:finalize(false)

In this case, you guarantee that the scene will never be returned when
get_scene() is called with the number 0. (Similarly, you can use true
to *only* use it for the live channel.)

Similarly, inputs can hold four different input types, but in some scenes,
you may always use them with a specific one, e.g. an image “bg_img”. In this case,
you may add the input with a specific type right away::

  scene:add_input(bg_img)

Similarly, for a live input, you can do::

  scene:add_input(0)

You can still use display() to change the input, but it needs to be of
the same *type* as the one you gave to add_input().

Finally, you can specify that some effects only make sense together, reducing
the number of possibilities further. For instance, you may have an optional
crop effect followed by a resample, where the resample is only enabled if the
crop is. If so, you can do this::

  resample_effect:always_disable_if_disabled(crop_effect)

For more advanced exclusions, you may choose to split up the scenes into several
distinct ones that you manage yourself; indeed, before Nageru 1.9.0, that was
the only option. At some point, however, you may choose to simply accept the
added startup time and a bit of extra RAM cost; ease of use and flexibility often
trump such concerns.


Theme menus
-----------

Complicated themes, especially those dealing with :doc:`HTML inputs <html>`,
may have needs for user control that go beyond those of transition buttons.
(An obvious example may be “reload the HTML file”.) For this reason,
themes can also set simple *theme menus*, which are always visible
no matter what inputs are chosen.

If a theme chooses to set a theme menu, it will be available on the
main menu bar under “Theme”; if not, it will be hidden. You can set
the menu at startup or at any other point, using a simple series of
labels and function references::

  function modify_aspect()
    -- Your code goes here.
  end

  function reload_html()
    -- Your code goes here.
  end

  ThemeMenu.set(
    { "Change &aspect", modify_aspect },
    { "&Reload overlay", reload_html }
  )

When the user chooses a menu entry, the given Lua function will
automatically be called. There are no arguments nor return values.

Menus can contain submenus, by giving an array instead of a function::

  ThemeMenu.set(
    { "Overlay", {
      { "Version A", select_overlay_a },
      { "Version B", select_overlay_b }
    } },
    { "&Reload overlay", reload_html }
  )

They can also be checkable, or have checkboxes, by adding a third
array element containing flags for that::

  ThemeMenu.set(
    { "Enable overlay", enable_overlay, Nageru.CHECKED },   -- Currently checked.
    { "Enable crashing", make_unstable, Nageru.CHECKABLE }  -- Can be checked, but isn't currently.
  )

When such an option is selected, you probably want to rebuild the menu to
reflect the new state.

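
A sketch of such a rebuild, assuming the menu is registered via the same
call as above, with overlay_enabled as illustrative theme state::

  local overlay_enabled = true

  function toggle_overlay()
    overlay_enabled = not overlay_enabled
    -- ... actually enable or disable the overlay here ...
    rebuild_menu()
  end

  function rebuild_menu()
    local flag = overlay_enabled and Nageru.CHECKED or Nageru.CHECKABLE
    ThemeMenu.set(
      { "Enable overlay", toggle_overlay, flag }
    )
  end
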
There is currently no support for input boxes, sliders,
or the like. However, do note that the theme is written in unrestricted
Lua, so you can use e.g. `lua-http <https://github.com/daurnimator/lua-http>`_
to listen for external connections and accept more complicated inputs
that way.


Signal information queries
--------------------------

As previously mentioned, get_scene() takes in a “signals” parameter
that you can query for information about each signal (numbered from 0;
live and preview are channels, not signals), like its current resolution
or frame rate:

* get_frame_width(signal), get_frame_height(signal): Width and height of the last frame.
* get_width(signal), get_height(signal): Width and height of the last *field*
  (the field height is half of the frame height for an interlaced signal).
* get_interlaced(signal): Whether the last frame was interlaced.
* get_has_signal(signal): Whether there is a valid input signal.
* get_is_connected(signal): Whether there is even a card connected
  to this signal (USB cards can be swapped in or out); if not,
  you will get a stream of single-colored frames.
* get_frame_rate_nom(signal), get_frame_rate_den(signal): The frame rate
  of the last frame, as a rational (e.g. 60/1, or 60000/1001 for 59.94).
* get_last_subtitle(signal): See :ref:`subtitle-ingest`.
* get_human_readable_resolution(signal): The resolution and frame rate in
  human-readable form (e.g. “1080i59.94”), suitable for e.g. stream titles.
  Note that Nageru does not follow the EBU recommendation of using
  frame rate even for interlaced signals (e.g. “1080i25” instead of “1080i50”),
  since it is little-used and confusing to most users.

You can use this either for display purposes, or for choosing the right
effect alternatives. In particular, you may want to disable scaling if
the frame is already of the correct resolution.
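
For instance, the resolution queries can drive the effect alternatives from
earlier sections. A sketch, assuming an optional scaler input0_scaled fed by
signal 0::

  function get_scene(num, t, width, height, signals)
    -- Skip the scaler entirely when input 0 already matches the output.
    local needs_scale = signals:get_frame_width(0) ~= width or
                        signals:get_frame_height(0) ~= height
    input0_scaled:enable_if(needs_scale)
    return scene
  end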