**NOTE**: Nageru 1.9.0 made significant improvements to themes
and how scenes work. If you use an older version, you may want
to look at `the 1.8.6 documentation <https://nageru.sesse.net/doc-1.8.6/>`_;
themes written for older versions still work without modification in
1.9.0, but are not documented here, and you are advised to change
to use the new interfaces, as they are equally powerful and much simpler
to work with.
In Nageru, most of the business logic around how your stream
ends up looking is governed by the **theme**, much like how a
theme works on a blog or a CMS. Most importantly, the theme
governs the look and feel through the different scenes and
transitions between them, such that an event gets a consistent
visual language even if the operators differ. Instead of requiring the user
to modify Nageru's C++ core, themes are written in
`Lua <https://www.lua.org/>`_, a lightweight scripting language.
Themes contain a lot of logic, and writing one can seem a bit
daunting at first. However, most events will be happy just tweaking
one of the themes included with Nageru, and the operator (if different
from the visual designer) will not need to worry at all.
Nageru ships with two themes: a default, full-featured two-camera
setup with side-by-side for e.g. conferences, and a minimal one
that is easier for new users to understand.
Introduction to scenes
----------------------
Anything that's shown on the stream, or on one of the preview displays,
is created by a **Movit chain**, instantiated by a Nageru **scene**.
`Movit <https://movit.sesse.net/>`_
is a library for high-quality, high-performance video filters,
and Nageru's themes can use a simplified version of Movit's API where
most of the low-level details are abstracted away.
Every frame, the theme chooses a **scene** and a set of parameters for it,
based on what it thinks the picture should look like. Every scene
consists of a set of *inputs* (which can be either live video streams
or static pictures) and a set of operators or *effects* that combine
or modify them. Movit compiles these down to a set of shaders
that run at high speed on the GPU; the theme never sees a pixel,
and thus, Lua's performance (even though good for its language class,
especially if you use `LuaJIT <http://luajit.org/>`_) will not matter
much.
The simplest possible scene takes in only an input and sends it on
to the display (the output of the last added node is always sent to
the screen, and in this case, that would be the input)::

  local scene = Scene.new(16, 9)  -- Aspect ratio.
  local input = scene:add_input()
  input:display(0)  -- First input card. Can be changed whenever you want.
The live scene is always processed in full resolution (typically 720p)
and then scaled down for the GUI. Preview scenes are rendered in exactly
the resolution required, although of course, intermediate steps could be
larger.
Setting parameters, and the get_scene entry point
-------------------------------------------------
Many effects support parameters that can vary per-frame. Imagine,
for instance, a theme where you want to support two inputs and fading between
them. This means you will need a scene that takes two inputs and
produces a mix of them; Movit's *MixEffect* is exactly what you want here::

  local scene = Scene.new(16, 9)

  local input0 = scene:add_input()
  local input1 = scene:add_input()
  local mix_effect = scene:add_effect(MixEffect.new(), input0, input1)
Every frame, Nageru will call your **get_scene** function, which has
this signature::

  function get_scene(num, t, width, height, signals)
“width” and “height” are what you'd expect (the output resolution).
“t” contains the current stream time in seconds. “num” contains 0
for the live view, 1 for the preview view, and 2, 3, 4, … for each
of the individual stream previews. “signals” contains a bit of
information about each input signal, like its current resolution
and frame rate.
get_scene should then return a scene. However, before doing so,
you can set the parameters **strength_first** and **strength_second**
on it, for instance like this::
  function get_scene(num, t, width, height, signals)
    -- Assume num is 0 here; you will need to handle the other
    -- cases as well.
    local fade_progress = 0.0
    if t >= 1.0 and t <= 2.0 then  -- Between 1 and 2 seconds; do the fade.
      fade_progress = t - 1.0
    end

    mix_effect:set_float("strength_first", 1.0 - fade_progress)
    mix_effect:set_float("strength_second", fade_progress)
    return scene
  end
Note that in the case where fade_progress is 0.0 or 1.0 (you are just
showing one of the inputs), you are wasting GPU power by using the
fade scene; you should just return a simpler one-input scene instead.
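One way to keep this check tidy is a small clamping helper; the helper below is our own sketch, not part of Nageru's API::

  -- Hypothetical helper: clamp fade progress to [0, 1], so the caller
  -- can test for exactly 0.0 or 1.0 and return the cheaper one-input
  -- scene in those cases.
  function fade_progress_at(t, fade_start, fade_duration)
    local p = (t - fade_start) / fade_duration
    if p < 0.0 then p = 0.0 end
    if p > 1.0 then p = 1.0 end
    return p
  end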
The get_scene function is the backbone of every Nageru theme.
As we shall see, however, it may end up dealing with a fair bit
of complexity as the theme grows.
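In practice, a get_scene skeleton often starts by dispatching on “num”; in the sketch below, the scene and input variables are assumed to have been built at startup, and their names are our own, not Nageru API names::

  function get_scene(num, t, width, height, signals)
    if num == 0 then
      return live_scene      -- Whatever should currently go out live.
    elseif num == 1 then
      return preview_scene
    else
      -- Channels 2, 3, …: show the corresponding input directly.
      channel_input:display(num - 2)
      return channel_scene
    end
  end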
Scene variants and effect alternatives
--------------------------------------
Setting up and finalizing a scene is relatively fast, but it still
takes a measurable amount of CPU time, since it needs to create an OpenGL
shader and have it optimized by the driver; 50–100 ms is not uncommon.
Given that 60 fps means each frame is 16.7 ms, you cannot create new scenes in
get_scene; every scene you could be using must be created at program start,
when your theme is initialized.
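In other words, scene construction happens at the top level of the theme file, which runs once when the theme is loaded; get_scene() then only selects among prebuilt scenes. A sketch (the finalize() call is assumed here to be the step that compiles the scene, as in Nageru's bundled themes)::

  -- Run once, when the theme is loaded:
  local fade_scene = Scene.new(16, 9)
  local fade_input0 = fade_scene:add_input()
  local fade_input1 = fade_scene:add_input()
  local fade_mix = fade_scene:add_effect(MixEffect.new(), fade_input0, fade_input1)
  fade_scene:finalize()  -- Compile all variants up front.

  function get_scene(num, t, width, height, signals)
    -- Only select scenes and set parameters here; never build new ones.
    return fade_scene
  end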
For any nontrivial theme, there are a lot of possible scenes. Let's
return to the case of the MixEffect scene from the previous section.
Now let us assume that we need to deal with signals that come in at
1080p instead of the native 720p. In this case, we will want a high-quality
scaler before mixing; *ResampleEffect* provides one::
  local scene = Scene.new(16, 9)

  local input0 = scene:add_input()
  local input0_scaled = scene:add_optional_effect(ResampleEffect.new())  -- Implicitly uses input0.
  input0_scaled:set_int("width", 1280)  -- Could also be set in get_scene().
  input0_scaled:set_int("height", 720)
  input0_scaled:enable()  -- Enable or disable as needed.

  local input1 = scene:add_input()
  local input1_scaled = ...  -- Similarly here and the rest.

  -- The rest is unchanged.
Clearly, there are four options here; both inputs could be unscaled,
input0 could be scaled but not input1, input1 could be scaled but not input0,
or both could be scaled. That means four scenes. However, you don't need to
care about this; behind the scenes (no pun intended), Nageru will make all
four versions for you and choose the right one as you call enable() or
disable() on each effect.
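For instance, get_scene() could enable scaling only when a signal is not already 720p; this sketch assumes a signals:get_height() accessor, so check the signal API of your Nageru version before relying on it::

  function get_scene(num, t, width, height, signals)
    if signals:get_height(0) == 720 then
      input0_scaled:disable()  -- Pass the input through unscaled.
    else
      input0_scaled:enable()   -- Non-720p input; scale it down/up.
    end
    return scene
  end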
Beyond simple on/off switches, an effect can have many *alternatives*,
by passing in an array of effects. For instance, it is usually pointless to use
the high-quality resampling provided by ResampleEffect for the on-screen
outputs; we can use *ResizeEffect* (a simpler scaling algorithm provided
directly by the GPU) instead. The scaling is set up like this::
  local input0 = scene:add_input()
  local input0_scaled = scene:add_effect({ResampleEffect.new(), ResizeEffect.new()})  -- Implicitly uses input0.
  input0_scaled:set_int("width", 1280)  -- Just like before.
  input0_scaled:set_int("height", 720)

  -- Pick one in get_scene() like this:
  input0_scaled:choose(ResizeEffect)

  -- Or by numerical index:
  input0_scaled:choose(1)  -- Chooses ResizeEffect.
Note that add_effect returns its input for convenience.

Actually, add_optional_effect() is just a wrapper around add_effect() with
IdentityEffect as the other alternative, and disable() is a convenience version of
choose(IdentityEffect).
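A sketch of the equivalence described above; both constructions should build the same scene node::

  local scaled = scene:add_optional_effect(ResampleEffect.new())
  scaled:disable()

  -- …is the same as:
  local scaled = scene:add_effect({ResampleEffect.new(), IdentityEffect.new()})
  scaled:choose(IdentityEffect)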
In fact, more versions are created than you'd immediately expect.
In particular, the output formats for the live output and the previews are
different (Y'CbCr versus RGBA), which is also handled transparently for you.
Also, the inputs could be interlaced, or they could be images, or videos (see
:ref:`images` and :doc:`video`), creating many more options. Again, you
generally don't need to care about this; Movit will make sure each and every
one of those generated scenes runs optimally on your GPU. However, if the
combinatorial explosion increases startup time beyond what you are comfortable
with, see :ref:`locking`.
Transitions
-----------

As we have seen, the theme is king when it comes to determining what to show
on screen. However, ultimately, it wants to delegate that power
to the operator. The abstraction presented from the theme to the user
is in the form of **transitions**. Every frame, Nageru calls the
following Lua entry point::
  function get_transitions(t)
(t is again the stream time, but it is provided only for convenience;
not all themes would want to use it.) get_transitions must return an array of
(currently exactly) three strings, of which any can be blank. These three
strings are used as labels on one button each, and whenever the operator clicks
one of them, Nageru calls this function in the theme::
  function transition_clicked(num, t)

where “num” is 0, 1 or 2, and t is again the stream time.
It is expected that the theme will use this and its internal state
to provide the abstraction (or perhaps illusion) of transitions to
the user. For instance, a theme will know that the live stream is
currently showing input 0 and the preview stream is showing input 1.
In this case, it can use two of the buttons to offer “Cut” or “Fade”
transitions to the user. If the user clicks the cut button, the theme
can simply swap the live and preview inputs, which will take immediate
effect on the next frame. However, if the user clicks the fade button,
state will need to be set up so that the next time get_scene() runs,
it will return the scene with the MixEffect, until it determines
the transition is over and changes back to showing only one input
(presumably the new one).
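A minimal sketch of that state handling might look like this; the variables are our own bookkeeping, not part of Nageru's API, and a real theme would also use the state in get_scene()::

  local live_signal = 0
  local preview_signal = 1
  local fade_start = nil  -- Stream time when a fade began, or nil.

  function get_transitions(t)
    if fade_start ~= nil then
      return {"", "", ""}  -- Mid-fade; offer no new transitions.
    end
    return {"Cut", "", "Fade"}
  end

  function transition_clicked(num, t)
    if num == 0 then  -- Cut: swap live and preview immediately.
      live_signal, preview_signal = preview_signal, live_signal
    elseif num == 2 then  -- Fade: get_scene() should now return the
      fade_start = t      -- MixEffect scene until the fade is done.
    end
  end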
Channels
--------

In addition to the live and preview outputs, a theme can declare
as many individual **channels** as it wants. These are shown at the
bottom of the screen, and are intended for the operator to see
what they can put up on the preview (in a sense, a preview of the
preview).
The number of channels is determined by calling this function
once at the start of the program::

  function num_channels()

It should simply return the number of channels (0 is allowed,
but doesn't make a lot of sense). Live and preview come in addition to this.
Each channel will have a label on it; Nageru asks the theme
by calling this function::

  function channel_name(channel)
Here, channel is 2, 3, 4, etc.; 0 is always called “Live” and
1 is always called “Preview”.
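For a two-camera theme, these two entry points might look like this (the channel names are just examples)::

  function num_channels()
    return 2  -- Two direct camera channels, besides live and preview.
  end

  function channel_name(channel)
    if channel == 2 then
      return "Camera 1"
    else
      return "Camera 2"
    end
  end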
Each channel has its own scene, starting from number 2 for the first one
(since 0 is live and 1 is preview). The simplest form is simply a direct copy
of an input, and most themes will include one such channel for each input.
(Below, we will see that there are more types of channels, however.)
Since the mapping between the channel UI element and inputs is so typical,
Nageru allows the theme to simply declare that a channel corresponds to
a given signal, by asking it::
  function channel_signal(channel)
    if channel == 2 then
      return 0
    elseif channel == 3 then
      return 1
    else
      return -1
    end
  end

Here, channels 2 and 3 (the two first ones) correspond directly to inputs
0 and 1, respectively. The others don't, and return -1. The effect on the
UI is that the user can right-click on the channel and configure the input
that way; in fact, this is currently the only way to configure them.
Furthermore, channels can have a color::

  function channel_color(channel)
The theme should return a CSS color (e.g. “#ff0000”, or “cyan”) for each
channel when asked; it can vary from frame to frame. A typical use is to mark
the currently playing input as red, or the preview as green.
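A sketch of that pattern, assuming the theme keeps track of which channels are live and on preview; live_channel and preview_channel are hypothetical theme state (updated elsewhere, e.g. in transition_clicked()), not Nageru API::

  local live_channel = 2
  local preview_channel = 3

  function channel_color(channel)
    if channel == live_channel then
      return "#ff0000"  -- Red: currently live.
    elseif channel == preview_channel then
      return "#00ff00"  -- Green: on preview.
    else
      return "gray"
    end
  end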
And finally, there are two entry points related to white balance::

  function supports_set_wb(channel)
  function set_wb(channel, red, green, blue)
If the first function returns true (called once, at the start of the program),
the channel will get a “Set WB” button next to it, which will activate a color
picker. When the user picks a color (ostensibly with a gray point), the second
function will be called (with the RGB values in linear light, not sRGB!),
and the theme can then use it to adjust the white balance for that channel.
The typical way to do this is to have a *WhiteBalanceEffect* on each input
and set its “neutral_color” parameter using the “set_vec3” function.
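A sketch of that pattern; the wb_effects table is our own bookkeeping, assumed to be filled in with one WhiteBalanceEffect handle per channel while the scenes are built::

  local wb_effects = {}  -- wb_effects[channel] = that channel's WhiteBalanceEffect.

  function supports_set_wb(channel)
    return channel >= 2  -- Only the direct input channels.
  end

  function set_wb(channel, red, green, blue)
    wb_effects[channel]:set_vec3("neutral_color", red, green, blue)
  end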
More complicated channels: Composites
-------------------------------------
Direct inputs are not the only kind of channels possible; again, any scene
can be output. The most common case is different kinds of **composites**,
typically showing side-by-side or something similar. The typical UI presented
to the user in this case is that you create a channel that consists of the
finished setup; you use ResampleEffect (or ResizeEffect for preview scenes),
*PaddingEffect* (to place the rectangles on the screen, one of them with a
transparent border) and then *OverlayEffect* (to get both on the screen at
the same time). Optionally, you can have a background image at the bottom,
and perhaps a logo at the top. This allows the operator to select a pre-made
composite, and then transition to and from it from a single camera view (or even
between different composites) as needed.
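A rough outline of such a scene, overlaying one scaled, padded camera on a background; the parameter names follow Movit's ResizeEffect and PaddingEffect, but treat the sizes, offsets, and overall structure as a sketch to adapt, not a finished theme::

  local scene = Scene.new(16, 9)
  local bg = scene:add_input()      -- E.g. an ImageInput background.
  local camera = scene:add_input()

  local sized = scene:add_effect(ResizeEffect.new())  -- Implicitly uses camera.
  sized:set_int("width", 848)
  sized:set_int("height", 477)

  local padded = scene:add_effect(PaddingEffect.new())
  padded:set_int("width", 1280)    -- Pad up to the full frame…
  padded:set_int("height", 720)
  padded:set_float("left", 30.0)   -- …with the camera at this offset.
  padded:set_float("top", 120.0)

  scene:add_effect(OverlayEffect.new(), bg, padded)  -- Background below, camera on top.
  scene:finalize()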
Transitions involving composites tend to be the most complicated parts of the theme
logic, but also make for the most distinct parts of your visual look.
.. _images:

Image inputs
------------

In addition to video inputs, Nageru supports static **image inputs**.
These work pretty much the same way as live video inputs. Recall that
you chose what input to display like this::

  input:display(0)

Image inputs are instead created by instantiating *ImageInput* and
displaying that::

  bg = ImageInput.new("bg.jpeg")  -- Once, at the start of the program.
  input:display(bg)  -- In get_scene().
All image types supported by FFmpeg are supported; if you give it a video,
only the first frame is used. The file is checked once every second,
so if you update the file on-disk, it will be available in Nageru without
a restart. (If the file contains an error, the update will be ignored.)
This allows you to e.g. have simple message overlays that you can change
without restarting Nageru.
Theme menus
-----------

Complicated themes, especially those dealing with :doc:`HTML inputs <html>`,
may need user controls that go beyond transition buttons.
(An obvious example may be “reload the HTML file”.) For this reason,
themes can also set simple *theme menus*, which are always visible
no matter what inputs are chosen.
If a theme chooses to set a theme menu, it will be available on the
main menu bar under “Theme”; if not, it will be hidden. You can set
the menu at startup or at any other point, using a simple series of
labels and function references::
  function modify_aspect()
    -- Your code goes here.
  end

  function reload_html()
    -- Your code goes here.
  end

  ThemeMenu.set(
    { "Change &aspect", modify_aspect },
    { "&Reload overlay", reload_html }
  )
When the user chooses a menu entry, the given Lua function will
automatically be called. There are no arguments nor return values.
There currently is no support for checkboxes, submenus, input boxes
or the likes. However, do note that since the theme is written in unrestricted
Lua, you can use e.g. `lua-http <https://github.com/daurnimator/lua-http>`_
to listen for external connections and accept more complicated inputs
that way.