From fbe04462dcf0578932141d66faf3623f86cfded4 Mon Sep 17 00:00:00 2001
From: "Steinar H. Gunderson"
Date: Thu, 7 Sep 2023 18:46:57 +0200
Subject: [PATCH] Drop documentation of very old versions.

Debian oldstable has 2.0.1, so don't bother documenting anything older
than that (once the documentation has caught up with that; for the time
being, we are at 1.9.3).
---
 analyzer.rst   |  2 +-
 audio.rst      |  5 ++---
 futatabi.rst   | 13 +++++--------
 hardware.rst   |  2 +-
 hdmisdi.rst    |  2 +-
 html.rst       |  2 +-
 index.rst      |  3 +++
 monitoring.rst |  2 +-
 streaming.rst  |  7 +++----
 theme.rst      | 35 +++++++----------------------------
 v4l2.rst       |  2 +-
 video.rst      | 17 +++--------------
 12 files changed, 29 insertions(+), 63 deletions(-)

diff --git a/analyzer.rst b/analyzer.rst
index f1adf5f..5496eff 100644
--- a/analyzer.rst
+++ b/analyzer.rst
@@ -9,7 +9,7 @@ or that one end or the other is doing something wrong in the HDMI handshake.)
 This means that it can be useful to verify that there are no subtle color
 shifts, and that the entire 0–255 brightness range is both being input and output.
-The **frame analyzer**, new in Nageru 1.6.0 and available from the video menu,
+The **frame analyzer**, available from the video menu,
 can help with this. It allows you to look at any input, grab a frame (manually
 or periodically), and then hover over specific pixels to look at their RGB
 values. When you're done, simply close it, and it will stop grabbing frames.

diff --git a/audio.rst b/audio.rst
index 61bcf2d..24c71ba 100644
--- a/audio.rst
+++ b/audio.rst
@@ -23,8 +23,7 @@ use, reusing such a mix isn't the worst choice you can make.

 Simple mode
 -----------

-**Simple** audio mode is the default, and was the only mode available
-up until Nageru 1.4.0. Despite its name, it contains a powerful
+**Simple** audio mode is the default. Despite its name, it contains a powerful
 audio processing chain; however, in many cases, you won't need to understand
 or twiddle any of the knobs available.
@@ -329,7 +328,7 @@ and if you have a reasonable ear, you can use the EQ to your advantage to make
 them sound a little more even on the stream. Either that, or just put it in
 neutral, and the entire EQ code will be bypassed.
-Finally (or, well, first), since 1.7.3, there's the **stereo width** knob.
+Finally (or, well, first), there's the **stereo width** knob.
 At the default, 100%, it makes no change to the signal, but if you turn it to
 0% (at the middle), the signal becomes perfect mono. Between these two,
 there's a range where the channels leak partially over into each other.

diff --git a/futatabi.rst b/futatabi.rst
index c2a7665..a0aa06a 100644
--- a/futatabi.rst
+++ b/futatabi.rst
@@ -25,8 +25,7 @@ by Kroeger et al, together with about half of the algorithm from
 optical flow), although this may change in the future.

 Since Futatabi is part of the Nageru source distribution, its version number
-mirrors Nageru. Thus, the first version of Futatabi is version 1.8.0,
-which is the Nageru version when it was first introduced.
+mirrors Nageru.


 System requirements
@@ -192,8 +191,7 @@ for later if they can't show it right now (e.g. a foul situation that wasn't cal
 Audio support
 '''''''''''''
-Since version 1.8.5, Futatabi has limited audio support. It is recorded
-(assuming Nageru is also at version 1.8.5 or newer) and saved for all inputs,
+Futatabi has limited audio support. It is recorded and saved for all inputs,
 and played back when showing a replay, but only when the replay speed is at
 100%, or very close to it. (At other speeds, you will get silence.)

 Furthermore, there is no local audio output; the Futatabi operator will not

@@ -204,10 +202,10 @@ hear any audio, unless they use a video player into the Futatabi stream locally

 White balance
 '''''''''''''
-Since version 1.9.2, Futatabi will integrate with Nageru for white balance;
+Futatabi integrates with Nageru for white balance;
 the white balance set in Nageru will be recorded, and properly applied on
 playback (including fades). Note that this assumes you are using the built-in
-white balance adjustment (new in 1.9.2), not adding WhiteBalanceEffect manually
+white balance adjustment, not adding WhiteBalanceEffect manually
 to the scene; see :ref:`white-balance` for an example.


@@ -242,8 +240,7 @@ command line using *--cue-in-point-padding* and *--cue-out-point-padding*, or se
 cue in point padding to e.g. two seconds, the cue-in point will
 automatically be set two seconds ago when you cue-in, and similarly,
 if you set cue out point padding, the cue-out point will be set two seconds
-into the future when you cue-out. (Cue-in and cue-out point padding were one
-joint setting before Futatabi 1.8.3.)
+into the future when you cue-out.


 Instant clips

diff --git a/hardware.rst b/hardware.rst
index 5171bae..a65ff8e 100644
--- a/hardware.rst
+++ b/hardware.rst
@@ -39,7 +39,7 @@ output as a sort of “digital intermediate” that can much easier be stored to
 (for future editing or re-streaming) or sent to an encoder on another machine
 for final streaming.

-Although you can (since Nageru 1.5.0) use software encoding through x264 for
+Although you can use software encoding through x264 for
 the digital intermediate, it is generally preferred to use a hardware encoder
 if it is available.

 Currently, VA-API is the only hardware encoding method supported for
 encoding the digital intermediate, although Nageru might support

diff --git a/hdmisdi.rst b/hdmisdi.rst
index f6f032c..91f6c9f 100644
--- a/hdmisdi.rst
+++ b/hdmisdi.rst
@@ -8,7 +8,7 @@ the stream on another PC, but for many uses, the end-to-end latency
 is too high, and you might not want to involve a full extra PC just
 for this anyway.

-Thus, since 1.5.0, Nageru supports using a spare output card for HDMI/SDI
+Thus, Nageru supports using a spare output card for HDMI/SDI
 output, turning it into a simple, reasonably low-latency audio/video
 switcher.

diff --git a/html.rst b/html.rst
index 9a62a9b..92eecdd 100644
--- a/html.rst
+++ b/html.rst
@@ -7,7 +7,7 @@ in full HTML5 (including JavaScript) by way of `Chromium Embedded Framework
 `_ (CEF for short), a library that basically embeds
 a full copy of the Chromium web browser into Nageru.

-HTML inputs are available from Nageru 1.7.0 onwards, but note that they are an
+Note that HTML inputs are an
 optional component, since many distributions don't carry CEF. Thus, your copy
 of Nageru may not support them. See :ref:`compiling` for information on how
 to build Nageru with CEF.

diff --git a/index.rst b/index.rst
index d543850..c9fa25b 100644
--- a/index.rst
+++ b/index.rst
@@ -5,6 +5,9 @@ Nageru is a live video mixer, based around the standard M/E workflow.
 This online documentation aims to give a comprehensive introduction to all of
 Nageru's features.

+This documentation assumes at least Nageru 1.9.3; if any feature
+described requires a newer version, that will be explicitly noted.
+
 Contents:

 .. toctree::

diff --git a/monitoring.rst b/monitoring.rst
index d5b490c..4e9a6fb 100644
--- a/monitoring.rst
+++ b/monitoring.rst
@@ -3,7 +3,7 @@ Monitoring
 If you have a large Nageru installation, you may want to monitor it remotely,
 as opposed to just eyeballing the UI and occasionally checking into the stream
-to verify everything is OK. As of version 1.6.1, Nageru supports native
+to verify everything is OK. Nageru supports native
 `Prometheus `_ metrics.

 This section is not intended to explain Prometheus; for that, see the
 Prometheus documentation.

diff --git a/streaming.rst b/streaming.rst
index da42712..d34600a 100644
--- a/streaming.rst
+++ b/streaming.rst
@@ -17,8 +17,7 @@ approaches for streaming: **Transcoded** or **direct**.

 Transcoded streaming
 --------------------
-Transcoded streaming was the only option supported before 1.3.0,
-and in many ways the conceptually simplest from Nageru's point of
+Transcoded streaming is in many ways conceptually the simplest from Nageru's point of
 view. In this mode, Nageru outputs its “digital intermediate” H.264
 stream (see :ref:`digital-intermediate`), and you are responsible
 for transcoding it into a format that is suitable
@@ -142,7 +141,7 @@ Transcoding with Kaeru
 ----------------------

 There is a third option that combines elements from the two previous
-approaches: Since version 1.6.1, Nageru includes **Kaeru**, named after the
+approaches: Nageru includes **Kaeru**, named after the
 Japanese verb *kaeru* (換える), meaning roughly to replace or exchange.
 Kaeru is a command-line tool that is designed to transcode Nageru's streams.
 In that aspect, it is similar to using VLC as described in the section on
@@ -211,7 +210,7 @@ earlier, just adding “metacube” to the HTTP options::

 Single-camera stream
 --------------------
-In addition to the regular mixed stream, you can (since Nageru 1.9.2)
+In addition to the regular mixed stream, you can
 siphon out MJPEG streams consisting of a single camera only.
 This is useful either for running a very cheap secondary stream (say, a
 static overview camera that you would like to show on a separate screen
 somewhere), or for simple

diff --git a/theme.rst b/theme.rst
index 645584c..8dbaf80 100644
--- a/theme.rst
+++ b/theme.rst
@@ -1,14 +1,6 @@
 The theme
 =========
-**NOTE**: Nageru 1.9.0 made significant improvements to themes
-and how scenes work. If you use an older version, you may want
-to look at `the 1.8.6 documentation `_;
-themes written for older versions still work without modification in
-1.9.0, but are not documented here, and you are advised to change
-to use the new interfaces, as they are equally powerful and much simpler
-to work with.
-
 In Nageru, most of the business logic around how your stream ends up
 looking is governed by the **theme**, much like how a theme works on
 a blog or a CMS. Most importantly, the theme
@@ -317,8 +309,7 @@ is::
 If the first function is called with a true value (at the start of the theme),
 the channel will get a “Set WB” button next to it, which will activate a color
 picker, to select the gray point. To actually *apply* this white balance change,
-you have two options. If you're using Nageru 1.9.2 or newer, it's as simple
-as adding one element to the scene::
+add the white balance element to the scene::

   scene:add_white_balance()

@@ -327,17 +318,6 @@ connected to, and fetch its gray point if needed. (If it is connected
 to e.g. a mix of several inputs, such as a camera and an overlay, you will
 need to give the input to fetch white balance from as as a parameter.)

-If, on the other hand, you are using Nageru 1.9.1 or older (or just wish
-for more manual control), there's an entry point you will need to implement::
-
-  function set_wb(channel, red, green, blue)
-
-When the user picks a gray point, this function
-function will be called (with the RGB values in linear light—not sRGB!),
-and the theme can then use it to adjust the white balance for that channel.
-The typical way to to this is to have a *WhiteBalanceEffect* on each input
-and set its “neutral_color” parameter using the “set_vec3” function.
-

 More complicated channels: Composites
 -------------------------------------
@@ -421,7 +401,7 @@ crop is. If so, you can do this::

   resample_effect:always_disable_if_disabled(crop_effect)

-Also, since Nageru 1.9.1, you can disable an optional effect if a given other
+Also, you can disable an optional effect if a given other
 effect is *enabled*::

   overlay1_effect:promise_to_disable_if_enabled(overlay2_effect)

@@ -433,10 +413,9 @@ two to disable. (If you violate the promise, you will get an error message
 at runtime.) It can still be useful for reducing the number of alternatives,
 though.

 For more advanced exclusions, you may choose to split up the scenes into several
-distinct ones that you manage yourself; indeed, before Nageru 1.9.0, that was
-the only option. At some point, however, you may choose to simply accept the
-added startup time and a bit of extra RAM cost; ease of use and flexibility often
-trumps such concerns.
+distinct ones that you manage yourself. At some point, however, you may choose
+to simply accept the added startup time and a bit of extra RAM cost; ease of
+use and flexibility often trump such concerns.

 .. _menus:

@@ -537,7 +516,7 @@ Audio control
 Before you attempt to control audio from the theme, be sure to have read the
 documentation about :doc:`audio`.

-Since Nageru 1.9.2, the theme has a certain amount of control over the audio
+The theme has a certain amount of control over the audio
 mix, assuming that you are in multichannel mode. This is useful in particular
 to be able to set defaults, if e.g. one channel should always be muted at
 startup, or to switch in/out certain channels depending on whether they are
@@ -584,7 +563,7 @@ Overriding the status line
 --------------------------

 Some users may wish to override the status line, e.g. with recording time.
-If so, it is possible (since Nageru 1.9.1) to declare a function **format_status_line**::
+If so, it is possible to declare a function **format_status_line**::

   function format_status_line(disk_space_text, file_length_seconds)
     if file_length_seconds > 86400.0 then

diff --git a/v4l2.rst b/v4l2.rst
index 8adde62..8a10add 100644
--- a/v4l2.rst
+++ b/v4l2.rst
@@ -1,7 +1,7 @@
 V4L2 output
 ===========

-Since Nageru 1.9.3, Nageru has had (in addition to the regular
+Nageru has (in addition to the regular
 :doc:`hdmisdi`) support for output to Linux' Video4Linux API
 (also known as V4L2, or just V4L). This is intended for one
 specific purpose, namely using Nageru as a camera/mixer for

diff --git a/video.rst b/video.rst
index ee2c1b5..5af1ef6 100644
--- a/video.rst
+++ b/video.rst
@@ -8,8 +8,7 @@ they also carry video). The most obvious example would be a video file on disk
 flexible and can be used also for other things.

 Before reading trying to use video inputs, you should read and understand how
-themes work in general (see :doc:`theme`). Video inputs are available from
-Nageru 1.6.0 onwards, and from Nageru 1.7.2, you can get audio from video inputs
+themes work in general (see :doc:`theme`). You can get audio from video inputs
 like any other input. (Be advised, though, that making a general video player
 that can maintain A/V sync on all kinds of video files is a hard problem,
 so there may still be bugs in this support.)
@@ -49,8 +48,6 @@ It can then be display on an input like usual::

   input:display(video)

 Note that interlaced video is currently not supported, not even with deinterlacing.
-(Before Nageru 1.9.0, video inputs were distinct from live inputs, and had to
-be created differently, but this is no longer the case.)
 Videos run in the correct frame rate and on their own timer (generally the
 system clock in the computer), and loop when they get to the end or whenever an
@@ -94,7 +91,7 @@ make the audio behave unpredictably; there are no attempts to do time
 stretching or change the pitch accordingly.

 Finally, if you want to forcibly abort the playing of a video,
-even one that is blocking on I/O, you can use (since Nageru 1.7.2)::
+even one that is blocking on I/O, you can use::

   video:disconnect()

@@ -107,8 +104,7 @@ for a way to expose such functionality to the operator.

 Ingesting subtitles
 -------------------
-Video streams can contain separate subtitle tracks. Since Nageru 1.8.1,
-you can read these streams. This is particularly useful when using Nageru
+Video streams can contain separate subtitle tracks. This is particularly useful when using Nageru
 and Futatabi together (see :ref:`talkback`).

 To get the last subtitle given before the current video frame, call
@@ -118,10 +114,3 @@ a subtitle, or else the raw subtitle. Note that if the video frame and
 the subtitle occur on the exact same timestamp, and the video frame is muxed
 before the subtitle packet, the subtitle will not make it in time.
 (Futatabi puts the subtitle slightly ahead of the video frame to avoid this.)
-
-
-Integration with CasparCG
--------------------------
-
-This section has been removed, as since 1.7.0, Nageru has :doc:`native support for HTML5 inputs `,
-obsoleting CasparCG integration.
-- 
2.39.2