X-Git-Url: https://git.sesse.net/?a=blobdiff_plain;f=futatabi.rst;h=76ee84149f64453784dd06528fbdbd8f38f64989;hb=1eec4ef47957c81d445c4b104e56ba2d8917d267;hp=a291c89a8684b3500bc79eb5a25b46bbe49e6a23;hpb=cfd8d011eac9326dfd77d13170e628c087b35a1e;p=nageru-docs

diff --git a/futatabi.rst b/futatabi.rst
index a291c89..76ee841 100644
--- a/futatabi.rst
+++ b/futatabi.rst
@@ -51,6 +51,113 @@ you also build Futatabi.
 Getting started
 ---------------
 
+Futatabi always pulls data from Nageru over the network; it doesn't support SDI
+input or output. Assuming you have a recent version of Nageru (typically one
+that comes with Futatabi), it is capable of sending all of its cameras as one
+video stream (see :ref:`futatabiformat`), so you can start Futatabi with
+
+  ./futatabi http://path.to.nageru.host/multicam.mp4
+
+If you do not have a running Nageru installation, see :ref:`sampledata`.
+
+Once you are up and connected, Futatabi will start recording all of the frames
+to disk. This process happens continuously for as long as you have disk space,
+even if you are playing something back or editing clips, and the streams are
+displayed in real time in mini-displays as they come in. You make replays in
+the form of *clips* (the top list is the clip list), which are then queued into
+the *playlist* (the bottom list). Your end result is a live HTTP stream that
+can be fed back into Nageru as a video input; by default, Futatabi listens
+on port 9096.
+
+Futatabi has the concept of a *workspace*, which defaults to your current
+directory (you can change it with the -d option). This holds all of the
+recorded frames, all metadata about clips, and preferences set from the menus
+(interpolation quality and cue point padding). If you quit Futatabi and
+restart it (or it goes down as the result of a power failure or the like),
+it will remember all the state and frames for you.
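+
+For instance, a hypothetical invocation that keeps the workspace on a
+dedicated scratch disk (the directory and hostname here are examples, not
+defaults) would be
+
+  ./futatabi -d /mnt/replay-scratch http://path.to.nageru.host/multicam.mp4
+
+after which the replay stream can be added as a video input in Nageru,
+pointing at port 9096 on the machine running Futatabi.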
+
+Basic UI operations
+'''''''''''''''''''
+
+Futatabi can be operated using either the keyboard and mouse, or using a MIDI
+controller with some help from the mouse. In this section, we will be discussing
+the keyboard and mouse only; see :ref:`midi` for details on using MIDI
+controllers.
+
+A clip in the clip list consists simply of an in and an out point; it represents
+an interval of physical time (although timed to the video clock). A clip in the
+playlist contains the same information plus some playback parameters, in particular
+which camera (stream) to play back.
+
+Clicking the “Cue in” button, or pressing the A key, will start a new clip in
+the clip list that begins at the current time. Similarly, clicking the “Cue
+out” button, or pressing the S key, will set the end point for said clip.
+(If you click a button twice, it will overwrite the previous result.)
+Now the clip is in a state where you can queue it to the playlist (mark the
+camera you want to use and click the “Queue” button, or Q on the keyboard) for
+playing, or you can preview it using the “Preview” button (W on the
+keyboard). Previewing can be done either from the clip list or the playlist;
+previews will not be interpolated or faded, but will be played back with the
+right camera.
+
+You can edit cue points, both in the clip list and the playlist, in two ways:
+either use the scroll wheel on the mouse, or hold down the left mouse button
+and scrub left or right. (You can hold down Shift to change ten times as fast,
+or Alt to change at one-tenth the speed.) You'll see the new cue points as you
+change them in the preview window. You can also give clips names; these don't
+mean anything, but can be good references to locate a specific kind of clip
+for later use.
+Once you're happy with a playlist, and your producer is ready
+to cut to your channel, click on the first clip you want to play back and
+click the “Play” button (space on the keyboard); the result will both be
+visible on the top screen and go out live over the network to Nageru.
+
+On top of these basics, there are many possible workflows; we'll discuss only
+two. Try out a few and see which ones fit your style and type of event.
+
+Repeated cue-in
+'''''''''''''''
+
+In many sports, you don't necessarily know that a replay-worthy event is
+happening until it has already happened. However, you can often tell when
+something is *not* happening, and when it would be a good time to start a clip
+in case something happens immediately afterwards. At these points, you can
+make repeated cue-ins, i.e., start a clip without finishing it. As long as you
+keep making cue-ins, each new one discards the previous one. Once you see that
+something *is* happening, you can wait until it's done, and then do a cue-out,
+which gives you a good clip immediately.
+
+For instance, in a basketball game, you could be seeing a series of uninteresting
+passes, clicking cue-in on each of them. However, once it looks like there's an
+opportunity for a score, you can hold off and see what happens; if the shot
+comes, you can click cue-out, and if not, you can go back to cueing in.
+
+Before playing the clip, you can make adjustments to the in and out points
+as detailed above. This will help you trim away any uninteresting lead-ups,
+or add more margin for fades. If you consistently find that you have too
+little margin, you can use the *cue point padding* feature (either from the
+command line using *--cue-point-padding*, or set from the menu). If you set
+cue point padding to e.g. two seconds, the cue-in point will automatically be
+set two seconds back in time when you cue in, and the cue-out point will be
+set two seconds into the future when you cue out.
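+
+For instance, a hypothetical command line that gives every clip two seconds of
+padding on each end (the URL, and the exact form of the flag's argument, are
+assumptions here) would be
+
+  ./futatabi --cue-point-padding 2 http://path.to.nageru.host/multicam.mp4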
+
+
+Instant clips
+'''''''''''''
+
+Just as the previous section explained how you would generally know the *start*
+of an interesting event (at least if you discard most of the candidates),
+you can be even more sure about the *end* of one. Thus, you can simply wait
+until something interesting has happened, and then click cue-in immediately
+followed by cue-out. This will give you a clip of near zero length, ending
+at the right point. Then, edit this clip to set the starting point as needed,
+and it's ready to play.
+
+Again, you can use the cue point padding feature to your advantage; if so,
+your clips will not be of zero length, but rather of some predefined length
+given by your chosen cue point padding.
+
+
+.. _sampledata:
+
 Sample multicamera data
 '''''''''''''''''''''''
 
@@ -81,9 +188,6 @@ in tripod, two fixed endzone overhead cameras) with differing quality
 depending on the camera operators. In short, they should be realistic
 input material to practice with.
 
-Please download these files only once, instead of streaming them directly over
-HTTP each time you want to test.
-
 Transferring data to and from Nageru
 ------------------------------------
 
@@ -93,9 +197,61 @@ Transferring data to and from Nageru
 Video format specification
 ''''''''''''''''''''''''''
 
+Futatabi expects to get data in MJPEG format only; though MJPEG is old,
+it yields fairly good quality per bit for an intraframe format, supports
+4:2:2 without too many issues, and has hardware support through VA-API
+for both decode (since Ivy Bridge) and encode (since Skylake). The latter
+is especially important for Futatabi, since there are so many high-resolution
+streams; software encode/decode of several 1080p60 streams at the same time
+is fairly taxing on the CPU. This means we can easily send 4:2:2 camera
+streams back and forth between Nageru and Futatabi without having to scale
+or do other lossy processing (except of course the compression itself).
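+
+If you want a quick look at what such a multicam export contains, you can
+inspect it with a standard tool such as ffprobe (the hostname is just an
+example); you should see one MJPEG video stream per camera:
+
+  ffprobe -hide_banner http://path.to.nageru.host/multicam.mp4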
+
+However, JPEG as such does not have any way of specifying things like color
+spaces and chroma placement. JFIF, the *de facto* JPEG standard container,
+specifies conventions that are widely followed, but they do not match what
+comes out of a capture card. Nageru's multicam export *does* set the appropriate
+fields in the output Matroska mux (which is pretty much the only mux that can
+hold such information), but there are few if any programs that read them and give
+them priority over JFIF's defaults. Thus, if you want to use the multicam stream
+for something other than Futatabi, or feed Futatabi with data not from Nageru,
+there are a few subtle issues to keep in mind.
+
+In particular:
+
+ * Capture cards typically send limited-range Y'CbCr (luma between 16..235
+   and chroma between 16..240); JFIF is traditionally full-range (0..255
+   for both). (See also :ref:`synthetictests`.) Note that there is a special
+   private JPEG comment added to signal this, which FFmpeg understands.
+ * JFIF, like MPEG, assumes center chroma placement; capture cards and most
+   modern video standards assume left.
+ * JFIF assumes Rec. 601 Y'CbCr coefficients, while all modern HD processing
+   uses Rec. 709 Y'CbCr coefficients. (Futatabi does not care much about
+   the actual RGB color space; Nageru assumes it is Rec. 709, like for capture
+   cards, but the differences between 601 and 709 here are small. sRGB gamma
+   is assumed throughout, like in JFIF.)
+
+Many players may also be confused by the fact that the resolution can change
+from frame to frame; this is because for original (uninterpolated) frames,
+Futatabi will simply output the received JPEG frame directly to the output
+stream, which can be a different resolution from the interpolated frames.
+
+Finally, the subtitle track with status information (see :ref:`talkback`) is
+not marked as metadata due to FFmpeg limitations, and as such will show up
+raw in subtitle-enabled players.
+
+
+.. _midi:
+
+Using MIDI controllers
+----------------------
+
+
 Monitoring
 ----------
 
+.. _talkback:
+
 Tally and status talkback
 '''''''''''''''''''''''''