From 9a3f9170dcdd59b37480d0dd81f864ea96109839 Mon Sep 17 00:00:00 2001
From: "Steinar H. Gunderson"
Date: Mon, 31 Dec 2018 00:59:40 +0100
Subject: [PATCH] Write a bit about the Futatabi video format specification.

---
 analyzer.rst |  2 ++
 futatabi.rst | 46 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 48 insertions(+)

diff --git a/analyzer.rst b/analyzer.rst
index 09f6240..f1adf5f 100644
--- a/analyzer.rst
+++ b/analyzer.rst
@@ -14,6 +14,8 @@ can help with this. It allows you to look at any input, grab a frame (manually
 or periodically), and then hover over specific pixels to look at their RGB
 values. When you're done, simply close it, and it will stop grabbing frames.
 
+.. _synthetictests:
+
 Synthetic tests and common problems
 -----------------------------------
 
diff --git a/futatabi.rst b/futatabi.rst
index a291c89..f1fff5a 100644
--- a/futatabi.rst
+++ b/futatabi.rst
@@ -93,9 +93,55 @@ Transferring data to and from Nageru
 Video format specification
 ''''''''''''''''''''''''''
 
+Futatabi expects to receive data in MJPEG format only; though MJPEG is old,
+it yields fairly good quality per bit for an intraframe format, supports
+4:2:2 without too many issues, and has hardware support through VA-API
+for both decode (since Ivy Bridge) and encode (since Skylake). The latter
+is especially important for Futatabi, since there are so many high-resolution
+streams; encoding or decoding several 1080p60 streams at the same time in
+software is fairly taxing on the CPU. This means we can easily send 4:2:2
+camera streams back and forth between Nageru and Futatabi without having to
+scale or do other lossy processing (except of course the compression itself).
+
+However, JPEG as such does not have any way of specifying things like color
+spaces and chroma placement. JFIF, the *de facto* JPEG standard container,
+specifies conventions that are widely followed, but they do not match what
+comes out of a capture card. Nageru's multicam export *does* set the
+appropriate fields in the output Matroska mux (which is pretty much the only
+mux that can hold such information), but few if any programs read them and
+give them priority over JFIF's defaults. Thus, if you want to use the
+multicam stream for something other than Futatabi, or feed Futatabi with data
+not from Nageru, there are a few subtle issues to keep in mind; a short
+sketch further below illustrates how the range and coefficient differences
+play out numerically.
+
+In particular:
+
+ * Capture cards typically send limited-range Y'CbCr (luma between 16..235
+   and chroma between 16..240); JFIF is traditionally full-range (0..255
+   for both). (See also :ref:`synthetictests`.) Note that a special private
+   JPEG comment is added to signal this, which FFmpeg understands.
+ * JFIF, like MPEG, assumes center chroma placement; capture cards and most
+   modern video standards assume left placement.
+ * JFIF assumes Rec. 601 Y'CbCr coefficients, while all modern HD processing
+   uses Rec. 709 Y'CbCr coefficients. (Futatabi does not care much about
+   the actual RGB color space; Nageru assumes it is Rec. 709, as for capture
+   cards, but the differences between 601 and 709 here are small. sRGB gamma
+   is assumed throughout, as in JFIF.)
+
+Many players may also be confused by the fact that the resolution can change
+from frame to frame; this is because for original (uninterpolated) frames,
+Futatabi will simply output the received JPEG frame directly to the output
+stream, and it may have a different resolution than the interpolated frames.
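+
+To make the range and coefficient issues listed above more concrete, here is
+a short, purely illustrative sketch (it is not code from Nageru or Futatabi,
+and it ignores chroma placement); it decodes the same Y'CbCr pixel once under
+the capture-card assumptions (limited-range Rec. 709) and once under JFIF's
+defaults (full-range Rec. 601):
+
+.. code-block:: cpp
+
+   // Illustrative only: decode one Y'CbCr pixel to R'G'B' under two
+   // different sets of assumptions, to show why the metadata matters.
+   #include <algorithm>
+   #include <cstdio>
+
+   struct RGB { double r, g, b; };
+
+   // Generic Y'CbCr -> R'G'B' for given luma coefficients (kr, kb) and range.
+   RGB ycbcr_to_rgb(double y, double cb, double cr,
+                    double kr, double kb, bool limited_range)
+   {
+           // Normalize to y in [0,1] and cb/cr in [-0.5,0.5].
+           if (limited_range) {
+                   y = (y - 16.0) / 219.0;
+                   cb = (cb - 128.0) / 224.0;
+                   cr = (cr - 128.0) / 224.0;
+           } else {
+                   y /= 255.0;
+                   cb = (cb - 128.0) / 255.0;
+                   cr = (cr - 128.0) / 255.0;
+           }
+           double kg = 1.0 - kr - kb;
+           RGB rgb;
+           rgb.r = y + 2.0 * (1.0 - kr) * cr;
+           rgb.b = y + 2.0 * (1.0 - kb) * cb;
+           rgb.g = (y - kr * rgb.r - kb * rgb.b) / kg;
+           rgb.r = std::clamp(rgb.r, 0.0, 1.0);
+           rgb.g = std::clamp(rgb.g, 0.0, 1.0);
+           rgb.b = std::clamp(rgb.b, 0.0, 1.0);
+           return rgb;
+   }
+
+   int main()
+   {
+           // The same bytes from a capture card, decoded correctly
+           // (limited-range Rec. 709) and with JFIF's defaults
+           // (full-range Rec. 601).
+           RGB correct = ycbcr_to_rgb(126.0, 140.0, 120.0, 0.2126, 0.0722, true);
+           RGB jfif = ycbcr_to_rgb(126.0, 140.0, 120.0, 0.299, 0.114, false);
+           printf("Rec. 709, limited range: %.3f %.3f %.3f\n",
+                  correct.r, correct.g, correct.b);
+           printf("JFIF defaults:           %.3f %.3f %.3f\n",
+                  jfif.r, jfif.g, jfif.b);
+   }
+
+The exact numbers are not the point; what matters is that both the range
+expansion and the matrix coefficients shift the resulting colors, so a player
+that guesses wrong will show slightly washed-out or tinted video.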
+
+Finally, the subtitle track with status information (see :ref:`talkback`) is
+not marked as metadata due to FFmpeg limitations, and as such will show up
+raw in subtitle-enabled players.
+
+
 Monitoring
 ----------
 
+.. _talkback:
+
 Tally and status talkback
 '''''''''''''''''''''''''
 
-- 
2.39.2