-the next frame will resume playing.
-
-
-Integration with CasparCG
--------------------------
-
-`CasparCG <http://casparcg.com/>`_ is an open-source broadcast graphics system,
-originally written by SVT, the Swedish public TV broadcaster. (In this
-context, “graphics” refers mostly to synthetically generated content,
-such as the score box in a sporting match.) With some coaxing, it is possible
-to integrate CasparCG with Nageru, so that Nageru does the mixing of the video
-sources and CasparCG generates graphics—CasparCG can also work as a standalone
-mixer independently of Nageru, but this will not be covered here.
-
-The most straightforward use of CasparCG is to use it to generate an overlay,
-which is then taken in as a video input in Nageru. To achieve this, the simplest
-solution is to send raw BGRA data over a UNIX domain socket [#rawvideo]_, which involves
-adding an FFmpeg output to your CasparCG configuration. This can either be done
-by modifying your casparcg.config to open up a socket in your home directory
-(you shouldn't use /tmp on a multi-user machine, or you may open up a security
-hole)::
-
- <consumers>
- <ffmpeg>
- <device>1</device>
- <path>unix:///home/user/caspar.sock</path>
- <args>-c:v rawvideo -vf format=pix_fmts=bgra -f nut -listen 1</args>
- </ffmpeg>
- <system-audio></system-audio>
- </consumers>
-
-or by setting it up on-the-fly through AMCP::
-
- add 1 stream unix:///home/user/caspar.sock -c:v rawvideo -vf format=pix_fmts=bgra -f nut -listen 1
-
-You can then use *unix:///home/user/caspar.sock* as a video input to Nageru on the
-same machine, and then use e.g. *OverlayEffect* to overlay it on your video chains.
-(Remember to set up the video as BGRA and not Y'CbCr, so that you get alpha.)
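-
-With the usual theme API, the Nageru side could be sketched roughly like this
-(a sketch only; the variable names are illustrative)::
-
- local caspar = VideoInput.new("unix:///home/user/caspar.sock", Nageru.VIDEO_FORMAT_BGRA)
-
- local chain = EffectChain.new(16, 9)
- local camera = chain:add_live_input(false, false)  -- The regular video signal.
- local graphics = chain:add_video_input(caspar, false)  -- CasparCG output, with alpha.
- chain:add_effect(OverlayEffect.new(), camera, graphics)
- chain:finalize(true)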
-
-CasparCG and Nageru do not run with synchronized clocks, so you will not get
-frame-perfect synchronization between graphics and video; however, this is normal
-even in a hardware chain, and most overlay graphics do not need to be timed
-to the input more closely than a few frames. However, note that it is possible
-that Nageru lags behind CasparCG's graphics production after a while (typically
-on the order of hours) due to such clock skew; the easiest solution to this is
-just to use *change_rate(2.0)* or similar on the input, so that Nageru will consume
-CasparCG's frames as quickly as they come in without waiting any further.
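-
-For instance, in the theme (a sketch; *caspar* stands for the VideoInput you
-created for the socket)::
-
- local caspar = VideoInput.new("unix:///home/user/caspar.sock", Nageru.VIDEO_FORMAT_BGRA)
- caspar:change_rate(2.0)  -- Consume frames as quickly as they arrive.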
-
-There's also one usability stumbling block: *CasparCG's FFmpeg
-streams are one-shot, and so are FFmpeg's UNIX domain sockets.* This means that,
-in practice, if Nageru ever disconnects from CasparCG for any reason, the socket
-is “used up”, and you will need to recreate it somehow (e.g., by restarting CasparCG).
-Also note that the old socket still lingers in place even after being useless,
-so you will *first* need to remove it, and CasparCG does not do this for you.
-The simplest way to deal with this is probably to have a wrapper script of some
-sort that orchestrates Nageru, CasparCG and your client for you, so that everything
-is taken up and down in the right order; it may be cumbersome and require some
-tweaking for your specific case, but it's not a challenging problem per se.
-
-Nageru does not have functionality to work as a CasparCG client in itself,
-nor can your theme present a very detailed UI to do so. However, since
-the theme is written in unrestricted Lua, you can use e.g.
-`lua-http <https://github.com/daurnimator/lua-http>`_ to send signals
-to your backend (assuming it speaks HTTP) on e.g. transition changes.
-With some creativity, this allows you to at least bring some loose coupling
-between the two.
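-
-For instance, something like this could notify an external backend whenever a
-transition is clicked (a sketch; the URL and endpoint are made up for
-illustration)::
-
- local http_request = require "http.request"
-
- function transition_clicked(num, t)
-   -- Fire-and-forget notification to a hypothetical graphics backend.
-   local req = http_request.new_from_uri("http://127.0.0.1:8080/transition/" .. num)
-   req.headers:upsert(":method", "POST")
-   req:go(1.0)  -- One-second timeout; errors are ignored here.
- end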
-
-In general, the integration between Nageru and CasparCG leaves a bit to be
-desired, and the combination of CasparCG and Nageru will require
-a beefier machine than Nageru alone. However, it also provides a much richer
-environment for graphics, so for many use cases, it will be worth it.
-
-.. [#rawvideo] Good video codecs that support alpha are rare, so as long as CasparCG
- and Nageru are running on the same machine, raw video is probably your
- best bet. Even so, note that FFmpeg's muxers are not really made for
- such large amounts of data as raw HD video produces, so there will
- be some performance overhead on both sides of the socket.
+the next frame will resume playing. Be aware that changing the rate may
+make the audio behave unpredictably; there are no attempts to do time
+stretching or change the pitch accordingly.
+
+Finally, if you want to forcibly abort the playing of a video,
+even one that is blocking on I/O, you can use::
+
+ video:disconnect()
+
+This is particularly useful when dealing with network streams, as FFmpeg does not
+always properly detect if the connection has been lost. See :ref:`menus`
+for a way to expose such functionality to the operator.
+
+.. _subtitle-ingest:
+
+Ingesting subtitles
+-------------------
+
+Video streams can contain separate subtitle tracks. This is particularly useful when using Nageru
+and Futatabi together (see :ref:`talkback`).
+
+To get the last subtitle given before the current video frame, call
+*signals:get_last_subtitle(n)* from *get_chain*, where n is the signal number
+of your video signal. It will return either nil, if there has been no
+subtitle yet, or else the raw subtitle. Note that if the video frame and
+the subtitle occur on the exact same timestamp, and the video frame
+is muxed before the subtitle packet, the subtitle will not make it in time.
+(Futatabi puts the subtitle slightly ahead of the video frame to avoid this.)
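+
+In the theme, this could look something like the following (a sketch; the
+signal number 0 and the handling are illustrative)::
+
+ function get_chain(num, t, width, height, signals)
+   local subtitle = signals:get_last_subtitle(0)
+   if subtitle ~= nil then
+     -- Use the raw subtitle text, e.g. to pick a different chain.
+   end
+   -- Set up and return the chain as usual here.
+ end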