+<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html401/loose.dtd">
+<html>
+<!-- Created on July 23, 2011 by texi2html 1.82
+texi2html was written by:
+ Lionel Cons <Lionel.Cons@cern.ch> (original author)
+ Karl Berry <karl@freefriends.org>
+ Olaf Bachmann <obachman@mathematik.uni-kl.de>
+ and many others.
+Maintained by: Many creative people.
+Send bugs and suggestions to <texi2html-bug@nongnu.org>
+-->
+<head>
+<title>ffmpeg Documentation</title>
+
+<meta name="description" content="ffmpeg Documentation">
+<meta name="keywords" content="ffmpeg Documentation">
+<meta name="resource-type" content="document">
+<meta name="distribution" content="global">
+<meta name="Generator" content="texi2html 1.82">
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
+<style type="text/css">
+<!--
+a.summary-letter {text-decoration: none}
+blockquote.smallquotation {font-size: smaller}
+pre.display {font-family: serif}
+pre.format {font-family: serif}
+pre.menu-comment {font-family: serif}
+pre.menu-preformatted {font-family: serif}
+pre.smalldisplay {font-family: serif; font-size: smaller}
+pre.smallexample {font-size: smaller}
+pre.smallformat {font-family: serif; font-size: smaller}
+pre.smalllisp {font-size: smaller}
+span.roman {font-family:serif; font-weight:normal;}
+span.sansserif {font-family:sans-serif; font-weight:normal;}
+ul.toc {list-style: none}
+-->
+</style>
+
+
+</head>
+
+<body lang="en" bgcolor="#FFFFFF" text="#000000" link="#0000FF" vlink="#800080" alink="#FF0000">
+
+<a name="SEC_Top"></a>
+<h1 class="settitle">ffmpeg Documentation</h1>
+
+<a name="SEC_Contents"></a>
+<h1>Table of Contents</h1>
+<div class="contents">
+
+<ul class="toc">
+ <li><a name="toc-Synopsis" href="#Synopsis">1. Synopsis</a></li>
+ <li><a name="toc-Description" href="#Description">2. Description</a></li>
+ <li><a name="toc-Options-5" href="#Options-5">3. Options</a>
+ <ul class="toc">
+ <li><a name="toc-Generic-options" href="#Generic-options">3.1 Generic options</a></li>
+ <li><a name="toc-Main-options" href="#Main-options">3.2 Main options</a></li>
+ <li><a name="toc-Video-Options" href="#Video-Options">3.3 Video Options</a></li>
+ <li><a name="toc-Advanced-Video-Options" href="#Advanced-Video-Options">3.4 Advanced Video Options</a></li>
+ <li><a name="toc-Audio-Options" href="#Audio-Options">3.5 Audio Options</a></li>
+ <li><a name="toc-Advanced-Audio-options_003a" href="#Advanced-Audio-options_003a">3.6 Advanced Audio options:</a></li>
+ <li><a name="toc-Subtitle-options_003a" href="#Subtitle-options_003a">3.7 Subtitle options:</a></li>
+ <li><a name="toc-Audio_002fVideo-grab-options" href="#Audio_002fVideo-grab-options">3.8 Audio/Video grab options</a></li>
+ <li><a name="toc-Advanced-options" href="#Advanced-options">3.9 Advanced options</a></li>
+ <li><a name="toc-Preset-files" href="#Preset-files">3.10 Preset files</a></li>
+ </ul></li>
+ <li><a name="toc-Tips" href="#Tips">4. Tips</a></li>
+ <li><a name="toc-Examples" href="#Examples">5. Examples</a>
+ <ul class="toc">
+ <li><a name="toc-Video-and-Audio-grabbing" href="#Video-and-Audio-grabbing">5.1 Video and Audio grabbing</a></li>
+ <li><a name="toc-X11-grabbing" href="#X11-grabbing">5.2 X11 grabbing</a></li>
+ <li><a name="toc-Video-and-Audio-file-format-conversion" href="#Video-and-Audio-file-format-conversion">5.3 Video and Audio file format conversion</a></li>
+ </ul></li>
+ <li><a name="toc-Expression-Evaluation" href="#Expression-Evaluation">6. Expression Evaluation</a></li>
+ <li><a name="toc-Decoders" href="#Decoders">7. Decoders</a></li>
+ <li><a name="toc-Video-Decoders" href="#Video-Decoders">8. Video Decoders</a>
+ <ul class="toc">
+ <li><a name="toc-rawvideo" href="#rawvideo">8.1 rawvideo</a>
+ <ul class="toc">
+ <li><a name="toc-Options-2" href="#Options-2">8.1.1 Options</a></li>
+ </ul>
+</li>
+ </ul></li>
+ <li><a name="toc-Encoders" href="#Encoders">9. Encoders</a></li>
+ <li><a name="toc-Audio-Encoders" href="#Audio-Encoders">10. Audio Encoders</a>
+ <ul class="toc">
+ <li><a name="toc-ac3-and-ac3_005ffixed" href="#ac3-and-ac3_005ffixed">10.1 ac3 and ac3_fixed</a>
+ <ul class="toc">
+ <li><a name="toc-AC_002d3-Metadata" href="#AC_002d3-Metadata">10.1.1 AC-3 Metadata</a>
+ <ul class="toc">
+ <li><a name="toc-Metadata-Control-Options" href="#Metadata-Control-Options">10.1.1.1 Metadata Control Options</a></li>
+ <li><a name="toc-Downmix-Levels" href="#Downmix-Levels">10.1.1.2 Downmix Levels</a></li>
+ <li><a name="toc-Audio-Production-Information" href="#Audio-Production-Information">10.1.1.3 Audio Production Information</a></li>
+ <li><a name="toc-Other-Metadata-Options" href="#Other-Metadata-Options">10.1.1.4 Other Metadata Options</a></li>
+ </ul></li>
+ <li><a name="toc-Extended-Bitstream-Information" href="#Extended-Bitstream-Information">10.1.2 Extended Bitstream Information</a>
+ <ul class="toc">
+ <li><a name="toc-Extended-Bitstream-Information-_002d-Part-1" href="#Extended-Bitstream-Information-_002d-Part-1">10.1.2.1 Extended Bitstream Information - Part 1</a></li>
+ <li><a name="toc-Extended-Bitstream-Information-_002d-Part-2" href="#Extended-Bitstream-Information-_002d-Part-2">10.1.2.2 Extended Bitstream Information - Part 2</a></li>
+ </ul></li>
+ <li><a name="toc-Other-AC_002d3-Encoding-Options" href="#Other-AC_002d3-Encoding-Options">10.1.3 Other AC-3 Encoding Options</a></li>
+ <li><a name="toc-Floating_002dPoint_002dOnly-AC_002d3-Encoding-Options" href="#Floating_002dPoint_002dOnly-AC_002d3-Encoding-Options">10.1.4 Floating-Point-Only AC-3 Encoding Options</a></li>
+ </ul>
+</li>
+ </ul></li>
+ <li><a name="toc-Video-Encoders" href="#Video-Encoders">11. Video Encoders</a>
+ <ul class="toc">
+ <li><a name="toc-libvpx" href="#libvpx">11.1 libvpx</a>
+ <ul class="toc">
+ <li><a name="toc-Options-3" href="#Options-3">11.1.1 Options</a></li>
+ </ul></li>
+ <li><a name="toc-libx264" href="#libx264">11.2 libx264</a>
+ <ul class="toc">
+ <li><a name="toc-Options-4" href="#Options-4">11.2.1 Options</a></li>
+ </ul>
+</li>
+ </ul></li>
+ <li><a name="toc-Demuxers" href="#Demuxers">12. Demuxers</a>
+ <ul class="toc">
+ <li><a name="toc-image2" href="#image2">12.1 image2</a></li>
+ <li><a name="toc-applehttp" href="#applehttp">12.2 applehttp</a></li>
+ </ul></li>
+ <li><a name="toc-Muxers" href="#Muxers">13. Muxers</a>
+ <ul class="toc">
+ <li><a name="toc-crc-1" href="#crc-1">13.1 crc</a></li>
+ <li><a name="toc-framecrc-1" href="#framecrc-1">13.2 framecrc</a></li>
+ <li><a name="toc-image2-1" href="#image2-1">13.3 image2</a></li>
+ <li><a name="toc-mpegts" href="#mpegts">13.4 mpegts</a></li>
+ <li><a name="toc-null" href="#null">13.5 null</a></li>
+ <li><a name="toc-matroska" href="#matroska">13.6 matroska</a></li>
+ </ul></li>
+ <li><a name="toc-Input-Devices" href="#Input-Devices">14. Input Devices</a>
+ <ul class="toc">
+ <li><a name="toc-alsa-1" href="#alsa-1">14.1 alsa</a></li>
+ <li><a name="toc-bktr" href="#bktr">14.2 bktr</a></li>
+ <li><a name="toc-dv1394" href="#dv1394">14.3 dv1394</a></li>
+ <li><a name="toc-fbdev" href="#fbdev">14.4 fbdev</a></li>
+ <li><a name="toc-jack" href="#jack">14.5 jack</a></li>
+ <li><a name="toc-libdc1394" href="#libdc1394">14.6 libdc1394</a></li>
+ <li><a name="toc-openal" href="#openal">14.7 openal</a>
+ <ul class="toc">
+ <li><a name="toc-Options-1" href="#Options-1">14.7.1 Options</a></li>
+ <li><a name="toc-Examples-2" href="#Examples-2">14.7.2 Examples</a></li>
+ </ul></li>
+ <li><a name="toc-oss" href="#oss">14.8 oss</a></li>
+ <li><a name="toc-sndio-1" href="#sndio-1">14.9 sndio</a></li>
+ <li><a name="toc-video4linux-and-video4linux2" href="#video4linux-and-video4linux2">14.10 video4linux and video4linux2</a></li>
+ <li><a name="toc-vfwcap" href="#vfwcap">14.11 vfwcap</a></li>
+ <li><a name="toc-x11grab" href="#x11grab">14.12 x11grab</a></li>
+ </ul></li>
+ <li><a name="toc-Output-Devices" href="#Output-Devices">15. Output Devices</a>
+ <ul class="toc">
+ <li><a name="toc-alsa" href="#alsa">15.1 alsa</a></li>
+ <li><a name="toc-oss-1" href="#oss-1">15.2 oss</a></li>
+ <li><a name="toc-sdl" href="#sdl">15.3 sdl</a>
+ <ul class="toc">
+ <li><a name="toc-Options" href="#Options">15.3.1 Options</a></li>
+ <li><a name="toc-Examples-1" href="#Examples-1">15.3.2 Examples</a></li>
+ </ul></li>
+ <li><a name="toc-sndio" href="#sndio">15.4 sndio</a></li>
+ </ul></li>
+ <li><a name="toc-Protocols" href="#Protocols">16. Protocols</a>
+ <ul class="toc">
+ <li><a name="toc-applehttp-1" href="#applehttp-1">16.1 applehttp</a></li>
+ <li><a name="toc-concat" href="#concat">16.2 concat</a></li>
+ <li><a name="toc-file" href="#file">16.3 file</a></li>
+ <li><a name="toc-gopher" href="#gopher">16.4 gopher</a></li>
+ <li><a name="toc-http" href="#http">16.5 http</a></li>
+ <li><a name="toc-mmst" href="#mmst">16.6 mmst</a></li>
+ <li><a name="toc-mmsh" href="#mmsh">16.7 mmsh</a></li>
+ <li><a name="toc-md5" href="#md5">16.8 md5</a></li>
+ <li><a name="toc-pipe" href="#pipe">16.9 pipe</a></li>
+ <li><a name="toc-rtmp" href="#rtmp">16.10 rtmp</a></li>
+ <li><a name="toc-rtmp_002c-rtmpe_002c-rtmps_002c-rtmpt_002c-rtmpte" href="#rtmp_002c-rtmpe_002c-rtmps_002c-rtmpt_002c-rtmpte">16.11 rtmp, rtmpe, rtmps, rtmpt, rtmpte</a></li>
+ <li><a name="toc-rtp" href="#rtp">16.12 rtp</a></li>
+ <li><a name="toc-rtsp" href="#rtsp">16.13 rtsp</a></li>
+ <li><a name="toc-sap" href="#sap">16.14 sap</a>
+ <ul class="toc">
+ <li><a name="toc-Muxer" href="#Muxer">16.14.1 Muxer</a></li>
+ <li><a name="toc-Demuxer" href="#Demuxer">16.14.2 Demuxer</a></li>
+ </ul></li>
+ <li><a name="toc-tcp" href="#tcp">16.15 tcp</a></li>
+ <li><a name="toc-udp" href="#udp">16.16 udp</a></li>
+ </ul></li>
+ <li><a name="toc-Bitstream-Filters" href="#Bitstream-Filters">17. Bitstream Filters</a>
+ <ul class="toc">
+ <li><a name="toc-aac_005fadtstoasc" href="#aac_005fadtstoasc">17.1 aac_adtstoasc</a></li>
+ <li><a name="toc-chomp" href="#chomp">17.2 chomp</a></li>
+ <li><a name="toc-dump_005fextradata" href="#dump_005fextradata">17.3 dump_extradata</a></li>
+ <li><a name="toc-h264_005fmp4toannexb" href="#h264_005fmp4toannexb">17.4 h264_mp4toannexb</a></li>
+ <li><a name="toc-imx_005fdump_005fheader" href="#imx_005fdump_005fheader">17.5 imx_dump_header</a></li>
+ <li><a name="toc-mjpeg2jpeg" href="#mjpeg2jpeg">17.6 mjpeg2jpeg</a></li>
+ <li><a name="toc-mjpega_005fdump_005fheader" href="#mjpega_005fdump_005fheader">17.7 mjpega_dump_header</a></li>
+ <li><a name="toc-movsub" href="#movsub">17.8 movsub</a></li>
+ <li><a name="toc-mp3_005fheader_005fcompress" href="#mp3_005fheader_005fcompress">17.9 mp3_header_compress</a></li>
+ <li><a name="toc-mp3_005fheader_005fdecompress" href="#mp3_005fheader_005fdecompress">17.10 mp3_header_decompress</a></li>
+ <li><a name="toc-noise" href="#noise">17.11 noise</a></li>
+ <li><a name="toc-remove_005fextradata" href="#remove_005fextradata">17.12 remove_extradata</a></li>
+ </ul></li>
+ <li><a name="toc-Filtergraph-description" href="#Filtergraph-description">18. Filtergraph description</a>
+ <ul class="toc">
+ <li><a name="toc-Filtergraph-syntax" href="#Filtergraph-syntax">18.1 Filtergraph syntax</a></li>
+ </ul></li>
+ <li><a name="toc-Audio-Filters" href="#Audio-Filters">19. Audio Filters</a>
+ <ul class="toc">
+ <li><a name="toc-anull" href="#anull">19.1 anull</a></li>
+ </ul></li>
+ <li><a name="toc-Audio-Sources" href="#Audio-Sources">20. Audio Sources</a>
+ <ul class="toc">
+ <li><a name="toc-anullsrc" href="#anullsrc">20.1 anullsrc</a></li>
+ </ul></li>
+ <li><a name="toc-Audio-Sinks" href="#Audio-Sinks">21. Audio Sinks</a>
+ <ul class="toc">
+ <li><a name="toc-anullsink" href="#anullsink">21.1 anullsink</a></li>
+ </ul></li>
+ <li><a name="toc-Video-Filters" href="#Video-Filters">22. Video Filters</a>
+ <ul class="toc">
+ <li><a name="toc-blackframe" href="#blackframe">22.1 blackframe</a></li>
+ <li><a name="toc-boxblur" href="#boxblur">22.2 boxblur</a></li>
+ <li><a name="toc-copy" href="#copy">22.3 copy</a></li>
+ <li><a name="toc-crop" href="#crop">22.4 crop</a></li>
+ <li><a name="toc-cropdetect" href="#cropdetect">22.5 cropdetect</a></li>
+ <li><a name="toc-drawbox" href="#drawbox">22.6 drawbox</a></li>
+ <li><a name="toc-drawtext" href="#drawtext">22.7 drawtext</a></li>
+ <li><a name="toc-fade" href="#fade">22.8 fade</a></li>
+ <li><a name="toc-fieldorder" href="#fieldorder">22.9 fieldorder</a></li>
+ <li><a name="toc-fifo" href="#fifo">22.10 fifo</a></li>
+ <li><a name="toc-format" href="#format">22.11 format</a></li>
+ <li><a name="toc-frei0r-1" href="#frei0r-1">22.12 frei0r</a></li>
+ <li><a name="toc-gradfun" href="#gradfun">22.13 gradfun</a></li>
+ <li><a name="toc-hflip" href="#hflip">22.14 hflip</a></li>
+ <li><a name="toc-hqdn3d" href="#hqdn3d">22.15 hqdn3d</a></li>
+ <li><a name="toc-lut_002c-lutrgb_002c-lutyuv" href="#lut_002c-lutrgb_002c-lutyuv">22.16 lut, lutrgb, lutyuv</a></li>
+ <li><a name="toc-mp" href="#mp">22.17 mp</a></li>
+ <li><a name="toc-negate" href="#negate">22.18 negate</a></li>
+ <li><a name="toc-noformat" href="#noformat">22.19 noformat</a></li>
+ <li><a name="toc-null-1" href="#null-1">22.20 null</a></li>
+ <li><a name="toc-ocv" href="#ocv">22.21 ocv</a>
+ <ul class="toc">
+ <li><a name="toc-dilate-1" href="#dilate-1">22.21.1 dilate</a></li>
+ <li><a name="toc-erode" href="#erode">22.21.2 erode</a></li>
+ <li><a name="toc-smooth" href="#smooth">22.21.3 smooth</a></li>
+ </ul></li>
+ <li><a name="toc-overlay" href="#overlay">22.22 overlay</a></li>
+ <li><a name="toc-pad" href="#pad">22.23 pad</a></li>
+ <li><a name="toc-pixdesctest" href="#pixdesctest">22.24 pixdesctest</a></li>
+ <li><a name="toc-scale" href="#scale">22.25 scale</a></li>
+ <li><a name="toc-select" href="#select">22.26 select</a></li>
+ <li><a name="toc-setdar-1" href="#setdar-1">22.27 setdar</a></li>
+ <li><a name="toc-setpts" href="#setpts">22.28 setpts</a></li>
+ <li><a name="toc-setsar-1" href="#setsar-1">22.29 setsar</a></li>
+ <li><a name="toc-settb" href="#settb">22.30 settb</a></li>
+ <li><a name="toc-showinfo" href="#showinfo">22.31 showinfo</a></li>
+ <li><a name="toc-slicify" href="#slicify">22.32 slicify</a></li>
+ <li><a name="toc-split" href="#split">22.33 split</a></li>
+ <li><a name="toc-transpose" href="#transpose">22.34 transpose</a></li>
+ <li><a name="toc-unsharp" href="#unsharp">22.35 unsharp</a></li>
+ <li><a name="toc-vflip" href="#vflip">22.36 vflip</a></li>
+ <li><a name="toc-yadif" href="#yadif">22.37 yadif</a></li>
+ </ul></li>
+ <li><a name="toc-Video-Sources" href="#Video-Sources">23. Video Sources</a>
+ <ul class="toc">
+ <li><a name="toc-buffer" href="#buffer">23.1 buffer</a></li>
+ <li><a name="toc-color" href="#color">23.2 color</a></li>
+ <li><a name="toc-movie" href="#movie">23.3 movie</a></li>
+ <li><a name="toc-nullsrc" href="#nullsrc">23.4 nullsrc</a></li>
+ <li><a name="toc-frei0r_005fsrc" href="#frei0r_005fsrc">23.5 frei0r_src</a></li>
+ <li><a name="toc-rgbtestsrc_002c-testsrc" href="#rgbtestsrc_002c-testsrc">23.6 rgbtestsrc, testsrc</a></li>
+ </ul></li>
+ <li><a name="toc-Video-Sinks" href="#Video-Sinks">24. Video Sinks</a>
+ <ul class="toc">
+ <li><a name="toc-buffersink" href="#buffersink">24.1 buffersink</a></li>
+ <li><a name="toc-nullsink" href="#nullsink">24.2 nullsink</a></li>
+ </ul></li>
+ <li><a name="toc-Metadata" href="#Metadata">25. Metadata</a></li>
+</ul>
+</div>
+
+<hr size="1">
+<a name="Synopsis"></a>
+<h1 class="chapter"><a href="ffmpeg.html#toc-Synopsis">1. Synopsis</a></h1>
+
+<p>The generic syntax is:
+</p>
+<table><tr><td> </td><td><pre class="example">ffmpeg [[infile options][‘<samp>-i</samp>’ <var>infile</var>]]... {[outfile options] <var>outfile</var>}...
+</pre></td></tr></table>
+
+<a name="Description"></a>
+<h1 class="chapter"><a href="ffmpeg.html#toc-Description">2. Description</a></h1>
+
+<p>ffmpeg is a very fast video and audio converter that can also grab from
+a live audio/video source. It can also convert between arbitrary sample
+rates and resize video on the fly with a high quality polyphase filter.
+</p>
+<p>The command line interface is designed to be intuitive, in the sense
+that ffmpeg tries to figure out all parameters that can possibly be
+derived automatically. You usually only have to specify the target
+bitrate you want.
+</p>
+<p>As a general rule, options are applied to the next specified
+file. Therefore, order is important, and you can have the same
+option on the command line multiple times. Each occurrence is
+then applied to the next input or output file.
+</p>
+<ul>
+<li>
+To set the video bitrate of the output file to 64kbit/s:
+<table><tr><td> </td><td><pre class="example">ffmpeg -i input.avi -b 64k output.avi
+</pre></td></tr></table>
+
+</li><li>
+To force the frame rate of the output file to 24 fps:
+<table><tr><td> </td><td><pre class="example">ffmpeg -i input.avi -r 24 output.avi
+</pre></td></tr></table>
+
+</li><li>
+To force the frame rate of the input file (valid for raw formats only)
+to 1 fps and the frame rate of the output file to 24 fps:
+<table><tr><td> </td><td><pre class="example">ffmpeg -r 1 -i input.m2v -r 24 output.avi
+</pre></td></tr></table>
+</li></ul>
+
+<p>The format option (‘<samp>-f</samp>’) may be needed for raw input files.
+</p>
+<p>By default ffmpeg tries to convert as losslessly as possible: it
+uses the same audio and video parameters for the outputs as the ones
+specified for the inputs.
+</p>
+
+<a name="Options-5"></a>
+<h1 class="chapter"><a href="ffmpeg.html#toc-Options-5">3. Options</a></h1>
+
+<p>All the numerical options, unless specified otherwise, accept as input
+a string representing a number, which may contain one of the
+International System number postfixes, for example ’K’, ’M’, ’G’.
+If ’i’ is appended after the postfix, powers of 2 are used instead of
+powers of 10. The ’B’ postfix multiplies the value by 8, and can be
+appended after another postfix or used alone. This allows using, for
+example, ’KB’, ’MiB’, ’G’ and ’B’ as postfixes.
+</p>
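+<p>For instance, the following sketch (with hypothetical file names)
+uses the ’M’ and ’MiB’ postfixes to request a 1 Mbit/s video bitrate
+and a 10 MiB output file size limit:
+</p>
+<table><tr><td> </td><td><pre class="example">ffmpeg -i input.avi -b 1M -fs 10MiB output.avi
+</pre></td></tr></table>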
+<p>Options which do not take arguments are boolean options, and set the
+corresponding value to true. They can be set to false by prefixing
+the option name with "no": for example, using "-nofoo" on the
+command line will set the boolean option named "foo" to false.
+</p>
+<a name="Generic-options"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-Generic-options">3.1 Generic options</a></h2>
+
+<p>These options are shared amongst the ff* tools.
+</p>
+<dl compact="compact">
+<dt> ‘<samp>-L</samp>’</dt>
+<dd><p>Show license.
+</p>
+</dd>
+<dt> ‘<samp>-h, -?, -help, --help</samp>’</dt>
+<dd><p>Show help.
+</p>
+</dd>
+<dt> ‘<samp>-version</samp>’</dt>
+<dd><p>Show version.
+</p>
+</dd>
+<dt> ‘<samp>-formats</samp>’</dt>
+<dd><p>Show available formats.
+</p>
+<p>The fields preceding the format names have the following meanings:
+</p><dl compact="compact">
+<dt> ‘<samp>D</samp>’</dt>
+<dd><p>Decoding available
+</p></dd>
+<dt> ‘<samp>E</samp>’</dt>
+<dd><p>Encoding available
+</p></dd>
+</dl>
+
+</dd>
+<dt> ‘<samp>-codecs</samp>’</dt>
+<dd><p>Show available codecs.
+</p>
+<p>The fields preceding the codec names have the following meanings:
+</p><dl compact="compact">
+<dt> ‘<samp>D</samp>’</dt>
+<dd><p>Decoding available
+</p></dd>
+<dt> ‘<samp>E</samp>’</dt>
+<dd><p>Encoding available
+</p></dd>
+<dt> ‘<samp>V/A/S</samp>’</dt>
+<dd><p>Video/audio/subtitle codec
+</p></dd>
+<dt> ‘<samp>S</samp>’</dt>
+<dd><p>Codec supports slices
+</p></dd>
+<dt> ‘<samp>D</samp>’</dt>
+<dd><p>Codec supports direct rendering
+</p></dd>
+<dt> ‘<samp>T</samp>’</dt>
+<dd><p>Codec can handle input truncated at random locations instead of only at frame boundaries
+</p></dd>
+</dl>
+
+</dd>
+<dt> ‘<samp>-bsfs</samp>’</dt>
+<dd><p>Show available bitstream filters.
+</p>
+</dd>
+<dt> ‘<samp>-protocols</samp>’</dt>
+<dd><p>Show available protocols.
+</p>
+</dd>
+<dt> ‘<samp>-filters</samp>’</dt>
+<dd><p>Show available libavfilter filters.
+</p>
+</dd>
+<dt> ‘<samp>-pix_fmts</samp>’</dt>
+<dd><p>Show available pixel formats.
+</p>
+</dd>
+<dt> ‘<samp>-loglevel <var>loglevel</var></samp>’</dt>
+<dd><p>Set the logging level used by the library.
+<var>loglevel</var> is a number or a string containing one of the following values:
+</p><dl compact="compact">
+<dt> ‘<samp>quiet</samp>’</dt>
+<dt> ‘<samp>panic</samp>’</dt>
+<dt> ‘<samp>fatal</samp>’</dt>
+<dt> ‘<samp>error</samp>’</dt>
+<dt> ‘<samp>warning</samp>’</dt>
+<dt> ‘<samp>info</samp>’</dt>
+<dt> ‘<samp>verbose</samp>’</dt>
+<dt> ‘<samp>debug</samp>’</dt>
+</dl>
+
+<p>By default the program logs to stderr. If coloring is supported by the
+terminal, colors are used to mark errors and warnings. Log coloring
+can be disabled by setting the environment variable
+<code>FFMPEG_FORCE_NOCOLOR</code> or <code>NO_COLOR</code>, or can be forced by setting
+the environment variable <code>FFMPEG_FORCE_COLOR</code>.
+The use of the environment variable <code>NO_COLOR</code> is deprecated and
+will be dropped in a following FFmpeg version.
+</p>
+</dd>
+</dl>
+
+<a name="Main-options"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-Main-options">3.2 Main options</a></h2>
+
+<dl compact="compact">
+<dt> ‘<samp>-f <var>fmt</var></samp>’</dt>
+<dd><p>Force format.
+</p>
+</dd>
+<dt> ‘<samp>-i <var>filename</var></samp>’</dt>
+<dd><p>Input file name.
+</p>
+</dd>
+<dt> ‘<samp>-y</samp>’</dt>
+<dd><p>Overwrite output files.
+</p>
+</dd>
+<dt> ‘<samp>-t <var>duration</var></samp>’</dt>
+<dd><p>Restrict the transcoded/captured video sequence
+to the duration specified in seconds.
+<code>hh:mm:ss[.xxx]</code> syntax is also supported.
+</p>
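+<p>For example, either of the following (hypothetical file names) stops
+writing the output after 30 seconds:
+</p>
+<table><tr><td> </td><td><pre class="example">ffmpeg -i input.avi -t 30 output.avi
+ffmpeg -i input.avi -t 00:00:30 output.avi
+</pre></td></tr></table>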
+</dd>
+<dt> ‘<samp>-fs <var>limit_size</var></samp>’</dt>
+<dd><p>Set the file size limit.
+</p>
+</dd>
+<dt> ‘<samp>-ss <var>position</var></samp>’</dt>
+<dd><p>Seek to given time position in seconds.
+<code>hh:mm:ss[.xxx]</code> syntax is also supported.
+</p>
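+<p>For example, the following sketch (hypothetical file names) seeks 90
+seconds into the input before transcoding:
+</p>
+<table><tr><td> </td><td><pre class="example">ffmpeg -i input.avi -ss 00:01:30 output.avi
+</pre></td></tr></table>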
+</dd>
+<dt> ‘<samp>-itsoffset <var>offset</var></samp>’</dt>
+<dd><p>Set the input time offset in seconds.
+<code>[-]hh:mm:ss[.xxx]</code> syntax is also supported.
+This option affects all the input files that follow it.
+The offset is added to the timestamps of the input files.
+Specifying a positive offset means that the corresponding
+streams are delayed by ’offset’ seconds.
+</p>
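+<p>As an illustration with hypothetical file names, the following delays
+the timestamps of the second input by 5 seconds (the option only
+affects the inputs that follow it):
+</p>
+<table><tr><td> </td><td><pre class="example">ffmpeg -i video.avi -itsoffset 5 -i audio.wav output.avi
+</pre></td></tr></table>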
+</dd>
+<dt> ‘<samp>-timestamp <var>time</var></samp>’</dt>
+<dd><p>Set the recording timestamp in the container.
+The syntax for <var>time</var> is:
+</p><table><tr><td> </td><td><pre class="example">now|([(YYYY-MM-DD|YYYYMMDD)[T|t| ]]((HH[:MM[:SS[.m...]]])|(HH[MM[SS[.m...]]]))[Z|z])
+</pre></td></tr></table>
+<p>If the value is "now" it takes the current time.
+Time is local time unless ’Z’ or ’z’ is appended, in which case it is
+interpreted as UTC.
+If the year-month-day part is not specified it takes the current
+year-month-day.
+</p>
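+<p>For example, assuming a hypothetical output file, the following sets a
+recording timestamp interpreted as UTC:
+</p>
+<table><tr><td> </td><td><pre class="example">ffmpeg -i input.avi -timestamp 2011-07-23T10:00:00Z output.avi
+</pre></td></tr></table>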
+</dd>
+<dt> ‘<samp>-metadata <var>key</var>=<var>value</var></samp>’</dt>
+<dd><p>Set a metadata key/value pair.
+</p>
+<p>For example, for setting the title in the output file:
+</p><table><tr><td> </td><td><pre class="example">ffmpeg -i in.avi -metadata title="my title" out.flv
+</pre></td></tr></table>
+
+</dd>
+<dt> ‘<samp>-v <var>number</var></samp>’</dt>
+<dd><p>Set the logging verbosity level.
+</p>
+</dd>
+<dt> ‘<samp>-target <var>type</var></samp>’</dt>
+<dd><p>Specify target file type ("vcd", "svcd", "dvd", "dv", "dv50", "pal-vcd",
+"ntsc-svcd", ... ). All the format options (bitrate, codecs,
+buffer sizes) are then set automatically. You can just type:
+</p>
+<table><tr><td> </td><td><pre class="example">ffmpeg -i myfile.avi -target vcd /tmp/vcd.mpg
+</pre></td></tr></table>
+
+<p>Nevertheless you can specify additional options as long as you know
+they do not conflict with the standard, as in:
+</p>
+<table><tr><td> </td><td><pre class="example">ffmpeg -i myfile.avi -target vcd -bf 2 /tmp/vcd.mpg
+</pre></td></tr></table>
+
+</dd>
+<dt> ‘<samp>-dframes <var>number</var></samp>’</dt>
+<dd><p>Set the number of data frames to record.
+</p>
+</dd>
+<dt> ‘<samp>-scodec <var>codec</var></samp>’</dt>
+<dd><p>Force subtitle codec (’copy’ to copy stream).
+</p>
+</dd>
+<dt> ‘<samp>-newsubtitle</samp>’</dt>
+<dd><p>Add a new subtitle stream to the current output stream.
+</p>
+</dd>
+<dt> ‘<samp>-slang <var>code</var></samp>’</dt>
+<dd><p>Set the ISO 639 language code (3 letters) of the current subtitle stream.
+</p>
+</dd>
+</dl>
+
+<a name="Video-Options"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-Video-Options">3.3 Video Options</a></h2>
+
+<dl compact="compact">
+<dt> ‘<samp>-vframes <var>number</var></samp>’</dt>
+<dd><p>Set the number of video frames to record.
+</p></dd>
+<dt> ‘<samp>-r <var>fps</var></samp>’</dt>
+<dd><p>Set frame rate (Hz value, fraction or abbreviation), (default = 25).
+</p></dd>
+<dt> ‘<samp>-s <var>size</var></samp>’</dt>
+<dd><p>Set frame size. The format is ‘<samp>wxh</samp>’ (ffserver default = 160x128).
+There is no default for input streams; for output streams, it is set by
+default to the size of the source stream.
+The following abbreviations are recognized:
+</p><dl compact="compact">
+<dt> ‘<samp>sqcif</samp>’</dt>
+<dd><p>128x96
+</p></dd>
+<dt> ‘<samp>qcif</samp>’</dt>
+<dd><p>176x144
+</p></dd>
+<dt> ‘<samp>cif</samp>’</dt>
+<dd><p>352x288
+</p></dd>
+<dt> ‘<samp>4cif</samp>’</dt>
+<dd><p>704x576
+</p></dd>
+<dt> ‘<samp>16cif</samp>’</dt>
+<dd><p>1408x1152
+</p></dd>
+<dt> ‘<samp>qqvga</samp>’</dt>
+<dd><p>160x120
+</p></dd>
+<dt> ‘<samp>qvga</samp>’</dt>
+<dd><p>320x240
+</p></dd>
+<dt> ‘<samp>vga</samp>’</dt>
+<dd><p>640x480
+</p></dd>
+<dt> ‘<samp>svga</samp>’</dt>
+<dd><p>800x600
+</p></dd>
+<dt> ‘<samp>xga</samp>’</dt>
+<dd><p>1024x768
+</p></dd>
+<dt> ‘<samp>uxga</samp>’</dt>
+<dd><p>1600x1200
+</p></dd>
+<dt> ‘<samp>qxga</samp>’</dt>
+<dd><p>2048x1536
+</p></dd>
+<dt> ‘<samp>sxga</samp>’</dt>
+<dd><p>1280x1024
+</p></dd>
+<dt> ‘<samp>qsxga</samp>’</dt>
+<dd><p>2560x2048
+</p></dd>
+<dt> ‘<samp>hsxga</samp>’</dt>
+<dd><p>5120x4096
+</p></dd>
+<dt> ‘<samp>wvga</samp>’</dt>
+<dd><p>852x480
+</p></dd>
+<dt> ‘<samp>wxga</samp>’</dt>
+<dd><p>1366x768
+</p></dd>
+<dt> ‘<samp>wsxga</samp>’</dt>
+<dd><p>1600x1024
+</p></dd>
+<dt> ‘<samp>wuxga</samp>’</dt>
+<dd><p>1920x1200
+</p></dd>
+<dt> ‘<samp>woxga</samp>’</dt>
+<dd><p>2560x1600
+</p></dd>
+<dt> ‘<samp>wqsxga</samp>’</dt>
+<dd><p>3200x2048
+</p></dd>
+<dt> ‘<samp>wquxga</samp>’</dt>
+<dd><p>3840x2400
+</p></dd>
+<dt> ‘<samp>whsxga</samp>’</dt>
+<dd><p>6400x4096
+</p></dd>
+<dt> ‘<samp>whuxga</samp>’</dt>
+<dd><p>7680x4800
+</p></dd>
+<dt> ‘<samp>cga</samp>’</dt>
+<dd><p>320x200
+</p></dd>
+<dt> ‘<samp>ega</samp>’</dt>
+<dd><p>640x350
+</p></dd>
+<dt> ‘<samp>hd480</samp>’</dt>
+<dd><p>852x480
+</p></dd>
+<dt> ‘<samp>hd720</samp>’</dt>
+<dd><p>1280x720
+</p></dd>
+<dt> ‘<samp>hd1080</samp>’</dt>
+<dd><p>1920x1080
+</p></dd>
+</dl>
+
+</dd>
+<dt> ‘<samp>-aspect <var>aspect</var></samp>’</dt>
+<dd><p>Set the video display aspect ratio specified by <var>aspect</var>.
+</p>
+<p><var>aspect</var> can be a floating point number string, or a string of the
+form <var>num</var>:<var>den</var>, where <var>num</var> and <var>den</var> are the
+numerator and denominator of the aspect ratio. For example "4:3",
+"16:9", "1.3333", and "1.7777" are valid argument values.
+</p>
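+<p>For example, to tag a hypothetical output file with a 16:9 display
+aspect ratio:
+</p>
+<table><tr><td> </td><td><pre class="example">ffmpeg -i input.avi -aspect 16:9 output.avi
+</pre></td></tr></table>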
+</dd>
+<dt> ‘<samp>-croptop <var>size</var></samp>’</dt>
+<dt> ‘<samp>-cropbottom <var>size</var></samp>’</dt>
+<dt> ‘<samp>-cropleft <var>size</var></samp>’</dt>
+<dt> ‘<samp>-cropright <var>size</var></samp>’</dt>
+<dd><p>All the crop options have been removed. Use -vf
+crop=width:height:x:y instead.
+</p>
+</dd>
+<dt> ‘<samp>-padtop <var>size</var></samp>’</dt>
+<dt> ‘<samp>-padbottom <var>size</var></samp>’</dt>
+<dt> ‘<samp>-padleft <var>size</var></samp>’</dt>
+<dt> ‘<samp>-padright <var>size</var></samp>’</dt>
+<dt> ‘<samp>-padcolor <var>hex_color</var></samp>’</dt>
+<dd><p>All the pad options have been removed. Use -vf
+pad=width:height:x:y:color instead.
+</p></dd>
+<dt> ‘<samp>-vn</samp>’</dt>
+<dd><p>Disable video recording.
+</p></dd>
+<dt> ‘<samp>-bt <var>tolerance</var></samp>’</dt>
+<dd><p>Set video bitrate tolerance (in bits, default 4000k).
+Has a minimum value of: (target_bitrate/target_framerate).
+In 1-pass mode, bitrate tolerance specifies how far ratecontrol is
+willing to deviate from the target average bitrate value. This is
+not related to min/max bitrate. Lowering tolerance too much has
+an adverse effect on quality.
+</p></dd>
+<dt> ‘<samp>-maxrate <var>bitrate</var></samp>’</dt>
+<dd><p>Set max video bitrate (in bit/s).
+Requires -bufsize to be set.
+</p></dd>
+<dt> ‘<samp>-minrate <var>bitrate</var></samp>’</dt>
+<dd><p>Set min video bitrate (in bit/s).
+Most useful in setting up a CBR encode:
+</p><table><tr><td> </td><td><pre class="example">ffmpeg -i myfile.avi -b 4000k -minrate 4000k -maxrate 4000k -bufsize 1835k out.m2v
+</pre></td></tr></table>
+<p>It is of little use otherwise.
+</p></dd>
+<dt> ‘<samp>-bufsize <var>size</var></samp>’</dt>
+<dd><p>Set video buffer verifier buffer size (in bits).
+</p></dd>
+<dt> ‘<samp>-vcodec <var>codec</var></samp>’</dt>
+<dd><p>Force video codec to <var>codec</var>. Use the special value <code>copy</code>
+to specify that the raw codec data must be copied as is.
+</p></dd>
+<dt> ‘<samp>-sameq</samp>’</dt>
+<dd><p>Use same quantizer as source (implies VBR).
+</p>
+</dd>
+<dt> ‘<samp>-pass <var>n</var></samp>’</dt>
+<dd><p>Select the pass number (1 or 2). It is used to do two-pass
+video encoding. The statistics of the video are recorded in the first
+pass into a log file (see also the option -passlogfile),
+and in the second pass that log file is used to generate the video
+at the exact requested bitrate.
+On pass 1, you may just deactivate audio and set the output to null.
+Examples for Windows and Unix:
+</p><table><tr><td> </td><td><pre class="example">ffmpeg -i foo.mov -vcodec libxvid -pass 1 -an -f rawvideo -y NUL
+ffmpeg -i foo.mov -vcodec libxvid -pass 1 -an -f rawvideo -y /dev/null
+</pre></td></tr></table>
+
+</dd>
+<dt> ‘<samp>-passlogfile <var>prefix</var></samp>’</dt>
+<dd><p>Set two-pass log file name prefix to <var>prefix</var>, the default file name
+prefix is “ffmpeg2pass”. The complete file name will be
+‘<tt>PREFIX-N.log</tt>’, where N is a number specific to the output
+stream.
+</p>
+</dd>
+<dt> ‘<samp>-newvideo</samp>’</dt>
+<dd><p>Add a new video stream to the current output stream.
+</p>
+</dd>
+<dt> ‘<samp>-vlang <var>code</var></samp>’</dt>
+<dd><p>Set the ISO 639 language code (3 letters) of the current video stream.
+</p>
+</dd>
+<dt> ‘<samp>-vf <var>filter_graph</var></samp>’</dt>
+<dd><p><var>filter_graph</var> is a description of the filter graph to apply to
+the input video.
+Use the option "-filters" to show all the available filters (including
+sources and sinks).
+</p>
+</dd>
+</dl>
+
+<a name="Advanced-Video-Options"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-Advanced-Video-Options">3.4 Advanced Video Options</a></h2>
+
+<dl compact="compact">
+<dt> ‘<samp>-pix_fmt <var>format</var></samp>’</dt>
+<dd><p>Set pixel format. Use ’list’ as parameter to show all the supported
+pixel formats.
+</p></dd>
+<dt> ‘<samp>-sws_flags <var>flags</var></samp>’</dt>
+<dd><p>Set SwScaler flags.
+</p></dd>
+<dt> ‘<samp>-g <var>gop_size</var></samp>’</dt>
+<dd><p>Set the group of pictures size.
+</p></dd>
+<dt> ‘<samp>-intra</samp>’</dt>
+<dd><p>Use only intra frames.
+</p></dd>
+<dt> ‘<samp>-vdt <var>n</var></samp>’</dt>
+<dd><p>Discard threshold.
+</p></dd>
+<dt> ‘<samp>-qscale <var>q</var></samp>’</dt>
+<dd><p>Use fixed video quantizer scale (VBR).
+</p></dd>
+<dt> ‘<samp>-qmin <var>q</var></samp>’</dt>
+<dd><p>minimum video quantizer scale (VBR)
+</p></dd>
+<dt> ‘<samp>-qmax <var>q</var></samp>’</dt>
+<dd><p>maximum video quantizer scale (VBR)
+</p></dd>
+<dt> ‘<samp>-qdiff <var>q</var></samp>’</dt>
+<dd><p>maximum difference between the quantizer scales (VBR)
+</p></dd>
+<dt> ‘<samp>-qblur <var>blur</var></samp>’</dt>
+<dd><p>video quantizer scale blur (VBR) (range 0.0 - 1.0)
+</p></dd>
+<dt> ‘<samp>-qcomp <var>compression</var></samp>’</dt>
+<dd><p>video quantizer scale compression (VBR) (default 0.5).
+Constant of ratecontrol equation. Recommended range for default rc_eq: 0.0-1.0
+</p>
+</dd>
+<dt> ‘<samp>-lmin <var>lambda</var></samp>’</dt>
+<dd><p>minimum video lagrange factor (VBR)
+</p></dd>
+<dt> ‘<samp>-lmax <var>lambda</var></samp>’</dt>
+<dd><p>max video lagrange factor (VBR)
+</p></dd>
+<dt> ‘<samp>-mblmin <var>lambda</var></samp>’</dt>
+<dd><p>minimum macroblock quantizer scale (VBR)
+</p></dd>
+<dt> ‘<samp>-mblmax <var>lambda</var></samp>’</dt>
+<dd><p>maximum macroblock quantizer scale (VBR)
+</p>
+<p>These four options (lmin, lmax, mblmin, mblmax) use ’lambda’ units,
+but you may use the QP2LAMBDA constant to easily convert from ’q’ units:
+</p><table><tr><td> </td><td><pre class="example">ffmpeg -i src.ext -lmax 21*QP2LAMBDA dst.ext
+</pre></td></tr></table>
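+<p>As an illustration, the multiplication in the example above can be
+expanded by hand; this sketch assumes the QP2LAMBDA constant has the
+value 118, as defined in libavcodec.
+</p>

```shell
# Convert q=21 to lambda units, assuming QP2LAMBDA = 118 (the value
# used by libavcodec; treat this constant as an assumption here).
q=21
lambda=$((q * 118))
echo "$lambda"   # prints 2478
```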
+
+</dd>
+<dt> ‘<samp>-rc_init_cplx <var>complexity</var></samp>’</dt>
+<dd><p>initial complexity for single pass encoding
+</p></dd>
+<dt> ‘<samp>-b_qfactor <var>factor</var></samp>’</dt>
+<dd><p>qp factor between P- and B-frames
+</p></dd>
+<dt> ‘<samp>-i_qfactor <var>factor</var></samp>’</dt>
+<dd><p>qp factor between P- and I-frames
+</p></dd>
+<dt> ‘<samp>-b_qoffset <var>offset</var></samp>’</dt>
+<dd><p>qp offset between P- and B-frames
+</p></dd>
+<dt> ‘<samp>-i_qoffset <var>offset</var></samp>’</dt>
+<dd><p>qp offset between P- and I-frames
+</p></dd>
+<dt> ‘<samp>-rc_eq <var>equation</var></samp>’</dt>
+<dd><p>Set rate control equation (see section "Expression Evaluation")
+(default = <code>tex^qComp</code>).
+</p>
+<p>When computing the rate control equation expression, besides the
+standard functions defined in the section "Expression Evaluation", the
+following functions are available:
+</p><dl compact="compact">
+<dt> <var>bits2qp(bits)</var></dt>
+<dt> <var>qp2bits(qp)</var></dt>
+</dl>
+
+<p>and the following constants are available:
+</p><dl compact="compact">
+<dt> <var>iTex</var></dt>
+<dt> <var>pTex</var></dt>
+<dt> <var>tex</var></dt>
+<dt> <var>mv</var></dt>
+<dt> <var>fCode</var></dt>
+<dt> <var>iCount</var></dt>
+<dt> <var>mcVar</var></dt>
+<dt> <var>var</var></dt>
+<dt> <var>isI</var></dt>
+<dt> <var>isP</var></dt>
+<dt> <var>isB</var></dt>
+<dt> <var>avgQP</var></dt>
+<dt> <var>qComp</var></dt>
+<dt> <var>avgIITex</var></dt>
+<dt> <var>avgPITex</var></dt>
+<dt> <var>avgPPTex</var></dt>
+<dt> <var>avgBPTex</var></dt>
+<dt> <var>avgTex</var></dt>
+</dl>
+
+</dd>
+<dt> ‘<samp>-rc_override <var>override</var></samp>’</dt>
+<dd><p>Rate control override for specific intervals, formatted as a
+slash-separated list of "int,int,int" triples. The first two values are the
+beginning and end frame numbers; the last one is the quantizer to use if
+positive, or the quality factor if negative.
+</p></dd>
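+<p>For example, a slash-separated override list can be split into its
+triples with standard shell tools; the interval values below are purely
+illustrative.
+</p>

```shell
# Split a hypothetical -rc_override argument into one "start,end,q"
# triple per line (the values themselves are made up).
override="0,100,20/101,200,-1"
echo "$override" | tr '/' '\n'
```

Here the first interval would use quantizer 20, and the second a quality factor of 1 (a negative last value).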
+<dt> ‘<samp>-me_method <var>method</var></samp>’</dt>
+<dd><p>Set motion estimation method to <var>method</var>.
+Available methods are (from lowest quality to best):
+</p><dl compact="compact">
+<dt> ‘<samp>zero</samp>’</dt>
+<dd><p>Try just the (0, 0) vector.
+</p></dd>
+<dt> ‘<samp>phods</samp>’</dt>
+<dt> ‘<samp>log</samp>’</dt>
+<dt> ‘<samp>x1</samp>’</dt>
+<dt> ‘<samp>hex</samp>’</dt>
+<dt> ‘<samp>umh</samp>’</dt>
+<dt> ‘<samp>epzs</samp>’</dt>
+<dd><p>(default method)
+</p></dd>
+<dt> ‘<samp>full</samp>’</dt>
+<dd><p>exhaustive search (slow and marginally better than epzs)
+</p></dd>
+</dl>
+
+</dd>
+<dt> ‘<samp>-dct_algo <var>algo</var></samp>’</dt>
+<dd><p>Set DCT algorithm to <var>algo</var>. Available values are:
+</p><dl compact="compact">
+<dt> ‘<samp>0</samp>’</dt>
+<dd><p>FF_DCT_AUTO (default)
+</p></dd>
+<dt> ‘<samp>1</samp>’</dt>
+<dd><p>FF_DCT_FASTINT
+</p></dd>
+<dt> ‘<samp>2</samp>’</dt>
+<dd><p>FF_DCT_INT
+</p></dd>
+<dt> ‘<samp>3</samp>’</dt>
+<dd><p>FF_DCT_MMX
+</p></dd>
+<dt> ‘<samp>4</samp>’</dt>
+<dd><p>FF_DCT_MLIB
+</p></dd>
+<dt> ‘<samp>5</samp>’</dt>
+<dd><p>FF_DCT_ALTIVEC
+</p></dd>
+</dl>
+
+</dd>
+<dt> ‘<samp>-idct_algo <var>algo</var></samp>’</dt>
+<dd><p>Set IDCT algorithm to <var>algo</var>. Available values are:
+</p><dl compact="compact">
+<dt> ‘<samp>0</samp>’</dt>
+<dd><p>FF_IDCT_AUTO (default)
+</p></dd>
+<dt> ‘<samp>1</samp>’</dt>
+<dd><p>FF_IDCT_INT
+</p></dd>
+<dt> ‘<samp>2</samp>’</dt>
+<dd><p>FF_IDCT_SIMPLE
+</p></dd>
+<dt> ‘<samp>3</samp>’</dt>
+<dd><p>FF_IDCT_SIMPLEMMX
+</p></dd>
+<dt> ‘<samp>4</samp>’</dt>
+<dd><p>FF_IDCT_LIBMPEG2MMX
+</p></dd>
+<dt> ‘<samp>5</samp>’</dt>
+<dd><p>FF_IDCT_PS2
+</p></dd>
+<dt> ‘<samp>6</samp>’</dt>
+<dd><p>FF_IDCT_MLIB
+</p></dd>
+<dt> ‘<samp>7</samp>’</dt>
+<dd><p>FF_IDCT_ARM
+</p></dd>
+<dt> ‘<samp>8</samp>’</dt>
+<dd><p>FF_IDCT_ALTIVEC
+</p></dd>
+<dt> ‘<samp>9</samp>’</dt>
+<dd><p>FF_IDCT_SH4
+</p></dd>
+<dt> ‘<samp>10</samp>’</dt>
+<dd><p>FF_IDCT_SIMPLEARM
+</p></dd>
+</dl>
+
+</dd>
+<dt> ‘<samp>-er <var>n</var></samp>’</dt>
+<dd><p>Set error resilience to <var>n</var>.
+</p><dl compact="compact">
+<dt> ‘<samp>1</samp>’</dt>
+<dd><p>FF_ER_CAREFUL (default)
+</p></dd>
+<dt> ‘<samp>2</samp>’</dt>
+<dd><p>FF_ER_COMPLIANT
+</p></dd>
+<dt> ‘<samp>3</samp>’</dt>
+<dd><p>FF_ER_AGGRESSIVE
+</p></dd>
+<dt> ‘<samp>4</samp>’</dt>
+<dd><p>FF_ER_VERY_AGGRESSIVE
+</p></dd>
+</dl>
+
+</dd>
+<dt> ‘<samp>-ec <var>bit_mask</var></samp>’</dt>
+<dd><p>Set error concealment to <var>bit_mask</var>. <var>bit_mask</var> is a bit mask of
+the following values:
+</p><dl compact="compact">
+<dt> ‘<samp>1</samp>’</dt>
+<dd><p>FF_EC_GUESS_MVS (default = enabled)
+</p></dd>
+<dt> ‘<samp>2</samp>’</dt>
+<dd><p>FF_EC_DEBLOCK (default = enabled)
+</p></dd>
+</dl>
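+<p>Since the argument is a bit mask, the flags are combined with a
+bitwise OR; enabling both default flags gives the value 3.
+</p>

```shell
# Combine FF_EC_GUESS_MVS (1) and FF_EC_DEBLOCK (2) into the bit mask
# that would be passed as "-ec 3".
mask=$((1 | 2))
echo "$mask"   # prints 3
```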
+
+</dd>
+<dt> ‘<samp>-bf <var>frames</var></samp>’</dt>
+<dd><p>Use ’frames’ B-frames (supported for MPEG-1, MPEG-2 and MPEG-4).
+</p></dd>
+<dt> ‘<samp>-mbd <var>mode</var></samp>’</dt>
+<dd><p>Set the macroblock decision mode:
+</p><dl compact="compact">
+<dt> ‘<samp>0</samp>’</dt>
+<dd><p>FF_MB_DECISION_SIMPLE: Use mb_cmp (cannot change it yet in ffmpeg).
+</p></dd>
+<dt> ‘<samp>1</samp>’</dt>
+<dd><p>FF_MB_DECISION_BITS: Choose the one which needs the fewest bits.
+</p></dd>
+<dt> ‘<samp>2</samp>’</dt>
+<dd><p>FF_MB_DECISION_RD: rate distortion
+</p></dd>
+</dl>
+
+</dd>
+<dt> ‘<samp>-4mv</samp>’</dt>
+<dd><p>Use four motion vectors per macroblock (MPEG-4 only).
+</p></dd>
+<dt> ‘<samp>-part</samp>’</dt>
+<dd><p>Use data partitioning (MPEG-4 only).
+</p></dd>
+<dt> ‘<samp>-bug <var>param</var></samp>’</dt>
+<dd><p>Work around encoder bugs that are not auto-detected.
+</p></dd>
+<dt> ‘<samp>-strict <var>strictness</var></samp>’</dt>
+<dd><p>How strictly to follow the standards.
+</p></dd>
+<dt> ‘<samp>-aic</samp>’</dt>
+<dd><p>Enable advanced intra coding (H.263+).
+</p></dd>
+<dt> ‘<samp>-umv</samp>’</dt>
+<dd><p>Enable unlimited motion vectors (H.263+).
+</p>
+</dd>
+<dt> ‘<samp>-deinterlace</samp>’</dt>
+<dd><p>Deinterlace pictures.
+</p></dd>
+<dt> ‘<samp>-ilme</samp>’</dt>
+<dd><p>Force interlacing support in encoder (MPEG-2 and MPEG-4 only).
+Use this option if your input file is interlaced and you want
+to keep the interlaced format for minimum losses.
+The alternative is to deinterlace the input stream with
+‘<samp>-deinterlace</samp>’, but deinterlacing introduces losses.
+</p></dd>
+<dt> ‘<samp>-psnr</samp>’</dt>
+<dd><p>Calculate PSNR of compressed frames.
+</p></dd>
+<dt> ‘<samp>-vstats</samp>’</dt>
+<dd><p>Dump video coding statistics to ‘<tt>vstats_HHMMSS.log</tt>’.
+</p></dd>
+<dt> ‘<samp>-vstats_file <var>file</var></samp>’</dt>
+<dd><p>Dump video coding statistics to <var>file</var>.
+</p></dd>
+<dt> ‘<samp>-top <var>n</var></samp>’</dt>
+<dd><p>Specify which field is first: top=1, bottom=0, auto=-1.
+</p></dd>
+<dt> ‘<samp>-dc <var>precision</var></samp>’</dt>
+<dd><p>Set the intra_dc_precision.
+</p></dd>
+<dt> ‘<samp>-vtag <var>fourcc/tag</var></samp>’</dt>
+<dd><p>Force video tag/fourcc.
+</p></dd>
+<dt> ‘<samp>-qphist</samp>’</dt>
+<dd><p>Show QP histogram.
+</p></dd>
+<dt> ‘<samp>-vbsf <var>bitstream_filter</var></samp>’</dt>
+<dd><p>Bitstream filters available are "dump_extra", "remove_extra", "noise", "h264_mp4toannexb", "imxdump", "mjpegadump", "mjpeg2jpeg".
+</p><table><tr><td> </td><td><pre class="example">ffmpeg -i h264.mp4 -vcodec copy -vbsf h264_mp4toannexb -an out.h264
+</pre></td></tr></table>
+</dd>
+<dt> ‘<samp>-force_key_frames <var>time</var>[,<var>time</var>...]</samp>’</dt>
+<dd><p>Force key frames at the specified timestamps, more precisely at the first
+frames after each specified time.
+This option can be useful to ensure that a seek point is present at a
+chapter mark or any other designated place in the output file.
+The timestamps must be specified in ascending order.
+</p></dd>
+</dl>
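+<p>Since the ‘<samp>-force_key_frames</samp>’ timestamps must be given in
+ascending order, a list can be checked before use by comparing it with
+its numerically sorted form; the times below (in plain seconds) are
+illustrative.
+</p>

```shell
# Verify that a comma-separated timestamp list (made-up values, in
# seconds) is already in ascending order.
times="300,600,900"
sorted=$(echo "$times" | tr ',' '\n' | sort -n | tr '\n' ',')
sorted=${sorted%,}
if [ "$sorted" = "$times" ]; then echo "ascending"; else echo "out of order"; fi
```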
+
+<a name="Audio-Options"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-Audio-Options">3.5 Audio Options</a></h2>
+
+<dl compact="compact">
+<dt> ‘<samp>-aframes <var>number</var></samp>’</dt>
+<dd><p>Set the number of audio frames to record.
+</p></dd>
+<dt> ‘<samp>-ar <var>freq</var></samp>’</dt>
+<dd><p>Set the audio sampling frequency. For output streams it is set by
+default to the frequency of the corresponding input stream. For input
+streams this option only makes sense for audio grabbing devices and raw
+demuxers and is mapped to the corresponding demuxer options.
+</p></dd>
+<dt> ‘<samp>-aq <var>q</var></samp>’</dt>
+<dd><p>Set the audio quality (codec-specific, VBR).
+</p></dd>
+<dt> ‘<samp>-ac <var>channels</var></samp>’</dt>
+<dd><p>Set the number of audio channels. For output streams it is set by
+default to the number of input audio channels. For input streams
+this option only makes sense for audio grabbing devices and raw demuxers
+and is mapped to the corresponding demuxer options.
+</p></dd>
+<dt> ‘<samp>-an</samp>’</dt>
+<dd><p>Disable audio recording.
+</p></dd>
+<dt> ‘<samp>-acodec <var>codec</var></samp>’</dt>
+<dd><p>Force audio codec to <var>codec</var>. Use the <code>copy</code> special value to
+specify that the raw codec data must be copied as is.
+</p></dd>
+<dt> ‘<samp>-newaudio</samp>’</dt>
+<dd><p>Add a new audio track to the output file. If you want to specify parameters,
+do so before <code>-newaudio</code> (<code>-acodec</code>, <code>-ab</code>, etc.).
+</p>
+<p>Mapping will be done automatically if the number of output streams is equal to
+the number of input streams; otherwise it will pick the first one that matches.
+You can override the mapping using <code>-map</code> as usual.
+</p>
+<p>Example:
+</p><table><tr><td> </td><td><pre class="example">ffmpeg -i file.mpg -vcodec copy -acodec ac3 -ab 384k test.mpg -acodec mp2 -ab 192k -newaudio
+</pre></td></tr></table>
+</dd>
+<dt> ‘<samp>-alang <var>code</var></samp>’</dt>
+<dd><p>Set the ISO 639 language code (3 letters) of the current audio stream.
+</p></dd>
+</dl>
+
+<a name="Advanced-Audio-options_003a"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-Advanced-Audio-options_003a">3.6 Advanced Audio options:</a></h2>
+
+<dl compact="compact">
+<dt> ‘<samp>-atag <var>fourcc/tag</var></samp>’</dt>
+<dd><p>Force audio tag/fourcc.
+</p></dd>
+<dt> ‘<samp>-audio_service_type <var>type</var></samp>’</dt>
+<dd><p>Set the type of service that the audio stream contains.
+</p><dl compact="compact">
+<dt> ‘<samp>ma</samp>’</dt>
+<dd><p>Main Audio Service (default)
+</p></dd>
+<dt> ‘<samp>ef</samp>’</dt>
+<dd><p>Effects
+</p></dd>
+<dt> ‘<samp>vi</samp>’</dt>
+<dd><p>Visually Impaired
+</p></dd>
+<dt> ‘<samp>hi</samp>’</dt>
+<dd><p>Hearing Impaired
+</p></dd>
+<dt> ‘<samp>di</samp>’</dt>
+<dd><p>Dialogue
+</p></dd>
+<dt> ‘<samp>co</samp>’</dt>
+<dd><p>Commentary
+</p></dd>
+<dt> ‘<samp>em</samp>’</dt>
+<dd><p>Emergency
+</p></dd>
+<dt> ‘<samp>vo</samp>’</dt>
+<dd><p>Voice Over
+</p></dd>
+<dt> ‘<samp>ka</samp>’</dt>
+<dd><p>Karaoke
+</p></dd>
+</dl>
+</dd>
+<dt> ‘<samp>-absf <var>bitstream_filter</var></samp>’</dt>
+<dd><p>Bitstream filters available are "dump_extra", "remove_extra", "noise", "mp3comp", "mp3decomp".
+</p></dd>
+</dl>
+
+<a name="Subtitle-options_003a"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-Subtitle-options_003a">3.7 Subtitle options:</a></h2>
+
+<dl compact="compact">
+<dt> ‘<samp>-scodec <var>codec</var></samp>’</dt>
+<dd><p>Force subtitle codec (’copy’ to copy stream).
+</p></dd>
+<dt> ‘<samp>-newsubtitle</samp>’</dt>
+<dd><p>Add a new subtitle stream to the current output stream.
+</p></dd>
+<dt> ‘<samp>-slang <var>code</var></samp>’</dt>
+<dd><p>Set the ISO 639 language code (3 letters) of the current subtitle stream.
+</p></dd>
+<dt> ‘<samp>-sn</samp>’</dt>
+<dd><p>Disable subtitle recording.
+</p></dd>
+<dt> ‘<samp>-sbsf <var>bitstream_filter</var></samp>’</dt>
+<dd><p>Bitstream filters available are "mov2textsub", "text2movsub".
+</p><table><tr><td> </td><td><pre class="example">ffmpeg -i file.mov -an -vn -sbsf mov2textsub -scodec copy -f rawvideo sub.txt
+</pre></td></tr></table>
+</dd>
+</dl>
+
+<a name="Audio_002fVideo-grab-options"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-Audio_002fVideo-grab-options">3.8 Audio/Video grab options</a></h2>
+
+<dl compact="compact">
+<dt> ‘<samp>-vc <var>channel</var></samp>’</dt>
+<dd><p>Set video grab channel (DV1394 only).
+</p></dd>
+<dt> ‘<samp>-tvstd <var>standard</var></samp>’</dt>
+<dd><p>Set the television standard (NTSC, PAL, SECAM).
+</p></dd>
+<dt> ‘<samp>-isync</samp>’</dt>
+<dd><p>Synchronize read on input.
+</p></dd>
+</dl>
+
+<a name="Advanced-options"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-Advanced-options">3.9 Advanced options</a></h2>
+
+<dl compact="compact">
+<dt> ‘<samp>-map <var>input_file_id</var>.<var>input_stream_id</var>[:<var>sync_file_id</var>.<var>sync_stream_id</var>]</samp>’</dt>
+<dd>
+<p>Designate an input stream as a source for the output file. Each input
+stream is identified by the input file index <var>input_file_id</var> and
+the input stream index <var>input_stream_id</var> within the input
+file. Both indexes start at 0. If specified,
+<var>sync_file_id</var>.<var>sync_stream_id</var> sets which input stream
+is used as a presentation sync reference.
+</p>
+<p>The <code>-map</code> options must be specified just after the output file.
+If any <code>-map</code> options are used, the number of <code>-map</code> options
+on the command line must match the number of streams in the output
+file. The first <code>-map</code> option on the command line specifies the
+source for output stream 0, the second <code>-map</code> option specifies
+the source for output stream 1, etc.
+</p>
+<p>For example, if you have two audio streams in the first input file,
+these streams are identified by "0.0" and "0.1". You can use
+<code>-map</code> to select which stream to place in an output file. For
+example:
+</p><table><tr><td> </td><td><pre class="example">ffmpeg -i INPUT out.wav -map 0.1
+</pre></td></tr></table>
+<p>will map the input stream in ‘<tt>INPUT</tt>’ identified by "0.1" to
+the (single) output stream in ‘<tt>out.wav</tt>’.
+</p>
+<p>For example, to select the stream with index 2 from input file
+‘<tt>a.mov</tt>’ (specified by the identifier "0.2"), and stream with
+index 6 from input ‘<tt>b.mov</tt>’ (specified by the identifier "1.6"),
+and copy them to the output file ‘<tt>out.mov</tt>’:
+</p><table><tr><td> </td><td><pre class="example">ffmpeg -i a.mov -i b.mov -vcodec copy -acodec copy out.mov -map 0.2 -map 1.6
+</pre></td></tr></table>
+
+<p>To add more streams to the output file, you can use the
+<code>-newaudio</code>, <code>-newvideo</code>, <code>-newsubtitle</code> options.
+</p>
+</dd>
+<dt> ‘<samp>-map_meta_data <var>outfile</var>[,<var>metadata</var>]:<var>infile</var>[,<var>metadata</var>]</samp>’</dt>
+<dd><p>Deprecated, use <var>-map_metadata</var> instead.
+</p>
+</dd>
+<dt> ‘<samp>-map_metadata <var>outfile</var>[,<var>metadata</var>]:<var>infile</var>[,<var>metadata</var>]</samp>’</dt>
+<dd><p>Set metadata information of <var>outfile</var> from <var>infile</var>. Note that those
+are file indices (zero-based), not filenames.
+Optional <var>metadata</var> parameters specify which metadata to copy: (g)lobal
+(i.e. metadata that applies to the whole file), per-(s)tream, per-(c)hapter or
+per-(p)rogram. All metadata specifiers other than global must be followed by the
+stream/chapter/program number. If the metadata specifier is omitted, it defaults
+to global.
+</p>
+<p>By default, global metadata is copied from the first input file to all output files,
+per-stream and per-chapter metadata is copied along with streams/chapters. These
+default mappings are disabled by creating any mapping of the relevant type. A negative
+file index can be used to create a dummy mapping that just disables automatic copying.
+</p>
+<p>For example to copy metadata from the first stream of the input file to global metadata
+of the output file:
+</p><table><tr><td> </td><td><pre class="example">ffmpeg -i in.ogg -map_metadata 0:0,s0 out.mp3
+</pre></td></tr></table>
+</dd>
+<dt> ‘<samp>-map_chapters <var>outfile</var>:<var>infile</var></samp>’</dt>
+<dd><p>Copy chapters from <var>infile</var> to <var>outfile</var>. If no chapter mapping is specified,
+then chapters are copied from the first input file with at least one chapter to all
+output files. Use a negative file index to disable any chapter copying.
+</p></dd>
+<dt> ‘<samp>-debug</samp>’</dt>
+<dd><p>Print specific debug info.
+</p></dd>
+<dt> ‘<samp>-benchmark</samp>’</dt>
+<dd><p>Show benchmarking information at the end of an encode.
+Shows CPU time used and maximum memory consumption.
+Maximum memory consumption is not supported on all systems;
+it will usually display as 0 if not supported.
+</p></dd>
+<dt> ‘<samp>-dump</samp>’</dt>
+<dd><p>Dump each input packet.
+</p></dd>
+<dt> ‘<samp>-hex</samp>’</dt>
+<dd><p>When dumping packets, also dump the payload.
+</p></dd>
+<dt> ‘<samp>-bitexact</samp>’</dt>
+<dd><p>Only use bit exact algorithms (for codec testing).
+</p></dd>
+<dt> ‘<samp>-ps <var>size</var></samp>’</dt>
+<dd><p>Set RTP payload size in bytes.
+</p></dd>
+<dt> ‘<samp>-re</samp>’</dt>
+<dd><p>Read input at native frame rate. Mainly used to simulate a grab device.
+</p></dd>
+<dt> ‘<samp>-loop_input</samp>’</dt>
+<dd><p>Loop over the input stream. Currently it works only for image
+streams. This option is used for automatic FFserver testing.
+This option is deprecated; use -loop instead.
+</p></dd>
+<dt> ‘<samp>-loop_output <var>number_of_times</var></samp>’</dt>
+<dd><p>Repeatedly loop output for formats that support looping such as animated GIF
+(0 will loop the output infinitely).
+This option is deprecated; use -loop instead.
+</p></dd>
+<dt> ‘<samp>-threads <var>count</var></samp>’</dt>
+<dd><p>Set the thread count.
+</p></dd>
+<dt> ‘<samp>-vsync <var>parameter</var></samp>’</dt>
+<dd><p>Video sync method.
+</p>
+<dl compact="compact">
+<dt> ‘<samp>0</samp>’</dt>
+<dd><p>Each frame is passed with its timestamp from the demuxer to the muxer.
+</p></dd>
+<dt> ‘<samp>1</samp>’</dt>
+<dd><p>Frames will be duplicated and dropped to achieve exactly the requested
+constant framerate.
+</p></dd>
+<dt> ‘<samp>2</samp>’</dt>
+<dd><p>Frames are passed through with their timestamp or dropped so as to
+prevent 2 frames from having the same timestamp.
+</p></dd>
+<dt> ‘<samp>-1</samp>’</dt>
+<dd><p>Chooses between 1 and 2 depending on muxer capabilities. This is the
+default method.
+</p></dd>
+</dl>
+
+<p>With -map you can select from which stream the timestamps should be
+taken. You can leave either video or audio unchanged and sync the
+remaining stream(s) to the unchanged one.
+</p>
+</dd>
+<dt> ‘<samp>-async <var>samples_per_second</var></samp>’</dt>
+<dd><p>Audio sync method. "Stretches/squeezes" the audio stream to match the timestamps,
+the parameter is the maximum samples per second by which the audio is changed.
+-async 1 is a special case where only the start of the audio stream is corrected
+without any later correction.
+</p></dd>
+<dt> ‘<samp>-copyts</samp>’</dt>
+<dd><p>Copy timestamps from input to output.
+</p></dd>
+<dt> ‘<samp>-copytb</samp>’</dt>
+<dd><p>Copy input stream time base from input to output when stream copying.
+</p></dd>
+<dt> ‘<samp>-shortest</samp>’</dt>
+<dd><p>Finish encoding when the shortest input stream ends.
+</p></dd>
+<dt> ‘<samp>-dts_delta_threshold</samp>’</dt>
+<dd><p>Timestamp discontinuity delta threshold.
+</p></dd>
+<dt> ‘<samp>-muxdelay <var>seconds</var></samp>’</dt>
+<dd><p>Set the maximum demux-decode delay.
+</p></dd>
+<dt> ‘<samp>-muxpreload <var>seconds</var></samp>’</dt>
+<dd><p>Set the initial demux-decode delay.
+</p></dd>
+<dt> ‘<samp>-streamid <var>output-stream-index</var>:<var>new-value</var></samp>’</dt>
+<dd><p>Assign a new stream-id value to an output stream. This option should be
+specified prior to the output filename to which it applies.
+For the situation where multiple output files exist, a streamid
+may be reassigned to a different value.
+</p>
+<p>For example, to set the stream 0 PID to 33 and the stream 1 PID to 36 for
+an output mpegts file:
+</p><table><tr><td> </td><td><pre class="example">ffmpeg -i infile -streamid 0:33 -streamid 1:36 out.ts
+</pre></td></tr></table>
+</dd>
+</dl>
+
+<a name="Preset-files"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-Preset-files">3.10 Preset files</a></h2>
+
+<p>A preset file contains a sequence of <var>option</var>=<var>value</var> pairs,
+one for each line, specifying a sequence of options which would be
+awkward to specify on the command line. Lines starting with the hash
+(’#’) character are ignored and are used to provide comments. Check
+the ‘<tt>ffpresets</tt>’ directory in the FFmpeg source tree for examples.
+</p>
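+<p>A minimal preset file in this format might look as follows; the
+option names are illustrative, not taken from a real ffpreset.
+</p>

```shell
# Emit a tiny option=value preset file with a comment line, matching
# the format described above (the contents are illustrative).
cat <<'EOF'
# illustrative video preset
coder=1
flags=+loop
EOF
```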
+<p>Preset files are specified with the <code>vpre</code>, <code>apre</code>,
+<code>spre</code>, and <code>fpre</code> options. The <code>fpre</code> option takes the
+filename of the preset instead of a preset name as input and can be
+used for any kind of codec. For the <code>vpre</code>, <code>apre</code>, and
+<code>spre</code> options, the options specified in a preset file are
+applied to the currently selected codec of the same type as the preset
+option.
+</p>
+<p>The argument passed to the <code>vpre</code>, <code>apre</code>, and <code>spre</code>
+preset options identifies the preset file to use according to the
+following rules:
+</p>
+<p>First ffmpeg searches for a file named <var>arg</var>.ffpreset in the
+directories ‘<tt>$FFMPEG_DATADIR</tt>’ (if set), and ‘<tt>$HOME/.ffmpeg</tt>’, and in
+the datadir defined at configuration time (usually ‘<tt>PREFIX/share/ffmpeg</tt>’)
+or in a ‘<tt>ffpresets</tt>’ folder alongside the executable on win32,
+in that order. For example, if the argument is <code>libx264-max</code>, it will
+search for the file ‘<tt>libx264-max.ffpreset</tt>’.
+</p>
+<p>If no such file is found, then ffmpeg will search for a file named
+<var>codec_name</var>-<var>arg</var>.ffpreset in the above-mentioned
+directories, where <var>codec_name</var> is the name of the codec to which
+the preset file options will be applied. For example, if you select
+the video codec with <code>-vcodec libx264</code> and use <code>-vpre max</code>,
+then it will search for the file ‘<tt>libx264-max.ffpreset</tt>’.
+</p>
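+<p>The two-step lookup described above can be sketched as a loop over
+candidate file names; the preset name and codec are those from the
+example, and the directory search is omitted for brevity.
+</p>

```shell
# Candidate preset file names tried for "-vcodec libx264 -vpre max":
# first the bare argument, then codec_name-argument.
arg="max"; codec="libx264"
for f in "$arg.ffpreset" "$codec-$arg.ffpreset"; do
  echo "try: $f"
done
```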
+<a name="Tips"></a>
+<h1 class="chapter"><a href="ffmpeg.html#toc-Tips">4. Tips</a></h1>
+
+<ul>
+<li>
+For very low bitrate streaming applications, use a low frame rate
+and a small GOP size. This is especially true for RealVideo where
+the Linux player does not seem to be very fast, so it can miss
+frames. An example is:
+
+<table><tr><td> </td><td><pre class="example">ffmpeg -g 3 -r 3 -t 10 -b 50k -s qcif -f rv10 /tmp/b.rm
+</pre></td></tr></table>
+
+</li><li>
+The parameter ’q’ which is displayed while encoding is the current
+quantizer. The value 1 indicates that a very good quality could
+be achieved. The value 31 indicates the worst quality. If q=31 appears
+too often, it means that the encoder cannot compress enough to meet
+your bitrate. You must either increase the bitrate, decrease the
+frame rate or decrease the frame size.
+
+</li><li>
+If your computer is not fast enough, you can speed up the
+compression at the expense of the compression ratio. You can use
+’-me zero’ to speed up motion estimation, and ’-intra’ to disable
+motion estimation completely (you have only I-frames, which means it
+is about as good as JPEG compression).
+
+</li><li>
+To have very low audio bitrates, reduce the sampling frequency
+(down to 22050 Hz for MPEG audio, 22050 or 11025 for AC-3).
+
+</li><li>
+To have a constant quality (but a variable bitrate), use the option
+’-qscale n’ where ’n’ is between 1 (excellent quality) and 31 (worst
+quality).
+
+</li><li>
+When converting video files, you can use the ’-sameq’ option which
+uses the same quality factor in the encoder as in the decoder.
+It allows almost lossless encoding.
+
+</li></ul>
+
+<a name="Examples"></a>
+<h1 class="chapter"><a href="ffmpeg.html#toc-Examples">5. Examples</a></h1>
+
+<a name="Video-and-Audio-grabbing"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-Video-and-Audio-grabbing">5.1 Video and Audio grabbing</a></h2>
+
+<p>If you specify the input format and device then ffmpeg can grab video
+and audio directly.
+</p>
+<table><tr><td> </td><td><pre class="example">ffmpeg -f oss -i /dev/dsp -f video4linux2 -i /dev/video0 /tmp/out.mpg
+</pre></td></tr></table>
+
+<p>Note that you must activate the right video source and channel before
+launching ffmpeg, using any TV viewer such as
+<a href="http://linux.bytesex.org/xawtv/">xawtv</a> by Gerd Knorr. You also
+have to set the audio recording levels correctly with a
+standard mixer.
+</p>
+<a name="X11-grabbing"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-X11-grabbing">5.2 X11 grabbing</a></h2>
+
+<p>Grab the X11 display with ffmpeg via
+</p>
+<table><tr><td> </td><td><pre class="example">ffmpeg -f x11grab -s cif -r 25 -i :0.0 /tmp/out.mpg
+</pre></td></tr></table>
+
+<p>0.0 is the display.screen number of your X11 server, the same as
+the DISPLAY environment variable.
+</p>
+<table><tr><td> </td><td><pre class="example">ffmpeg -f x11grab -s cif -r 25 -i :0.0+10,20 /tmp/out.mpg
+</pre></td></tr></table>
+
+<p>As above, 0.0 is the display.screen number of your X11 server;
+10 is the x-offset and 20 the y-offset for the grabbing.
+</p>
+<a name="Video-and-Audio-file-format-conversion"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-Video-and-Audio-file-format-conversion">5.3 Video and Audio file format conversion</a></h2>
+
+<p>Any supported file format and protocol can serve as input to ffmpeg:
+</p>
+<p>Examples:
+</p><ul>
+<li>
+You can use YUV files as input:
+
+<table><tr><td> </td><td><pre class="example">ffmpeg -i /tmp/test%d.Y /tmp/out.mpg
+</pre></td></tr></table>
+
+<p>It will use the files:
+</p><table><tr><td> </td><td><pre class="example">/tmp/test0.Y, /tmp/test0.U, /tmp/test0.V,
+/tmp/test1.Y, /tmp/test1.U, /tmp/test1.V, etc...
+</pre></td></tr></table>
+
+<p>The Y files use twice the resolution of the U and V files. They are
+raw files, without a header. They can be generated by all decent video
+decoders. You must specify the size of the image with the ‘<samp>-s</samp>’ option
+if ffmpeg cannot guess it.
+</p>
+</li><li>
+You can input from a raw YUV420P file:
+
+<table><tr><td> </td><td><pre class="example">ffmpeg -i /tmp/test.yuv /tmp/out.avi
+</pre></td></tr></table>
+
+<p>test.yuv is a file containing raw YUV planar data. Each frame is composed
+of the Y plane followed by the U and V planes at half vertical and
+horizontal resolution.
+</p>
+</li><li>
+You can output to a raw YUV420P file:
+
+<table><tr><td> </td><td><pre class="example">ffmpeg -i mydivx.avi hugefile.yuv
+</pre></td></tr></table>
+
+</li><li>
+You can set several input files and output files:
+
+<table><tr><td> </td><td><pre class="example">ffmpeg -i /tmp/a.wav -s 640x480 -i /tmp/a.yuv /tmp/a.mpg
+</pre></td></tr></table>
+
+<p>Converts the audio file a.wav and the raw YUV video file a.yuv
+to MPEG file a.mpg.
+</p>
+</li><li>
+You can also do audio and video conversions at the same time:
+
+<table><tr><td> </td><td><pre class="example">ffmpeg -i /tmp/a.wav -ar 22050 /tmp/a.mp2
+</pre></td></tr></table>
+
+<p>Converts a.wav to MPEG audio at 22050 Hz sample rate.
+</p>
+</li><li>
+You can encode to several formats at the same time and define a
+mapping from input stream to output streams:
+
+<table><tr><td> </td><td><pre class="example">ffmpeg -i /tmp/a.wav -ab 64k /tmp/a.mp2 -ab 128k /tmp/b.mp2 -map 0:0 -map 0:0
+</pre></td></tr></table>
+
+<p>Converts a.wav to a.mp2 at 64 kbit/s and to b.mp2 at 128 kbit/s. ’-map
+file:index’ specifies which input stream is used for each output
+stream, in the order of the definition of output streams.
+</p>
+</li><li>
+You can transcode decrypted VOBs:
+
+<table><tr><td> </td><td><pre class="example">ffmpeg -i snatch_1.vob -f avi -vcodec mpeg4 -b 800k -g 300 -bf 2 -acodec libmp3lame -ab 128k snatch.avi
+</pre></td></tr></table>
+
+<p>This is a typical DVD ripping example; the input is a VOB file, the
+output an AVI file with MPEG-4 video and MP3 audio. Note that in this
+command we use B-frames so the MPEG-4 stream is DivX5 compatible, and
+GOP size is 300 which means one intra frame every 10 seconds for 29.97fps
+input video. Furthermore, the audio stream is MP3-encoded so you need
+to enable LAME support by passing <code>--enable-libmp3lame</code> to configure.
+The mapping is particularly useful for DVD transcoding
+to get the desired audio language.
+</p>
+<p>NOTE: To see the supported input formats, use <code>ffmpeg -formats</code>.
+</p>
+</li><li>
+You can extract images from a video, or create a video from many images:
+
+<p>For extracting images from a video:
+</p><table><tr><td> </td><td><pre class="example">ffmpeg -i foo.avi -r 1 -s WxH -f image2 foo-%03d.jpeg
+</pre></td></tr></table>
+
+<p>This will extract one video frame per second from the video and will
+output them in files named ‘<tt>foo-001.jpeg</tt>’, ‘<tt>foo-002.jpeg</tt>’,
+etc. Images will be rescaled to fit the new WxH values.
+</p>
+<p>If you want to extract just a limited number of frames, you can use the
+above command in combination with the -vframes or -t option, or in
+combination with -ss to start extracting from a certain point in time.
+</p>
+<p>For creating a video from many images:
+</p><table><tr><td> </td><td><pre class="example">ffmpeg -f image2 -i foo-%03d.jpeg -r 12 -s WxH foo.avi
+</pre></td></tr></table>
+
+<p>The syntax <code>foo-%03d.jpeg</code> specifies to use a decimal number
+composed of three digits padded with zeroes to express the sequence
+number. It is the same syntax supported by the C printf function, but
+only formats accepting a normal integer are suitable.
+</p>
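+<p>The zero-padded expansion can be reproduced with the shell’s own
+<code>printf</code>, which uses the same conversion syntax:
+</p>

```shell
# Expand the foo-%03d.jpeg pattern for the first three sequence
# numbers, zero-padded to three digits.
for n in 1 2 3; do
  printf 'foo-%03d.jpeg\n' "$n"
done
```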
+</li><li>
+You can put many streams of the same type in the output:
+
+<table><tr><td> </td><td><pre class="example">ffmpeg -i test1.avi -i test2.avi -vcodec copy -acodec copy -vcodec copy -acodec copy test12.avi -newvideo -newaudio
+</pre></td></tr></table>
+
+<p>In addition to the first video and audio streams, the resulting
+output file ‘<tt>test12.avi</tt>’ will contain the second video
+and the second audio stream found in the input streams list.
+</p>
+<p>The <code>-newvideo</code>, <code>-newaudio</code> and <code>-newsubtitle</code>
+options have to be specified immediately after the name of the output
+file to which you want to add them.
+</p>
+</li></ul>
+
+<a name="Expression-Evaluation"></a>
+<h1 class="chapter"><a href="ffmpeg.html#toc-Expression-Evaluation">6. Expression Evaluation</a></h1>
+
+<p>When evaluating an arithmetic expression, FFmpeg uses an internal
+formula evaluator, implemented through the ‘<tt>libavutil/eval.h</tt>’
+interface.
+</p>
+<p>An expression may contain unary and binary operators, constants, and
+functions.
+</p>
+<p>Two expressions <var>expr1</var> and <var>expr2</var> can be combined to form
+another expression "<var>expr1</var>;<var>expr2</var>".
+<var>expr1</var> and <var>expr2</var> are evaluated in turn, and the new
+expression evaluates to the value of <var>expr2</var>.
+</p>
+<p>The following binary operators are available: <code>+</code>, <code>-</code>,
+<code>*</code>, <code>/</code>, <code>^</code>.
+</p>
+<p>The following unary operators are available: <code>+</code>, <code>-</code>.
+</p>
+<p>The following functions are available:
+</p><dl compact="compact">
+<dt> ‘<samp>sinh(x)</samp>’</dt>
+<dt> ‘<samp>cosh(x)</samp>’</dt>
+<dt> ‘<samp>tanh(x)</samp>’</dt>
+<dt> ‘<samp>sin(x)</samp>’</dt>
+<dt> ‘<samp>cos(x)</samp>’</dt>
+<dt> ‘<samp>tan(x)</samp>’</dt>
+<dt> ‘<samp>atan(x)</samp>’</dt>
+<dt> ‘<samp>asin(x)</samp>’</dt>
+<dt> ‘<samp>acos(x)</samp>’</dt>
+<dt> ‘<samp>exp(x)</samp>’</dt>
+<dt> ‘<samp>log(x)</samp>’</dt>
+<dt> ‘<samp>abs(x)</samp>’</dt>
+<dt> ‘<samp>squish(x)</samp>’</dt>
+<dt> ‘<samp>gauss(x)</samp>’</dt>
+<dt> ‘<samp>isnan(x)</samp>’</dt>
+<dd><p>Return 1.0 if <var>x</var> is NAN, 0.0 otherwise.
+</p>
+</dd>
+<dt> ‘<samp>mod(x, y)</samp>’</dt>
+<dt> ‘<samp>max(x, y)</samp>’</dt>
+<dt> ‘<samp>min(x, y)</samp>’</dt>
+<dt> ‘<samp>eq(x, y)</samp>’</dt>
+<dt> ‘<samp>gte(x, y)</samp>’</dt>
+<dt> ‘<samp>gt(x, y)</samp>’</dt>
+<dt> ‘<samp>lte(x, y)</samp>’</dt>
+<dt> ‘<samp>lt(x, y)</samp>’</dt>
+<dt> ‘<samp>st(var, expr)</samp>’</dt>
+<dd><p>Store the value of the expression <var>expr</var> in an internal
+variable. <var>var</var> specifies the number of the variable where the
+value is stored, and ranges from 0 to 9. The function
+returns the stored value.
+</p>
+</dd>
+<dt> ‘<samp>ld(var)</samp>’</dt>
+<dd><p>Load the value of the internal variable with number
+<var>var</var>, which was previously stored with st(<var>var</var>, <var>expr</var>).
+The function returns the loaded value.
+</p>
+</dd>
+<dt> ‘<samp>while(cond, expr)</samp>’</dt>
+<dd><p>Evaluate expression <var>expr</var> while the expression <var>cond</var> is
+non-zero, and return the value of the last <var>expr</var> evaluation, or
+NAN if <var>cond</var> was always false.
+</p>
+</dd>
+<dt> ‘<samp>ceil(expr)</samp>’</dt>
+<dd><p>Round the value of expression <var>expr</var> upwards to the nearest
+integer. For example, "ceil(1.5)" is "2.0".
+</p>
+</dd>
+<dt> ‘<samp>floor(expr)</samp>’</dt>
+<dd><p>Round the value of expression <var>expr</var> downwards to the nearest
+integer. For example, "floor(-1.5)" is "-2.0".
+</p>
+</dd>
+<dt> ‘<samp>trunc(expr)</samp>’</dt>
+<dd><p>Round the value of expression <var>expr</var> towards zero to the nearest
+integer. For example, "trunc(-1.5)" is "-1.0".
+</p>
+</dd>
+<dt> ‘<samp>sqrt(expr)</samp>’</dt>
+<dd><p>Compute the square root of <var>expr</var>. This is equivalent to
+"(<var>expr</var>)^.5".
+</p>
+</dd>
+<dt> ‘<samp>not(expr)</samp>’</dt>
+<dd><p>Return 1.0 if <var>expr</var> is zero, 0.0 otherwise.
+</p>
+</dd>
+<dt> ‘<samp>pow(x, y)</samp>’</dt>
+<dd><p>Compute <var>x</var> raised to the power of <var>y</var>. This is
+equivalent to "(<var>x</var>)^(<var>y</var>)".
+</p></dd>
+</dl>
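<p>To make the st/ld/while semantics concrete, here is a hedged Python
emulation (an illustration only, not the actual libavutil evaluator)
that computes the sum 1+2+...+5 the way the expression
"st(0,0); st(1,1); while(lte(ld(1),5), st(0,ld(0)+ld(1)); st(1,ld(1)+1))"
would:
</p>

```python
# Hypothetical emulation of the evaluator's st()/ld()/while() semantics.
var = [0.0] * 10           # ten internal variables, numbered 0 to 9

def st(i, value):          # store a value and return it
    var[i] = value
    return value

def ld(i):                 # load a previously stored value
    return var[i]

def while_(cond, expr):    # re-evaluate expr while cond is non-zero
    result = float("nan")  # NAN if cond was never true
    while cond():
        result = expr()
    return result

# Equivalent of:
#   st(0,0); st(1,1);
#   while(lte(ld(1),5), st(0,ld(0)+ld(1)); st(1,ld(1)+1))
st(0, 0.0)
st(1, 1.0)
while_(lambda: ld(1) <= 5,
       lambda: (st(0, ld(0) + ld(1)), st(1, ld(1) + 1))[1])
print(ld(0))  # 15.0, i.e. 1+2+3+4+5
```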
+
+<p>Note that:
+</p>
+<p><code>*</code> works like AND
+</p>
+<p><code>+</code> works like OR
+</p>
+<p>thus
+</p><table><tr><td> </td><td><pre class="example">if A then B else C
+</pre></td></tr></table>
+<p>is equivalent to
+</p><table><tr><td> </td><td><pre class="example">A*B + not(A)*C
+</pre></td></tr></table>
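<p>The equivalence can be checked numerically with a small Python sketch
mirroring the evaluator's 1.0/0.0 truth convention (illustrative only;
it assumes the condition A evaluates to exactly 0 or 1):
</p>

```python
# The evaluator treats non-zero as true; not() maps 0 -> 1.0, non-zero -> 0.0.
def not_(x):
    return 1.0 if x == 0 else 0.0

def if_then_else(a, b, c):
    # "if A then B else C" encoded as A*B + not(A)*C
    return a * b + not_(a) * c

print(if_then_else(1.0, 10.0, 20.0))  # 10.0 (condition true  -> B)
print(if_then_else(0.0, 10.0, 20.0))  # 20.0 (condition false -> C)
```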
+
+<p>In your C code, you can extend the list of unary and binary functions,
+and define recognized constants, so that they are available for your
+expressions.
+</p>
+<p>The evaluator also recognizes the International System number
+postfixes. If 'i' is appended after the postfix, powers of 2 are used
+instead of powers of 10. The 'B' postfix multiplies the value by 8,
+and can be appended after another postfix or used alone. This allows
+using, for example, 'KB', 'MiB', 'G' and 'B' as postfixes.
+</p>
+<p>The list of available International System postfixes follows, with
+indication of the corresponding powers of 10 and of 2.
+</p><dl compact="compact">
+<dt> ‘<samp>y</samp>’</dt>
+<dd><p>-24 / -80
+</p></dd>
+<dt> ‘<samp>z</samp>’</dt>
+<dd><p>-21 / -70
+</p></dd>
+<dt> ‘<samp>a</samp>’</dt>
+<dd><p>-18 / -60
+</p></dd>
+<dt> ‘<samp>f</samp>’</dt>
+<dd><p>-15 / -50
+</p></dd>
+<dt> ‘<samp>p</samp>’</dt>
+<dd><p>-12 / -40
+</p></dd>
+<dt> ‘<samp>n</samp>’</dt>
+<dd><p>-9 / -30
+</p></dd>
+<dt> ‘<samp>u</samp>’</dt>
+<dd><p>-6 / -20
+</p></dd>
+<dt> ‘<samp>m</samp>’</dt>
+<dd><p>-3 / -10
+</p></dd>
+<dt> ‘<samp>c</samp>’</dt>
+<dd><p>-2
+</p></dd>
+<dt> ‘<samp>d</samp>’</dt>
+<dd><p>-1
+</p></dd>
+<dt> ‘<samp>h</samp>’</dt>
+<dd><p>2
+</p></dd>
+<dt> ‘<samp>k</samp>’</dt>
+<dd><p>3 / 10
+</p></dd>
+<dt> ‘<samp>K</samp>’</dt>
+<dd><p>3 / 10
+</p></dd>
+<dt> ‘<samp>M</samp>’</dt>
+<dd><p>6 / 20
+</p></dd>
+<dt> ‘<samp>G</samp>’</dt>
+<dd><p>9 / 30
+</p></dd>
+<dt> ‘<samp>T</samp>’</dt>
+<dd><p>12 / 40
+</p></dd>
+<dt> ‘<samp>P</samp>’</dt>
+<dd><p>15 / 50
+</p></dd>
+<dt> ‘<samp>E</samp>’</dt>
+<dd><p>18 / 60
+</p></dd>
+<dt> ‘<samp>Z</samp>’</dt>
+<dd><p>21 / 70
+</p></dd>
+<dt> ‘<samp>Y</samp>’</dt>
+<dd><p>24 / 80
+</p></dd>
+</dl>
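<p>A hedged Python sketch of the postfix rules above (the table and
helper function are illustrative, not FFmpeg's actual parser; the
sub-multiples 'c', 'd' and 'h' have no power-of-2 variant and are
ignored here):
</p>

```python
# A letter selects a power of 10 (or of 2 when followed by 'i'),
# and a trailing 'B' multiplies the result by 8.
POW10 = {'y': -24, 'z': -21, 'a': -18, 'f': -15, 'p': -12, 'n': -9,
         'u': -6, 'm': -3, 'k': 3, 'K': 3, 'M': 6, 'G': 9,
         'T': 12, 'P': 15, 'E': 18, 'Z': 21, 'Y': 24}

def apply_postfix(value, postfix):
    mult = 1
    if postfix.endswith('B'):     # 'B' may follow another postfix or stand alone
        mult, postfix = 8, postfix[:-1]
    if postfix.endswith('i'):     # 'i' switches to the power-of-2 column
        return value * 2 ** (POW10[postfix[:-1]] * 10 // 3) * mult
    if postfix:
        return value * 10 ** POW10[postfix] * mult
    return value * mult

print(apply_postfix(1, 'K'))    # 1000
print(apply_postfix(1, 'Ki'))   # 1024
print(apply_postfix(1, 'MiB'))  # 8388608, i.e. 2^20 * 8
```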
+
+<a name="Decoders"></a>
+<h1 class="chapter"><a href="ffmpeg.html#toc-Decoders">7. Decoders</a></h1>
+
+<p>Decoders are configured elements in FFmpeg which allow the decoding of
+multimedia streams.
+</p>
+<p>When you configure your FFmpeg build, all the supported native decoders
+are enabled by default. Decoders requiring an external library must be enabled
+manually via the corresponding <code>--enable-lib</code> option. You can list all
+available decoders using the configure option <code>--list-decoders</code>.
+</p>
+<p>You can disable all the decoders with the configure option
+<code>--disable-decoders</code> and selectively enable / disable single decoders
+with the options <code>--enable-decoder=<var>DECODER</var></code> /
+<code>--disable-decoder=<var>DECODER</var></code>.
+</p>
+<p>The option <code>-codecs</code> of the ff* tools will display the list of
+enabled decoders.
+</p>
+
+<a name="Video-Decoders"></a>
+<h1 class="chapter"><a href="ffmpeg.html#toc-Video-Decoders">8. Video Decoders</a></h1>
+
+<p>A description of some of the currently available video decoders
+follows.
+</p>
+<a name="rawvideo"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-rawvideo">8.1 rawvideo</a></h2>
+
+<p>Rawvideo decoder.
+</p>
+<p>This decoder decodes rawvideo streams.
+</p>
+<a name="Options-2"></a>
+<h3 class="subsection"><a href="ffmpeg.html#toc-Options-2">8.1.1 Options</a></h3>
+
+<dl compact="compact">
+<dt> ‘<samp>top <var>top_field_first</var></samp>’</dt>
+<dd><p>Specify the assumed field type of the input video.
+</p><dl compact="compact">
+<dt> ‘<samp>-1</samp>’</dt>
+<dd><p>the video is assumed to be progressive (default)
+</p></dd>
+<dt> ‘<samp>0</samp>’</dt>
+<dd><p>bottom-field-first is assumed
+</p></dd>
+<dt> ‘<samp>1</samp>’</dt>
+<dd><p>top-field-first is assumed
+</p></dd>
+</dl>
+
+</dd>
+</dl>
+
+<a name="Encoders"></a>
+<h1 class="chapter"><a href="ffmpeg.html#toc-Encoders">9. Encoders</a></h1>
+
+<p>Encoders are configured elements in FFmpeg which allow the encoding of
+multimedia streams.
+</p>
+<p>When you configure your FFmpeg build, all the supported native encoders
+are enabled by default. Encoders requiring an external library must be enabled
+manually via the corresponding <code>--enable-lib</code> option. You can list all
+available encoders using the configure option <code>--list-encoders</code>.
+</p>
+<p>You can disable all the encoders with the configure option
+<code>--disable-encoders</code> and selectively enable / disable single encoders
+with the options <code>--enable-encoder=<var>ENCODER</var></code> /
+<code>--disable-encoder=<var>ENCODER</var></code>.
+</p>
+<p>The option <code>-codecs</code> of the ff* tools will display the list of
+enabled encoders.
+</p>
+
+<a name="Audio-Encoders"></a>
+<h1 class="chapter"><a href="ffmpeg.html#toc-Audio-Encoders">10. Audio Encoders</a></h1>
+
+<p>A description of some of the currently available audio encoders
+follows.
+</p>
+<a name="ac3-and-ac3_005ffixed"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-ac3-and-ac3_005ffixed">10.1 ac3 and ac3_fixed</a></h2>
+
+<p>AC-3 audio encoders.
+</p>
+<p>These encoders implement part of ATSC A/52:2010 and ETSI TS 102 366, as well as
+the undocumented RealAudio 3 (a.k.a. dnet).
+</p>
+<p>The <var>ac3</var> encoder uses floating-point math, while the <var>ac3_fixed</var>
+encoder only uses fixed-point integer math. This does not mean that one is
+always faster, just that one or the other may be better suited to a
+particular system. The floating-point encoder will generally produce better
+quality audio for a given bitrate. The <var>ac3_fixed</var> encoder is not the
+default codec for any of the output formats, so it must be specified explicitly
+using the option <code>-acodec ac3_fixed</code> in order to use it.
+</p>
+<a name="AC_002d3-Metadata"></a>
+<h3 class="subsection"><a href="ffmpeg.html#toc-AC_002d3-Metadata">10.1.1 AC-3 Metadata</a></h3>
+
+<p>The AC-3 metadata options are used to set parameters that describe the audio,
+but in most cases do not affect the audio encoding itself. Some of the options
+do directly affect or influence the decoding and playback of the resulting
+bitstream, while others are just for informational purposes. A few of the
+options will add bits to the output stream that could otherwise be used for
+audio data, and will thus affect the quality of the output. Those will be
+indicated accordingly with a note in the option list below.
+</p>
+<p>These parameters are described in detail in several publicly-available
+documents.
+</p><ul>
+<li> <a href="http://www.atsc.org/cms/standards/a_52-2010.pdf">A/52:2010 - Digital Audio Compression (AC-3) (E-AC-3) Standard</a>
+</li><li> <a href="http://www.atsc.org/cms/standards/a_54a_with_corr_1.pdf">A/54 - Guide to the Use of the ATSC Digital Television Standard</a>
+</li><li> <a href="http://www.dolby.com/uploadedFiles/zz-_Shared_Assets/English_PDFs/Professional/18_Metadata.Guide.pdf">Dolby Metadata Guide</a>
+</li><li> <a href="http://www.dolby.com/uploadedFiles/zz-_Shared_Assets/English_PDFs/Professional/46_DDEncodingGuidelines.pdf">Dolby Digital Professional Encoding Guidelines</a>
+</li></ul>
+
+<a name="Metadata-Control-Options"></a>
+<h4 class="subsubsection"><a href="ffmpeg.html#toc-Metadata-Control-Options">10.1.1.1 Metadata Control Options</a></h4>
+
+<dl compact="compact">
+<dt> ‘<samp>-per_frame_metadata <var>boolean</var></samp>’</dt>
+<dd><p>Allow Per-Frame Metadata. Specifies whether the encoder should check for
+changing metadata for each frame.
+</p><dl compact="compact">
+<dt> ‘<samp>0</samp>’</dt>
+<dd><p>The metadata values set at initialization will be used for every frame in the
+stream. (default)
+</p></dd>
+<dt> ‘<samp>1</samp>’</dt>
+<dd><p>Metadata values can be changed before encoding each frame.
+</p></dd>
+</dl>
+
+</dd>
+</dl>
+
+<a name="Downmix-Levels"></a>
+<h4 class="subsubsection"><a href="ffmpeg.html#toc-Downmix-Levels">10.1.1.2 Downmix Levels</a></h4>
+
+<dl compact="compact">
+<dt> ‘<samp>-center_mixlev <var>level</var></samp>’</dt>
+<dd><p>Center Mix Level. The amount of gain the decoder should apply to the center
+channel when downmixing to stereo. This field will only be written to the
+bitstream if a center channel is present. The value is specified as a scale
+factor. There are 3 valid values:
+</p><dl compact="compact">
+<dt> ‘<samp>0.707</samp>’</dt>
+<dd><p>Apply -3dB gain
+</p></dd>
+<dt> ‘<samp>0.595</samp>’</dt>
+<dd><p>Apply -4.5dB gain (default)
+</p></dd>
+<dt> ‘<samp>0.500</samp>’</dt>
+<dd><p>Apply -6dB gain
+</p></dd>
+</dl>
+
+</dd>
+<dt> ‘<samp>-surround_mixlev <var>level</var></samp>’</dt>
+<dd><p>Surround Mix Level. The amount of gain the decoder should apply to the surround
+channel(s) when downmixing to stereo. This field will only be written to the
+bitstream if one or more surround channels are present. The value is specified
+as a scale factor. There are 3 valid values:
+</p><dl compact="compact">
+<dt> ‘<samp>0.707</samp>’</dt>
+<dd><p>Apply -3dB gain
+</p></dd>
+<dt> ‘<samp>0.500</samp>’</dt>
+<dd><p>Apply -6dB gain (default)
+</p></dd>
+<dt> ‘<samp>0.000</samp>’</dt>
+<dd><p>Silence Surround Channel(s)
+</p></dd>
+</dl>
+
+</dd>
+</dl>
+
+<a name="Audio-Production-Information"></a>
+<h4 class="subsubsection"><a href="ffmpeg.html#toc-Audio-Production-Information">10.1.1.3 Audio Production Information</a></h4>
+<p>Audio Production Information is optional information describing the mixing
+environment. Either none or both of the fields are written to the bitstream.
+</p>
+<dl compact="compact">
+<dt> ‘<samp>-mixing_level <var>number</var></samp>’</dt>
+<dd><p>Mixing Level. Specifies peak sound pressure level (SPL) in the production
+environment when the mix was mastered. Valid values are 80 to 111, or -1 for
+unknown or not indicated. The default value is -1, but that value cannot be
+used if the Audio Production Information is written to the bitstream. Therefore,
+if the <code>room_type</code> option is not the default value, the <code>mixing_level</code>
+option must not be -1.
+</p>
+</dd>
+<dt> ‘<samp>-room_type <var>type</var></samp>’</dt>
+<dd><p>Room Type. Describes the equalization used during the final mixing session at
+the studio or on the dubbing stage. A large room is a dubbing stage with the
+industry standard X-curve equalization; a small room has flat equalization.
+This field will not be written to the bitstream if both the <code>mixing_level</code>
+option and the <code>room_type</code> option have the default values.
+</p><dl compact="compact">
+<dt> ‘<samp>0</samp>’</dt>
+<dt> ‘<samp>notindicated</samp>’</dt>
+<dd><p>Not Indicated (default)
+</p></dd>
+<dt> ‘<samp>1</samp>’</dt>
+<dt> ‘<samp>large</samp>’</dt>
+<dd><p>Large Room
+</p></dd>
+<dt> ‘<samp>2</samp>’</dt>
+<dt> ‘<samp>small</samp>’</dt>
+<dd><p>Small Room
+</p></dd>
+</dl>
+
+</dd>
+</dl>
+
+<a name="Other-Metadata-Options"></a>
+<h4 class="subsubsection"><a href="ffmpeg.html#toc-Other-Metadata-Options">10.1.1.4 Other Metadata Options</a></h4>
+
+<dl compact="compact">
+<dt> ‘<samp>-copyright <var>boolean</var></samp>’</dt>
+<dd><p>Copyright Indicator. Specifies whether a copyright exists for this audio.
+</p><dl compact="compact">
+<dt> ‘<samp>0</samp>’</dt>
+<dt> ‘<samp>off</samp>’</dt>
+<dd><p>No Copyright Exists (default)
+</p></dd>
+<dt> ‘<samp>1</samp>’</dt>
+<dt> ‘<samp>on</samp>’</dt>
+<dd><p>Copyright Exists
+</p></dd>
+</dl>
+
+</dd>
+<dt> ‘<samp>-dialnorm <var>value</var></samp>’</dt>
+<dd><p>Dialogue Normalization. Indicates how far the average dialogue level of the
+program is below digital 100% full scale (0 dBFS). This parameter determines a
+level shift during audio reproduction that sets the average volume of the
+dialogue to a preset level. The goal is to match volume level between program
+sources. A value of -31dB will result in no volume level change, relative to
+the source volume, during audio reproduction. Valid values are whole numbers in
+the range -31 to -1, with -31 being the default.
+</p>
+</dd>
+<dt> ‘<samp>-dsur_mode <var>mode</var></samp>’</dt>
+<dd><p>Dolby Surround Mode. Specifies whether the stereo signal uses Dolby Surround
+(Pro Logic). This field will only be written to the bitstream if the audio
+stream is stereo. Using this option does <b>NOT</b> mean the encoder will actually
+apply Dolby Surround processing.
+</p><dl compact="compact">
+<dt> ‘<samp>0</samp>’</dt>
+<dt> ‘<samp>notindicated</samp>’</dt>
+<dd><p>Not Indicated (default)
+</p></dd>
+<dt> ‘<samp>1</samp>’</dt>
+<dt> ‘<samp>off</samp>’</dt>
+<dd><p>Not Dolby Surround Encoded
+</p></dd>
+<dt> ‘<samp>2</samp>’</dt>
+<dt> ‘<samp>on</samp>’</dt>
+<dd><p>Dolby Surround Encoded
+</p></dd>
+</dl>
+
+</dd>
+<dt> ‘<samp>-original <var>boolean</var></samp>’</dt>
+<dd><p>Original Bit Stream Indicator. Specifies whether this audio is from the
+original source and not a copy.
+</p><dl compact="compact">
+<dt> ‘<samp>0</samp>’</dt>
+<dt> ‘<samp>off</samp>’</dt>
+<dd><p>Not Original Source
+</p></dd>
+<dt> ‘<samp>1</samp>’</dt>
+<dt> ‘<samp>on</samp>’</dt>
+<dd><p>Original Source (default)
+</p></dd>
+</dl>
+
+</dd>
+</dl>
+
+<a name="Extended-Bitstream-Information"></a>
+<h3 class="subsection"><a href="ffmpeg.html#toc-Extended-Bitstream-Information">10.1.2 Extended Bitstream Information</a></h3>
+<p>The extended bitstream options are part of the Alternate Bit Stream Syntax as
+specified in Annex D of the A/52:2010 standard. The options are grouped
+into two parts.
+If any one parameter in a group is specified, all values in that group will be
+written to the bitstream. Default values are used for those that are written
+but have not been specified. If the mixing levels are written, the decoder
+will use these values instead of the ones specified in the <code>center_mixlev</code>
+and <code>surround_mixlev</code> options if it supports the Alternate Bit Stream
+Syntax.
+</p>
+<a name="Extended-Bitstream-Information-_002d-Part-1"></a>
+<h4 class="subsubsection"><a href="ffmpeg.html#toc-Extended-Bitstream-Information-_002d-Part-1">10.1.2.1 Extended Bitstream Information - Part 1</a></h4>
+
+<dl compact="compact">
+<dt> ‘<samp>-dmix_mode <var>mode</var></samp>’</dt>
+<dd><p>Preferred Stereo Downmix Mode. Allows the user to select either Lt/Rt
+(Dolby Surround) or Lo/Ro (normal stereo) as the preferred stereo downmix mode.
+</p><dl compact="compact">
+<dt> ‘<samp>0</samp>’</dt>
+<dt> ‘<samp>notindicated</samp>’</dt>
+<dd><p>Not Indicated (default)
+</p></dd>
+<dt> ‘<samp>1</samp>’</dt>
+<dt> ‘<samp>ltrt</samp>’</dt>
+<dd><p>Lt/Rt Downmix Preferred
+</p></dd>
+<dt> ‘<samp>2</samp>’</dt>
+<dt> ‘<samp>loro</samp>’</dt>
+<dd><p>Lo/Ro Downmix Preferred
+</p></dd>
+</dl>
+
+</dd>
+<dt> ‘<samp>-ltrt_cmixlev <var>level</var></samp>’</dt>
+<dd><p>Lt/Rt Center Mix Level. The amount of gain the decoder should apply to the
+center channel when downmixing to stereo in Lt/Rt mode.
+</p><dl compact="compact">
+<dt> ‘<samp>1.414</samp>’</dt>
+<dd><p>Apply +3dB gain
+</p></dd>
+<dt> ‘<samp>1.189</samp>’</dt>
+<dd><p>Apply +1.5dB gain
+</p></dd>
+<dt> ‘<samp>1.000</samp>’</dt>
+<dd><p>Apply 0dB gain
+</p></dd>
+<dt> ‘<samp>0.841</samp>’</dt>
+<dd><p>Apply -1.5dB gain
+</p></dd>
+<dt> ‘<samp>0.707</samp>’</dt>
+<dd><p>Apply -3.0dB gain
+</p></dd>
+<dt> ‘<samp>0.595</samp>’</dt>
+<dd><p>Apply -4.5dB gain (default)
+</p></dd>
+<dt> ‘<samp>0.500</samp>’</dt>
+<dd><p>Apply -6.0dB gain
+</p></dd>
+<dt> ‘<samp>0.000</samp>’</dt>
+<dd><p>Silence Center Channel
+</p></dd>
+</dl>
+
+</dd>
+<dt> ‘<samp>-ltrt_surmixlev <var>level</var></samp>’</dt>
+<dd><p>Lt/Rt Surround Mix Level. The amount of gain the decoder should apply to the
+surround channel(s) when downmixing to stereo in Lt/Rt mode.
+</p><dl compact="compact">
+<dt> ‘<samp>0.841</samp>’</dt>
+<dd><p>Apply -1.5dB gain
+</p></dd>
+<dt> ‘<samp>0.707</samp>’</dt>
+<dd><p>Apply -3.0dB gain
+</p></dd>
+<dt> ‘<samp>0.595</samp>’</dt>
+<dd><p>Apply -4.5dB gain
+</p></dd>
+<dt> ‘<samp>0.500</samp>’</dt>
+<dd><p>Apply -6.0dB gain (default)
+</p></dd>
+<dt> ‘<samp>0.000</samp>’</dt>
+<dd><p>Silence Surround Channel(s)
+</p></dd>
+</dl>
+
+</dd>
+<dt> ‘<samp>-loro_cmixlev <var>level</var></samp>’</dt>
+<dd><p>Lo/Ro Center Mix Level. The amount of gain the decoder should apply to the
+center channel when downmixing to stereo in Lo/Ro mode.
+</p><dl compact="compact">
+<dt> ‘<samp>1.414</samp>’</dt>
+<dd><p>Apply +3dB gain
+</p></dd>
+<dt> ‘<samp>1.189</samp>’</dt>
+<dd><p>Apply +1.5dB gain
+</p></dd>
+<dt> ‘<samp>1.000</samp>’</dt>
+<dd><p>Apply 0dB gain
+</p></dd>
+<dt> ‘<samp>0.841</samp>’</dt>
+<dd><p>Apply -1.5dB gain
+</p></dd>
+<dt> ‘<samp>0.707</samp>’</dt>
+<dd><p>Apply -3.0dB gain
+</p></dd>
+<dt> ‘<samp>0.595</samp>’</dt>
+<dd><p>Apply -4.5dB gain (default)
+</p></dd>
+<dt> ‘<samp>0.500</samp>’</dt>
+<dd><p>Apply -6.0dB gain
+</p></dd>
+<dt> ‘<samp>0.000</samp>’</dt>
+<dd><p>Silence Center Channel
+</p></dd>
+</dl>
+
+</dd>
+<dt> ‘<samp>-loro_surmixlev <var>level</var></samp>’</dt>
+<dd><p>Lo/Ro Surround Mix Level. The amount of gain the decoder should apply to the
+surround channel(s) when downmixing to stereo in Lo/Ro mode.
+</p><dl compact="compact">
+<dt> ‘<samp>0.841</samp>’</dt>
+<dd><p>Apply -1.5dB gain
+</p></dd>
+<dt> ‘<samp>0.707</samp>’</dt>
+<dd><p>Apply -3.0dB gain
+</p></dd>
+<dt> ‘<samp>0.595</samp>’</dt>
+<dd><p>Apply -4.5dB gain
+</p></dd>
+<dt> ‘<samp>0.500</samp>’</dt>
+<dd><p>Apply -6.0dB gain (default)
+</p></dd>
+<dt> ‘<samp>0.000</samp>’</dt>
+<dd><p>Silence Surround Channel(s)
+</p></dd>
+</dl>
+
+</dd>
+</dl>
+
+<a name="Extended-Bitstream-Information-_002d-Part-2"></a>
+<h4 class="subsubsection"><a href="ffmpeg.html#toc-Extended-Bitstream-Information-_002d-Part-2">10.1.2.2 Extended Bitstream Information - Part 2</a></h4>
+
+<dl compact="compact">
+<dt> ‘<samp>-dsurex_mode <var>mode</var></samp>’</dt>
+<dd><p>Dolby Surround EX Mode. Indicates whether the stream uses Dolby Surround EX
+(7.1 matrixed to 5.1). Using this option does <b>NOT</b> mean the encoder will actually
+apply Dolby Surround EX processing.
+</p><dl compact="compact">
+<dt> ‘<samp>0</samp>’</dt>
+<dt> ‘<samp>notindicated</samp>’</dt>
+<dd><p>Not Indicated (default)
+</p></dd>
+<dt> ‘<samp>1</samp>’</dt>
+<dt> ‘<samp>on</samp>’</dt>
+<dd><p>Dolby Surround EX On
+</p></dd>
+<dt> ‘<samp>2</samp>’</dt>
+<dt> ‘<samp>off</samp>’</dt>
+<dd><p>Dolby Surround EX Off
+</p></dd>
+</dl>
+
+</dd>
+<dt> ‘<samp>-dheadphone_mode <var>mode</var></samp>’</dt>
+<dd><p>Dolby Headphone Mode. Indicates whether the stream uses Dolby Headphone
+encoding (multi-channel matrixed to 2.0 for use with headphones). Using this
+option does <b>NOT</b> mean the encoder will actually apply Dolby Headphone
+processing.
+</p><dl compact="compact">
+<dt> ‘<samp>0</samp>’</dt>
+<dt> ‘<samp>notindicated</samp>’</dt>
+<dd><p>Not Indicated (default)
+</p></dd>
+<dt> ‘<samp>1</samp>’</dt>
+<dt> ‘<samp>on</samp>’</dt>
+<dd><p>Dolby Headphone On
+</p></dd>
+<dt> ‘<samp>2</samp>’</dt>
+<dt> ‘<samp>off</samp>’</dt>
+<dd><p>Dolby Headphone Off
+</p></dd>
+</dl>
+
+</dd>
+<dt> ‘<samp>-ad_conv_type <var>type</var></samp>’</dt>
+<dd><p>A/D Converter Type. Indicates whether the audio has passed through HDCD A/D
+conversion.
+</p><dl compact="compact">
+<dt> ‘<samp>0</samp>’</dt>
+<dt> ‘<samp>standard</samp>’</dt>
+<dd><p>Standard A/D Converter (default)
+</p></dd>
+<dt> ‘<samp>1</samp>’</dt>
+<dt> ‘<samp>hdcd</samp>’</dt>
+<dd><p>HDCD A/D Converter
+</p></dd>
+</dl>
+
+</dd>
+</dl>
+
+<a name="Other-AC_002d3-Encoding-Options"></a>
+<h3 class="subsection"><a href="ffmpeg.html#toc-Other-AC_002d3-Encoding-Options">10.1.3 Other AC-3 Encoding Options</a></h3>
+
+<dl compact="compact">
+<dt> ‘<samp>-stereo_rematrixing <var>boolean</var></samp>’</dt>
+<dd><p>Stereo Rematrixing. Enables/Disables use of rematrixing for stereo input. This
+is an optional AC-3 feature that increases quality by selectively encoding
+the left/right channels as mid/side. This option is enabled by default, and it
+is highly recommended that it be left as enabled except for testing purposes.
+</p>
+</dd>
+</dl>
+
+<a name="Floating_002dPoint_002dOnly-AC_002d3-Encoding-Options"></a>
+<h3 class="subsection"><a href="ffmpeg.html#toc-Floating_002dPoint_002dOnly-AC_002d3-Encoding-Options">10.1.4 Floating-Point-Only AC-3 Encoding Options</a></h3>
+
+<p>These options are only valid for the floating-point encoder and do not exist
+for the fixed-point encoder due to the corresponding features not being
+implemented in fixed-point.
+</p>
+<dl compact="compact">
+<dt> ‘<samp>-channel_coupling <var>boolean</var></samp>’</dt>
+<dd><p>Enables/Disables use of channel coupling, which is an optional AC-3 feature
+that increases quality by combining high frequency information from multiple
+channels into a single channel. The per-channel high frequency information is
+sent with less accuracy in both the frequency and time domains. This allows
+more bits to be used for lower frequencies while preserving enough information
+to reconstruct the high frequencies. This option is enabled by default for the
+floating-point encoder and should generally be left as enabled except for
+testing purposes or to increase encoding speed.
+</p><dl compact="compact">
+<dt> ‘<samp>-1</samp>’</dt>
+<dt> ‘<samp>auto</samp>’</dt>
+<dd><p>Selected by Encoder (default)
+</p></dd>
+<dt> ‘<samp>0</samp>’</dt>
+<dt> ‘<samp>off</samp>’</dt>
+<dd><p>Disable Channel Coupling
+</p></dd>
+<dt> ‘<samp>1</samp>’</dt>
+<dt> ‘<samp>on</samp>’</dt>
+<dd><p>Enable Channel Coupling
+</p></dd>
+</dl>
+
+</dd>
+<dt> ‘<samp>-cpl_start_band <var>number</var></samp>’</dt>
+<dd><p>Coupling Start Band. Sets the channel coupling start band, from 1 to 15. If a
+value higher than the bandwidth is used, it will be reduced to 1 less than the
+coupling end band. If <var>auto</var> is used, the start band will be determined by
+the encoder based on the bit rate, sample rate, and channel layout. This option
+has no effect if channel coupling is disabled.
+</p><dl compact="compact">
+<dt> ‘<samp>-1</samp>’</dt>
+<dt> ‘<samp>auto</samp>’</dt>
+<dd><p>Selected by Encoder (default)
+</p></dd>
+</dl>
+
+</dd>
+</dl>
+
+
+<a name="Video-Encoders"></a>
+<h1 class="chapter"><a href="ffmpeg.html#toc-Video-Encoders">11. Video Encoders</a></h1>
+
+<p>A description of some of the currently available video encoders
+follows.
+</p>
+<a name="libvpx"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-libvpx">11.1 libvpx</a></h2>
+
+<p>VP8 format supported through libvpx.
+</p>
+<p>Requires the presence of the libvpx headers and library during configuration.
+You need to explicitly configure the build with <code>--enable-libvpx</code>.
+</p>
+<a name="Options-3"></a>
+<h3 class="subsection"><a href="ffmpeg.html#toc-Options-3">11.1.1 Options</a></h3>
+
+<p>Mapping from FFmpeg to libvpx options with conversion notes in parentheses.
+</p>
+<dl compact="compact">
+<dt> ‘<samp>threads</samp>’</dt>
+<dd><p>g_threads
+</p>
+</dd>
+<dt> ‘<samp>profile</samp>’</dt>
+<dd><p>g_profile
+</p>
+</dd>
+<dt> ‘<samp>vb</samp>’</dt>
+<dd><p>rc_target_bitrate
+</p>
+</dd>
+<dt> ‘<samp>g</samp>’</dt>
+<dd><p>kf_max_dist
+</p>
+</dd>
+<dt> ‘<samp>keyint_min</samp>’</dt>
+<dd><p>kf_min_dist
+</p>
+</dd>
+<dt> ‘<samp>qmin</samp>’</dt>
+<dd><p>rc_min_quantizer
+</p>
+</dd>
+<dt> ‘<samp>qmax</samp>’</dt>
+<dd><p>rc_max_quantizer
+</p>
+</dd>
+<dt> ‘<samp>bufsize, vb</samp>’</dt>
+<dd><p>rc_buf_sz
+<code>(bufsize * 1000 / vb)</code>
+</p>
+<p>rc_buf_optimal_sz
+<code>(bufsize * 1000 / vb * 5 / 6)</code>
+</p>
+</dd>
+<dt> ‘<samp>rc_init_occupancy, vb</samp>’</dt>
+<dd><p>rc_buf_initial_sz
+<code>(rc_init_occupancy * 1000 / vb)</code>
+</p>
+</dd>
+<dt> ‘<samp>rc_buffer_aggressivity</samp>’</dt>
+<dd><p>rc_undershoot_pct
+</p>
+</dd>
+<dt> ‘<samp>skip_threshold</samp>’</dt>
+<dd><p>rc_dropframe_thresh
+</p>
+</dd>
+<dt> ‘<samp>qcomp</samp>’</dt>
+<dd><p>rc_2pass_vbr_bias_pct
+</p>
+</dd>
+<dt> ‘<samp>maxrate, vb</samp>’</dt>
+<dd><p>rc_2pass_vbr_maxsection_pct
+<code>(maxrate * 100 / vb)</code>
+</p>
+</dd>
+<dt> ‘<samp>minrate, vb</samp>’</dt>
+<dd><p>rc_2pass_vbr_minsection_pct
+<code>(minrate * 100 / vb)</code>
+</p>
+</dd>
+<dt> ‘<samp>minrate, maxrate, vb</samp>’</dt>
+<dd><p><code>VPX_CBR</code>
+<code>(minrate == maxrate == vb)</code>
+</p>
+</dd>
+<dt> ‘<samp>crf</samp>’</dt>
+<dd><p><code>VPX_CQ</code>, <code>VP8E_SET_CQ_LEVEL</code>
+</p>
+</dd>
+<dt> ‘<samp>quality</samp>’</dt>
+<dd><dl compact="compact">
+<dt> ‘<samp><var>best</var></samp>’</dt>
+<dd><p><code>VPX_DL_BEST_QUALITY</code>
+</p></dd>
+<dt> ‘<samp><var>good</var></samp>’</dt>
+<dd><p><code>VPX_DL_GOOD_QUALITY</code>
+</p></dd>
+<dt> ‘<samp><var>realtime</var></samp>’</dt>
+<dd><p><code>VPX_DL_REALTIME</code>
+</p></dd>
+</dl>
+
+</dd>
+<dt> ‘<samp>speed</samp>’</dt>
+<dd><p><code>VP8E_SET_CPUUSED</code>
+</p>
+</dd>
+<dt> ‘<samp>nr</samp>’</dt>
+<dd><p><code>VP8E_SET_NOISE_SENSITIVITY</code>
+</p>
+</dd>
+<dt> ‘<samp>mb_threshold</samp>’</dt>
+<dd><p><code>VP8E_SET_STATIC_THRESHOLD</code>
+</p>
+</dd>
+<dt> ‘<samp>slices</samp>’</dt>
+<dd><p><code>VP8E_SET_TOKEN_PARTITIONS</code>
+</p>
+</dd>
+<dt> ‘<samp>Alternate reference frame related</samp>’</dt>
+<dd><dl compact="compact">
+<dt> ‘<samp>vp8flags altref</samp>’</dt>
+<dd><p><code>VP8E_SET_ENABLEAUTOALTREF</code>
+</p></dd>
+<dt> ‘<samp><var>arnr_max_frames</var></samp>’</dt>
+<dd><p><code>VP8E_SET_ARNR_MAXFRAMES</code>
+</p></dd>
+<dt> ‘<samp><var>arnr_type</var></samp>’</dt>
+<dd><p><code>VP8E_SET_ARNR_TYPE</code>
+</p></dd>
+<dt> ‘<samp><var>arnr_strength</var></samp>’</dt>
+<dd><p><code>VP8E_SET_ARNR_STRENGTH</code>
+</p></dd>
+<dt> ‘<samp><var>rc_lookahead</var></samp>’</dt>
+<dd><p>g_lag_in_frames
+</p></dd>
+</dl>
+
+</dd>
+<dt> ‘<samp>vp8flags error_resilient</samp>’</dt>
+<dd><p>g_error_resilient
+</p>
+</dd>
+</dl>
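<p>The buffer-size conversions noted above are plain arithmetic: FFmpeg's
rate-control options are expressed in bits and bits per second, while
libvpx buffer sizes are in milliseconds. A Python sketch with
hypothetical example values:
</p>

```python
# Hypothetical example values; the formulas mirror the conversion
# notes in the option mapping above.
vb = 1_000_000                # target bitrate, bits/s
bufsize = 2_000_000           # decoder buffer size, bits
rc_init_occupancy = 1_000_000 # initial buffer fullness, bits

rc_buf_sz = bufsize * 1000 // vb                    # buffer size in ms
rc_buf_optimal_sz = bufsize * 1000 // vb * 5 // 6   # optimal fullness in ms
rc_buf_initial_sz = rc_init_occupancy * 1000 // vb  # initial fullness in ms
print(rc_buf_sz, rc_buf_optimal_sz, rc_buf_initial_sz)  # 2000 1666 1000
```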
+
+<p>For more information about libvpx see:
+<a href="http://www.webmproject.org/">http://www.webmproject.org/</a>
+</p>
+<a name="libx264"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-libx264">11.2 libx264</a></h2>
+
+<p>H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 format supported through
+libx264.
+</p>
+<p>Requires the presence of the libx264 headers and library during
+configuration. You need to explicitly configure the build with
+<code>--enable-libx264</code>.
+</p>
+<a name="Options-4"></a>
+<h3 class="subsection"><a href="ffmpeg.html#toc-Options-4">11.2.1 Options</a></h3>
+
+<dl compact="compact">
+<dt> ‘<samp>preset <var>preset_name</var></samp>’</dt>
+<dd><p>Set the encoding preset.
+</p>
+</dd>
+<dt> ‘<samp>tune <var>tune_name</var></samp>’</dt>
+<dd><p>Tune the encoding params.
+Deprecated in favor of <var>x264opts</var>.
+</p>
+</dd>
+<dt> ‘<samp>fastfirstpass <var>bool</var></samp>’</dt>
+<dd><p>Use fast settings when encoding first pass, default value is 1.
+Deprecated in favor of <var>x264opts</var>.
+</p>
+</dd>
+<dt> ‘<samp>profile <var>profile_name</var></samp>’</dt>
+<dd><p>Set profile restrictions.
+Deprecated in favor of <var>x264opts</var>.
+</p>
+</dd>
+<dt> ‘<samp>level <var>level</var></samp>’</dt>
+<dd><p>Specify level (as defined by Annex A).
+Deprecated in favor of <var>x264opts</var>.
+</p>
+</dd>
+<dt> ‘<samp>passlogfile <var>filename</var></samp>’</dt>
+<dd><p>Specify filename for 2 pass stats.
+Deprecated in favor of <var>x264opts</var>.
+</p>
+</dd>
+<dt> ‘<samp>wpredp <var>wpred_type</var></samp>’</dt>
+<dd><p>Specify Weighted prediction for P-frames.
+Deprecated in favor of <var>x264opts</var>.
+</p>
+</dd>
+<dt> ‘<samp>x264opts <var>options</var></samp>’</dt>
+<dd><p>Set any x264 option; see the x264 manual for the list.
+</p>
+<p><var>options</var> is a list of <var>key</var>=<var>value</var> pairs
+separated by ":".
+</p></dd>
+</dl>
+
+<p>For example, to specify libx264 encoding options with ‘<tt>ffmpeg</tt>’:
+</p><table><tr><td> </td><td><pre class="example">ffmpeg -i foo.mpg -vcodec libx264 -x264opts keyint=123:min-keyint=20 -an out.mkv
+</pre></td></tr></table>
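<p>The option value is a simple ":"-separated key=value list; a
hypothetical parser sketch in Python:
</p>

```python
# Hypothetical parser for the x264opts format: ':'-separated key=value pairs.
def parse_x264opts(options):
    return dict(kv.split('=', 1) for kv in options.split(':'))

print(parse_x264opts('keyint=123:min-keyint=20'))
# {'keyint': '123', 'min-keyint': '20'}
```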
+
+<p>For more information about libx264 and the supported options see:
+<a href="http://www.videolan.org/developers/x264.html">http://www.videolan.org/developers/x264.html</a>
+</p>
+<a name="Demuxers"></a>
+<h1 class="chapter"><a href="ffmpeg.html#toc-Demuxers">12. Demuxers</a></h1>
+
+<p>Demuxers are configured elements in FFmpeg which allow reading
+multimedia streams from a particular type of file.
+</p>
+<p>When you configure your FFmpeg build, all the supported demuxers
+are enabled by default. You can list all available ones using the
+configure option <code>--list-demuxers</code>.
+</p>
+<p>You can disable all the demuxers using the configure option
+<code>--disable-demuxers</code>, and selectively enable a single demuxer with
+the option <code>--enable-demuxer=<var>DEMUXER</var></code>, or disable it
+with the option <code>--disable-demuxer=<var>DEMUXER</var></code>.
+</p>
+<p>The option <code>-formats</code> of the ff* tools will display the list of
+enabled demuxers.
+</p>
+<p>The description of some of the currently available demuxers follows.
+</p>
+<a name="image2"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-image2">12.1 image2</a></h2>
+
+<p>Image file demuxer.
+</p>
+<p>This demuxer reads from a list of image files specified by a pattern.
+</p>
+<p>The pattern may contain the string "%d" or "%0<var>N</var>d", which
+specifies the position of the characters representing a sequential
+number in each filename matched by the pattern. If the form
+"%0<var>N</var>d" is used, the string representing the number in each
+filename is 0-padded and <var>N</var> is the total number of 0-padded
+digits representing the number. The literal character ’%’ can be
+specified in the pattern with the string "%%".
+</p>
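<p>The zero-padded numbering described above matches the behavior of the
printf-style "%0<var>N</var>d" conversion, so a plain shell <code>printf</code>
can preview the filenames a given pattern will match. A minimal
illustrative sketch (not part of the demuxer itself; the "img-" prefix is
just an example):</p>

```shell
# Preview the filenames matched by the image2 pattern "img-%03d.bmp":
# the sequence number is zero-padded to 3 digits.
printf 'img-%03d.bmp\n' 1   # img-001.bmp
printf 'img-%03d.bmp\n' 10  # img-010.bmp

# With plain "%d" no padding is applied.
printf 'img-%d.jpg\n' 10    # img-10.jpg
```

This is a convenient way to generate test sequences to feed the demuxer.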
+<p>If the pattern contains "%d" or "%0<var>N</var>d", the first filename of
+the file list specified by the pattern must contain a number between 0
+and 4 inclusive, and all the following numbers must be sequential. This
+limitation will hopefully be fixed.
+</p>
+<p>The pattern may contain a suffix which is used to automatically
+determine the format of the images contained in the files.
+</p>
+<p>For example the pattern "img-%03d.bmp" will match a sequence of
+filenames of the form ‘<tt>img-001.bmp</tt>’, ‘<tt>img-002.bmp</tt>’, ...,
+‘<tt>img-010.bmp</tt>’, etc.; the pattern "i%%m%%g-%d.jpg" will match a
+sequence of filenames of the form ‘<tt>i%m%g-1.jpg</tt>’,
+‘<tt>i%m%g-2.jpg</tt>’, ..., ‘<tt>i%m%g-10.jpg</tt>’, etc.
+</p>
+<p>The size, the pixel format, and the format of each image must be the
+same for all the files in the sequence.
+</p>
+<p>The following example shows how to use ‘<tt>ffmpeg</tt>’ for creating a
+video from the images in the file sequence ‘<tt>img-001.jpeg</tt>’,
+‘<tt>img-002.jpeg</tt>’, ..., assuming an input framerate of 10 frames per
+second:
+</p><table><tr><td> </td><td><pre class="example">ffmpeg -r 10 -f image2 -i 'img-%03d.jpeg' out.avi
+</pre></td></tr></table>
+
+<p>Note that the pattern does not necessarily have to contain "%d" or
+"%0<var>N</var>d"; for example, to convert a single image file
+‘<tt>img.jpeg</tt>’ you can employ the command:
+</p><table><tr><td> </td><td><pre class="example">ffmpeg -f image2 -i img.jpeg img.png
+</pre></td></tr></table>
+
+<a name="applehttp"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-applehttp">12.2 applehttp</a></h2>
+
+<p>Apple HTTP Live Streaming demuxer.
+</p>
+<p>This demuxer presents all AVStreams from all variant streams.
+The id field is set to the bitrate variant index number. By setting
+the discard flags on AVStreams (by pressing ’a’ or ’v’ in ffplay),
+the caller can decide which variant streams to actually receive.
+The total bitrate of the variant that the stream belongs to is
+available in a metadata key named "variant_bitrate".
+</p>
+<a name="Muxers"></a>
+<h1 class="chapter"><a href="ffmpeg.html#toc-Muxers">13. Muxers</a></h1>
+
+<p>Muxers are configured elements in FFmpeg which allow writing
+multimedia streams to a particular type of file.
+</p>
+<p>When you configure your FFmpeg build, all the supported muxers
+are enabled by default. You can list all available muxers using the
+configure option <code>--list-muxers</code>.
+</p>
+<p>You can disable all the muxers with the configure option
+<code>--disable-muxers</code> and selectively enable / disable single muxers
+with the options <code>--enable-muxer=<var>MUXER</var></code> /
+<code>--disable-muxer=<var>MUXER</var></code>.
+</p>
+<p>The option <code>-formats</code> of the ff* tools will display the list of
+enabled muxers.
+</p>
+<p>A description of some of the currently available muxers follows.
+</p>
+<p><a name="crc"></a>
+</p><a name="crc-1"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-crc-1">13.1 crc</a></h2>
+
+<p>CRC (Cyclic Redundancy Check) testing format.
+</p>
+<p>This muxer computes and prints the Adler-32 CRC of all the input audio
+and video frames. By default audio frames are converted to signed
+16-bit raw audio and video frames to raw video before computing the
+CRC.
+</p>
+<p>The output of the muxer consists of a single line of the form:
+CRC=0x<var>CRC</var>, where <var>CRC</var> is a hexadecimal number 0-padded to
+8 digits containing the CRC for all the decoded input frames.
+</p>
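<p>The Adler-32 checksum itself is simple enough to sketch in plain shell.
The following illustrative snippet (not part of FFmpeg) checksums a literal
string and prints it in the same CRC=0x<var>CRC</var> form; the muxer, of
course, operates on the decoded frame data instead:</p>

```shell
# Illustrative Adler-32 in POSIX shell: two running sums modulo 65521,
# printed zero-padded to 8 hex digits like the crc muxer's output line.
adler32() {
  A=1 B=0
  for byte in $(printf %s "$1" | od -An -tu1); do
    A=$(( (A + byte) % 65521 ))
    B=$(( (B + A) % 65521 ))
  done
  printf 'CRC=0x%08x\n' $(( B * 65536 + A ))
}

adler32 "Wikipedia"   # prints CRC=0x11e60398
```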
+<p>For example to compute the CRC of the input, and store it in the file
+‘<tt>out.crc</tt>’:
+</p><table><tr><td> </td><td><pre class="example">ffmpeg -i INPUT -f crc out.crc
+</pre></td></tr></table>
+
+<p>You can print the CRC to stdout with the command:
+</p><table><tr><td> </td><td><pre class="example">ffmpeg -i INPUT -f crc -
+</pre></td></tr></table>
+
+<p>You can select the output format of each frame with ‘<tt>ffmpeg</tt>’ by
+specifying the audio and video codec and format. For example to
+compute the CRC of the input audio converted to PCM unsigned 8-bit
+and the input video converted to MPEG-2 video, use the command:
+</p><table><tr><td> </td><td><pre class="example">ffmpeg -i INPUT -acodec pcm_u8 -vcodec mpeg2video -f crc -
+</pre></td></tr></table>
+
+<p>See also the <a href="#framecrc">framecrc</a> muxer.
+</p>
+<p><a name="framecrc"></a>
+</p><a name="framecrc-1"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-framecrc-1">13.2 framecrc</a></h2>
+
+<p>Per-frame CRC (Cyclic Redundancy Check) testing format.
+</p>
+<p>This muxer computes and prints the Adler-32 CRC for each decoded audio
+and video frame. By default audio frames are converted to signed
+16-bit raw audio and video frames to raw video before computing the
+CRC.
+</p>
+<p>The output of the muxer consists of a line for each audio and video
+frame of the form: <var>stream_index</var>, <var>frame_dts</var>,
+<var>frame_size</var>, 0x<var>CRC</var>, where <var>CRC</var> is a hexadecimal
+number 0-padded to 8 digits containing the CRC of the decoded frame.
+</p>
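<p>The per-frame line can be reproduced with a printf-style format string.
A small sketch with made-up field values (stream index 0, DTS 0, a
4608-byte frame, and a purely hypothetical CRC):</p>

```shell
# Format one framecrc-style output line from its four fields:
# stream_index, frame_dts, frame_size, CRC (zero-padded to 8 hex digits).
framecrc_line() {
  printf '%d, %d, %d, 0x%08x\n' "$1" "$2" "$3" "$4"
}

framecrc_line 0 0 4608 $((0x88c4d19a))   # 0, 0, 4608, 0x88c4d19a
```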
+<p>For example to compute the CRC of each decoded frame in the input, and
+store it in the file ‘<tt>out.crc</tt>’:
+</p><table><tr><td> </td><td><pre class="example">ffmpeg -i INPUT -f framecrc out.crc
+</pre></td></tr></table>
+
+<p>You can print the CRC of each decoded frame to stdout with the command:
+</p><table><tr><td> </td><td><pre class="example">ffmpeg -i INPUT -f framecrc -
+</pre></td></tr></table>
+
+<p>You can select the output format of each frame with ‘<tt>ffmpeg</tt>’ by
+specifying the audio and video codec and format. For example, to
+compute the CRC of each decoded input audio frame converted to PCM
+unsigned 8-bit and of each decoded input video frame converted to
+MPEG-2 video, use the command:
+</p><table><tr><td> </td><td><pre class="example">ffmpeg -i INPUT -acodec pcm_u8 -vcodec mpeg2video -f framecrc -
+</pre></td></tr></table>
+
+<p>See also the <a href="#crc">crc</a> muxer.
+</p>
+<a name="image2-1"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-image2-1">13.3 image2</a></h2>
+
+<p>Image file muxer.
+</p>
+<p>The image file muxer writes video frames to image files.
+</p>
+<p>The output filenames are specified by a pattern, which can be used to
+produce sequentially numbered series of files.
+The pattern may contain the string "%d" or "%0<var>N</var>d", which
+specifies the position of the characters representing a numbering in
+the filenames. If the form "%0<var>N</var>d" is used, the string
+representing the number in each filename is 0-padded to <var>N</var>
+digits. The literal character ’%’ can be specified in the pattern with
+the string "%%".
+</p>
+<p>If the pattern contains "%d" or "%0<var>N</var>d", the first filename of
+the file list specified will contain the number 1, all the following
+numbers will be sequential.
+</p>
+<p>The pattern may contain a suffix which is used to automatically
+determine the format of the image files to write.
+</p>
+<p>For example the pattern "img-%03d.bmp" will specify a sequence of
+filenames of the form ‘<tt>img-001.bmp</tt>’, ‘<tt>img-002.bmp</tt>’, ...,
+‘<tt>img-010.bmp</tt>’, etc.
+The pattern "img%%-%d.jpg" will specify a sequence of filenames of the
+form ‘<tt>img%-1.jpg</tt>’, ‘<tt>img%-2.jpg</tt>’, ..., ‘<tt>img%-10.jpg</tt>’,
+etc.
+</p>
+<p>The following example shows how to use ‘<tt>ffmpeg</tt>’ for creating a
+sequence of files ‘<tt>img-001.jpeg</tt>’, ‘<tt>img-002.jpeg</tt>’, ...,
+taking one image every second from the input video:
+</p><table><tr><td> </td><td><pre class="example">ffmpeg -i in.avi -r 1 -f image2 'img-%03d.jpeg'
+</pre></td></tr></table>
+
+<p>Note that with ‘<tt>ffmpeg</tt>’, if the format is not specified with the
+<code>-f</code> option and the output filename specifies an image file
+format, the image2 muxer is automatically selected, so the previous
+command can be written as:
+</p><table><tr><td> </td><td><pre class="example">ffmpeg -i in.avi -r 1 'img-%03d.jpeg'
+</pre></td></tr></table>
+
+<p>Note also that the pattern does not necessarily have to contain "%d" or
+"%0<var>N</var>d"; for example, to create a single image file
+‘<tt>img.jpeg</tt>’ from the input video you can employ the command:
+</p><table><tr><td> </td><td><pre class="example">ffmpeg -i in.avi -f image2 -vframes 1 img.jpeg
+</pre></td></tr></table>
+
+<p>The image muxer supports the .Y.U.V image file format. This format is
+special in that each image frame consists of three files, one for
+each of the YUV420P components. To read or write this image file format,
+specify the name of the ’.Y’ file. The muxer will automatically open the
+’.U’ and ’.V’ files as required.
+</p>
+<a name="mpegts"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-mpegts">13.4 mpegts</a></h2>
+
+<p>MPEG transport stream muxer.
+</p>
+<p>This muxer implements ISO 13818-1 and part of ETSI EN 300 468.
+</p>
+<p>The muxer options are:
+</p>
+<dl compact="compact">
+<dt> ‘<samp>-mpegts_original_network_id <var>number</var></samp>’</dt>
+<dd><p>Set the original_network_id (default 0x0001). This is the unique
+identifier of a network in DVB. Its main use is in the unique
+identification of a service through the path Original_Network_ID,
+Transport_Stream_ID.
+</p></dd>
+<dt> ‘<samp>-mpegts_transport_stream_id <var>number</var></samp>’</dt>
+<dd><p>Set the transport_stream_id (default 0x0001). This identifies a
+transponder in DVB.
+</p></dd>
+<dt> ‘<samp>-mpegts_service_id <var>number</var></samp>’</dt>
+<dd><p>Set the service_id (default 0x0001) also known as program in DVB.
+</p></dd>
+<dt> ‘<samp>-mpegts_pmt_start_pid <var>number</var></samp>’</dt>
+<dd><p>Set the first PID for PMT (default 0x1000, max 0x1f00).
+</p></dd>
+<dt> ‘<samp>-mpegts_start_pid <var>number</var></samp>’</dt>
+<dd><p>Set the first PID for data packets (default 0x0100, max 0x0f00).
+</p></dd>
+</dl>
+
+<p>The recognized metadata settings in the mpegts muxer are <code>service_provider</code>
+and <code>service_name</code>. If they are not set, the default for
+<code>service_provider</code> is "FFmpeg" and the default for
+<code>service_name</code> is "Service01".
+</p>
+<table><tr><td> </td><td><pre class="example">ffmpeg -i file.mpg -acodec copy -vcodec copy \
+ -mpegts_original_network_id 0x1122 \
+ -mpegts_transport_stream_id 0x3344 \
+ -mpegts_service_id 0x5566 \
+ -mpegts_pmt_start_pid 0x1500 \
+ -mpegts_start_pid 0x150 \
+ -metadata service_provider="Some provider" \
+ -metadata service_name="Some Channel" \
+ -y out.ts
+</pre></td></tr></table>
+
+<a name="null"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-null">13.5 null</a></h2>
+
+<p>Null muxer.
+</p>
+<p>This muxer does not generate any output file, it is mainly useful for
+testing or benchmarking purposes.
+</p>
+<p>For example to benchmark decoding with ‘<tt>ffmpeg</tt>’ you can use the
+command:
+</p><table><tr><td> </td><td><pre class="example">ffmpeg -benchmark -i INPUT -f null out.null
+</pre></td></tr></table>
+
+<p>Note that the above command does not read or write the ‘<tt>out.null</tt>’
+file, but specifying the output file is required by the ‘<tt>ffmpeg</tt>’
+syntax.
+</p>
+<p>Alternatively you can write the command as:
+</p><table><tr><td> </td><td><pre class="example">ffmpeg -benchmark -i INPUT -f null -
+</pre></td></tr></table>
+
+<a name="matroska"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-matroska">13.6 matroska</a></h2>
+
+<p>Matroska container muxer.
+</p>
+<p>This muxer implements the matroska and webm container specs.
+</p>
+<p>The recognized metadata settings in this muxer are:
+</p>
+<dl compact="compact">
+<dt> ‘<samp>title=<var>title name</var></samp>’</dt>
+<dd><p>Name provided to a single track
+</p></dd>
+</dl>
+
+<dl compact="compact">
+<dt> ‘<samp>language=<var>language name</var></samp>’</dt>
+<dd><p>Specifies the language of the track in the Matroska languages form
+</p></dd>
+</dl>
+
+<dl compact="compact">
+<dt> ‘<samp>stereo_mode=<var>mode</var></samp>’</dt>
+<dd><p>Stereo 3D video layout of two views in a single video track
+</p><dl compact="compact">
+<dt> ‘<samp>mono</samp>’</dt>
+<dd><p>video is not stereo
+</p></dd>
+<dt> ‘<samp>left_right</samp>’</dt>
+<dd><p>Both views are arranged side by side, Left-eye view is on the left
+</p></dd>
+<dt> ‘<samp>bottom_top</samp>’</dt>
+<dd><p>Both views are arranged in top-bottom orientation, Left-eye view is at bottom
+</p></dd>
+<dt> ‘<samp>top_bottom</samp>’</dt>
+<dd><p>Both views are arranged in top-bottom orientation, Left-eye view is on top
+</p></dd>
+<dt> ‘<samp>checkerboard_rl</samp>’</dt>
+<dd><p>Each view is arranged in a checkerboard interleaved pattern, Left-eye view being first
+</p></dd>
+<dt> ‘<samp>checkerboard_lr</samp>’</dt>
+<dd><p>Each view is arranged in a checkerboard interleaved pattern, Right-eye view being first
+</p></dd>
+<dt> ‘<samp>row_interleaved_rl</samp>’</dt>
+<dd><p>Each view is constituted by a row based interleaving, Right-eye view is first row
+</p></dd>
+<dt> ‘<samp>row_interleaved_lr</samp>’</dt>
+<dd><p>Each view is constituted by a row based interleaving, Left-eye view is first row
+</p></dd>
+<dt> ‘<samp>col_interleaved_rl</samp>’</dt>
+<dd><p>Both views are arranged in a column based interleaving manner, Right-eye view is first column
+</p></dd>
+<dt> ‘<samp>col_interleaved_lr</samp>’</dt>
+<dd><p>Both views are arranged in a column based interleaving manner, Left-eye view is first column
+</p></dd>
+<dt> ‘<samp>anaglyph_cyan_red</samp>’</dt>
+<dd><p>All frames are in anaglyph format viewable through red-cyan filters
+</p></dd>
+<dt> ‘<samp>right_left</samp>’</dt>
+<dd><p>Both views are arranged side by side, Right-eye view is on the left
+</p></dd>
+<dt> ‘<samp>anaglyph_green_magenta</samp>’</dt>
+<dd><p>All frames are in anaglyph format viewable through green-magenta filters
+</p></dd>
+<dt> ‘<samp>block_lr</samp>’</dt>
+<dd><p>Both eyes laced in one Block, Left-eye view is first
+</p></dd>
+<dt> ‘<samp>block_rl</samp>’</dt>
+<dd><p>Both eyes laced in one Block, Right-eye view is first
+</p></dd>
+</dl>
+</dd>
+</dl>
+
+<p>For example a 3D WebM clip can be created using the following command line:
+</p><table><tr><td> </td><td><pre class="example">ffmpeg -i sample_left_right_clip.mpg -an -vcodec libvpx -metadata stereo_mode=left_right -y stereo_clip.webm
+</pre></td></tr></table>
+
+<a name="Input-Devices"></a>
+<h1 class="chapter"><a href="ffmpeg.html#toc-Input-Devices">14. Input Devices</a></h1>
+
+<p>Input devices are configured elements in FFmpeg which allow accessing
+the data coming from a multimedia device attached to your system.
+</p>
+<p>When you configure your FFmpeg build, all the supported input devices
+are enabled by default. You can list all available input devices using the
+configure option <code>--list-indevs</code>.
+</p>
+<p>You can disable all the input devices using the configure option
+<code>--disable-indevs</code>, and selectively enable an input device using the
+option <code>--enable-indev=<var>INDEV</var></code>, or you can disable a particular
+input device using the option <code>--disable-indev=<var>INDEV</var></code>.
+</p>
+<p>The option <code>-formats</code> of the ff* tools will display the list of
+supported input devices (amongst the demuxers).
+</p>
+<p>A description of the currently available input devices follows.
+</p>
+<a name="alsa-1"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-alsa-1">14.1 alsa</a></h2>
+
+<p>ALSA (Advanced Linux Sound Architecture) input device.
+</p>
+<p>To enable this input device during configuration you need libasound
+installed on your system.
+</p>
+<p>This device allows capturing from an ALSA device. The name of the
+device to capture has to be an ALSA card identifier.
+</p>
+<p>An ALSA identifier has the syntax:
+</p><table><tr><td> </td><td><pre class="example">hw:<var>CARD</var>[,<var>DEV</var>[,<var>SUBDEV</var>]]
+</pre></td></tr></table>
+
+<p>where the <var>DEV</var> and <var>SUBDEV</var> components are optional.
+</p>
+<p>The three arguments (in order: <var>CARD</var>,<var>DEV</var>,<var>SUBDEV</var>)
+specify card number or identifier, device number and subdevice number
+(-1 means any).
+</p>
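<p>The hw:<var>CARD</var>[,<var>DEV</var>[,<var>SUBDEV</var>]] syntax can be
pulled apart with ordinary shell word splitting. A sketch (the helper name
is invented; omitted components default to -1, i.e. "any"):</p>

```shell
# Split an ALSA identifier like "hw:1,0" into card/device/subdevice;
# components that are omitted default to -1, meaning "any".
parse_alsa_id() {
  body=${1#hw:}          # drop the leading "hw:"
  old_ifs=$IFS
  IFS=,
  set -- $body           # split the remainder on commas
  IFS=$old_ifs
  printf 'card=%s dev=%s subdev=%s\n' "$1" "${2:--1}" "${3:--1}"
}

parse_alsa_id hw:0       # card=0 dev=-1 subdev=-1
parse_alsa_id hw:1,0     # card=1 dev=0 subdev=-1
```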
+<p>To see the list of cards currently recognized by your system check the
+files ‘<tt>/proc/asound/cards</tt>’ and ‘<tt>/proc/asound/devices</tt>’.
+</p>
+<p>For example to capture with ‘<tt>ffmpeg</tt>’ from an ALSA device with
+card id 0, you may run the command:
+</p><table><tr><td> </td><td><pre class="example">ffmpeg -f alsa -i hw:0 alsaout.wav
+</pre></td></tr></table>
+
+<p>For more information see:
+<a href="http://www.alsa-project.org/alsa-doc/alsa-lib/pcm.html">http://www.alsa-project.org/alsa-doc/alsa-lib/pcm.html</a>
+</p>
+<a name="bktr"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-bktr">14.2 bktr</a></h2>
+
+<p>BSD video input device.
+</p>
+<a name="dv1394"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-dv1394">14.3 dv1394</a></h2>
+
+<p>Linux DV 1394 input device.
+</p>
+<a name="fbdev"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-fbdev">14.4 fbdev</a></h2>
+
+<p>Linux framebuffer input device.
+</p>
+<p>The Linux framebuffer is a graphic hardware-independent abstraction
+layer to show graphics on a computer monitor, typically on the
+console. It is accessed through a file device node, usually
+‘<tt>/dev/fb0</tt>’.
+</p>
+<p>For more detailed information read the file
+Documentation/fb/framebuffer.txt included in the Linux source tree.
+</p>
+<p>To record from the framebuffer device ‘<tt>/dev/fb0</tt>’ with
+‘<tt>ffmpeg</tt>’:
+</p><table><tr><td> </td><td><pre class="example">ffmpeg -f fbdev -r 10 -i /dev/fb0 out.avi
+</pre></td></tr></table>
+
+<p>You can take a single screenshot image with the command:
+</p><table><tr><td> </td><td><pre class="example">ffmpeg -f fbdev -vframes 1 -r 1 -i /dev/fb0 screenshot.jpeg
+</pre></td></tr></table>
+
+<p>See also <a href="http://linux-fbdev.sourceforge.net/">http://linux-fbdev.sourceforge.net/</a>, and fbset(1).
+</p>
+<a name="jack"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-jack">14.5 jack</a></h2>
+
+<p>JACK input device.
+</p>
+<p>To enable this input device during configuration you need libjack
+installed on your system.
+</p>
+<p>A JACK input device creates one or more JACK writable clients, one for
+each audio channel, with name <var>client_name</var>:input_<var>N</var>, where
+<var>client_name</var> is the name provided by the application, and <var>N</var>
+is a number which identifies the channel.
+Each writable client will send the acquired data to the FFmpeg input
+device.
+</p>
+<p>Once you have created one or more JACK readable clients, you need to
+connect them to one or more JACK writable clients.
+</p>
+<p>To connect or disconnect JACK clients you can use the
+‘<tt>jack_connect</tt>’ and ‘<tt>jack_disconnect</tt>’ programs, or do it
+through a graphical interface, for example with ‘<tt>qjackctl</tt>’.
+</p>
+<p>To list the JACK clients and their properties you can invoke the command
+‘<tt>jack_lsp</tt>’.
+</p>
+<p>The following example shows how to capture a JACK readable client
+with ‘<tt>ffmpeg</tt>’.
+</p><table><tr><td> </td><td><pre class="example"># Create a JACK writable client with name "ffmpeg".
+$ ffmpeg -f jack -i ffmpeg -y out.wav
+
+# Start the sample jack_metro readable client.
+$ jack_metro -b 120 -d 0.2 -f 4000
+
+# List the current JACK clients.
+$ jack_lsp -c
+system:capture_1
+system:capture_2
+system:playback_1
+system:playback_2
+ffmpeg:input_1
+metro:120_bpm
+
+# Connect metro to the ffmpeg writable client.
+$ jack_connect metro:120_bpm ffmpeg:input_1
+</pre></td></tr></table>
+
+<p>For more information read:
+<a href="http://jackaudio.org/">http://jackaudio.org/</a>
+</p>
+<a name="libdc1394"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-libdc1394">14.6 libdc1394</a></h2>
+
+<p>IIDC1394 input device, based on libdc1394 and libraw1394.
+</p>
+<a name="openal"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-openal">14.7 openal</a></h2>
+
+<p>The OpenAL input device provides audio capture on all systems with a
+working OpenAL 1.1 implementation.
+</p>
+<p>To enable this input device during configuration, you need OpenAL
+headers and libraries installed on your system, and need to configure
+FFmpeg with <code>--enable-openal</code>.
+</p>
+<p>OpenAL headers and libraries should be provided as part of your OpenAL
+implementation, or as an additional download (an SDK). Depending on your
+installation you may need to specify additional flags via the
+<code>--extra-cflags</code> and <code>--extra-ldflags</code> options to allow the build
+system to locate the OpenAL headers and libraries.
+</p>
+<p>An incomplete list of OpenAL implementations follows:
+</p>
+<dl compact="compact">
+<dt> <strong>Creative</strong></dt>
+<dd><p>The official Windows implementation, providing hardware acceleration
+with supported devices and software fallback.
+See <a href="http://openal.org/">http://openal.org/</a>.
+</p></dd>
+<dt> <strong>OpenAL Soft</strong></dt>
+<dd><p>Portable, open source (LGPL) software implementation. Includes
+backends for the most common sound APIs on the Windows, Linux,
+Solaris, and BSD operating systems.
+See <a href="http://kcat.strangesoft.net/openal.html">http://kcat.strangesoft.net/openal.html</a>.
+</p></dd>
+<dt> <strong>Apple</strong></dt>
+<dd><p>OpenAL is part of Core Audio, the official Mac OS X Audio interface.
+See <a href="http://developer.apple.com/technologies/mac/audio-and-video.html">http://developer.apple.com/technologies/mac/audio-and-video.html</a>
+</p></dd>
+</dl>
+
+<p>This device allows capturing from an audio input device handled
+through OpenAL.
+</p>
+<p>You need to specify the name of the device to capture in the provided
+filename. If the empty string is provided, the device will
+automatically select the default device. You can get the list of the
+supported devices by using the option <var>list_devices</var>.
+</p>
+<a name="Options-1"></a>
+<h3 class="subsection"><a href="ffmpeg.html#toc-Options-1">14.7.1 Options</a></h3>
+
+<dl compact="compact">
+<dt> ‘<samp>channels</samp>’</dt>
+<dd><p>Set the number of channels in the captured audio. Only the values
+‘<samp>1</samp>’ (monaural) and ‘<samp>2</samp>’ (stereo) are currently supported.
+Defaults to ‘<samp>2</samp>’.
+</p>
+</dd>
+<dt> ‘<samp>sample_size</samp>’</dt>
+<dd><p>Set the sample size (in bits) of the captured audio. Only the values
+‘<samp>8</samp>’ and ‘<samp>16</samp>’ are currently supported. Defaults to
+‘<samp>16</samp>’.
+</p>
+</dd>
+<dt> ‘<samp>sample_rate</samp>’</dt>
+<dd><p>Set the sample rate (in Hz) of the captured audio.
+Defaults to ‘<samp>44.1k</samp>’.
+</p>
+</dd>
+<dt> ‘<samp>list_devices</samp>’</dt>
+<dd><p>If set to ‘<samp>true</samp>’, print a list of devices and exit.
+Defaults to ‘<samp>false</samp>’.
+</p>
+</dd>
+</dl>
+
+<a name="Examples-2"></a>
+<h3 class="subsection"><a href="ffmpeg.html#toc-Examples-2">14.7.2 Examples</a></h3>
+
+<p>Print the list of OpenAL supported devices and exit:
+</p><table><tr><td> </td><td><pre class="example">$ ffmpeg -list_devices true -f openal -i dummy out.ogg
+</pre></td></tr></table>
+
+<p>Capture from the OpenAL device ‘<tt>DR-BT101 via PulseAudio</tt>’:
+</p><table><tr><td> </td><td><pre class="example">$ ffmpeg -f openal -i 'DR-BT101 via PulseAudio' out.ogg
+</pre></td></tr></table>
+
+<p>Capture from the default device (note the empty string '' as filename):
+</p><table><tr><td> </td><td><pre class="example">$ ffmpeg -f openal -i '' out.ogg
+</pre></td></tr></table>
+
+<p>Capture from two devices simultaneously, writing to two different files,
+within the same ‘<tt>ffmpeg</tt>’ command:
+</p><table><tr><td> </td><td><pre class="example">$ ffmpeg -f openal -i 'DR-BT101 via PulseAudio' out1.ogg -f openal -i 'ALSA Default' out2.ogg
+</pre></td></tr></table>
+<p>Note: not all OpenAL implementations support multiple simultaneous capture;
+try the latest OpenAL Soft if the above does not work.
+</p>
+<a name="oss"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-oss">14.8 oss</a></h2>
+
+<p>Open Sound System input device.
+</p>
+<p>The filename to provide to the input device is the device node
+representing the OSS input device, and is usually set to
+‘<tt>/dev/dsp</tt>’.
+</p>
+<p>For example to grab from ‘<tt>/dev/dsp</tt>’ using ‘<tt>ffmpeg</tt>’ use the
+command:
+</p><table><tr><td> </td><td><pre class="example">ffmpeg -f oss -i /dev/dsp /tmp/oss.wav
+</pre></td></tr></table>
+
+<p>For more information about OSS see:
+<a href="http://manuals.opensound.com/usersguide/dsp.html">http://manuals.opensound.com/usersguide/dsp.html</a>
+</p>
+<a name="sndio-1"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-sndio-1">14.9 sndio</a></h2>
+
+<p>sndio input device.
+</p>
+<p>To enable this input device during configuration you need libsndio
+installed on your system.
+</p>
+<p>The filename to provide to the input device is the device node
+representing the sndio input device, and is usually set to
+‘<tt>/dev/audio0</tt>’.
+</p>
+<p>For example to grab from ‘<tt>/dev/audio0</tt>’ using ‘<tt>ffmpeg</tt>’ use the
+command:
+</p><table><tr><td> </td><td><pre class="example">ffmpeg -f sndio -i /dev/audio0 /tmp/sndio.wav
+</pre></td></tr></table>
+
+<a name="video4linux-and-video4linux2"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-video4linux-and-video4linux2">14.10 video4linux and video4linux2</a></h2>
+
+<p>Video4Linux and Video4Linux2 input video devices.
+</p>
+<p>The name of the device to grab is a file device node. Linux
+systems tend to automatically create such nodes when the device
+(e.g. a USB webcam) is plugged into the system, and the node has a name of
+the kind ‘<tt>/dev/video<var>N</var></tt>’, where <var>N</var> is a number associated
+with the device.
+</p>
+<p>Video4Linux and Video4Linux2 devices only support a limited set of
+<var>width</var>x<var>height</var> sizes and framerates. You can check which are
+supported using, for example, the command ‘<tt>dov4l</tt>’ for Video4Linux
+devices and the command ‘<tt>v4l-info</tt>’ for Video4Linux2 devices.
+</p>
+<p>If the size for the device is set to 0x0, the input device will
+try to autodetect the size to use.
+Only for the video4linux2 device, if the frame rate is set to 0/0 the
+input device will use the frame rate value already set in the driver.
+</p>
+<p>Video4Linux support is deprecated since Linux 2.6.30, and will be
+dropped in later versions.
+</p>
+<p>Some usage examples of the video4linux devices with the ff*
+tools follow.
+</p><table><tr><td> </td><td><pre class="example"># Grab and show the input of a video4linux device, frame rate is set
+# to the default of 25/1.
+ffplay -s 320x240 -f video4linux /dev/video0
+
+# Grab and show the input of a video4linux2 device, autoadjust size.
+ffplay -f video4linux2 /dev/video0
+
+# Grab and record the input of a video4linux2 device, autoadjust size,
+# frame rate value defaults to 0/0 so it is read from the video4linux2
+# driver.
+ffmpeg -f video4linux2 -i /dev/video0 out.mpeg
+</pre></td></tr></table>
+
+<a name="vfwcap"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-vfwcap">14.11 vfwcap</a></h2>
+
+<p>VfW (Video for Windows) capture input device.
+</p>
+<p>The filename passed as input is the capture driver number, ranging from
+0 to 9. You may use "list" as filename to print a list of drivers. Any
+other filename will be interpreted as device number 0.
+</p>
+<a name="x11grab"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-x11grab">14.12 x11grab</a></h2>
+
+<p>X11 video input device.
+</p>
+<p>This device allows capturing a region of an X11 display.
+</p>
+<p>The filename passed as input has the syntax:
+</p><table><tr><td> </td><td><pre class="example">[<var>hostname</var>]:<var>display_number</var>.<var>screen_number</var>[+<var>x_offset</var>,<var>y_offset</var>]
+</pre></td></tr></table>
+
+<p><var>hostname</var>:<var>display_number</var>.<var>screen_number</var> specifies the
+X11 display name of the screen to grab from. <var>hostname</var> can be
+omitted, and defaults to "localhost". The environment variable
+<code>DISPLAY</code> contains the default display name.
+</p>
+<p><var>x_offset</var> and <var>y_offset</var> specify the offsets of the grabbed
+area with respect to the top-left border of the X11 screen. They
+default to 0.
+</p>
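<p>Assembling the grab string from its parts is straightforward. A small
shell sketch (the helper name is invented; an empty hostname means the
local display, and zero offsets are simply omitted):</p>

```shell
# Build an x11grab input string of the form
# [hostname]:display_number.screen_number[+x_offset,y_offset].
x11grab_input() {
  host=$1 display=$2 screen=$3 x=${4:-0} y=${5:-0}
  if [ "$x" -eq 0 ] && [ "$y" -eq 0 ]; then
    printf '%s:%s.%s\n' "$host" "$display" "$screen"
  else
    printf '%s:%s.%s+%s,%s\n' "$host" "$display" "$screen" "$x" "$y"
  fi
}

x11grab_input "" 0 0         # :0.0
x11grab_input "" 0 0 10 20   # :0.0+10,20
```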
+<p>Check the X11 documentation (e.g. man X) for more detailed information.
+</p>
+<p>Use the ‘<tt>xdpyinfo</tt>’ program for getting basic information about the
+properties of your X11 display (e.g. grep for "name" or "dimensions").
+</p>
+<p>For example to grab from ‘<tt>:0.0</tt>’ using ‘<tt>ffmpeg</tt>’:
+</p><table><tr><td> </td><td><pre class="example">ffmpeg -f x11grab -r 25 -s cif -i :0.0 out.mpg
+
+# Grab at position 10,20.
+ffmpeg -f x11grab -r 25 -s cif -i :0.0+10,20 out.mpg
+</pre></td></tr></table>
+
+<a name="Output-Devices"></a>
+<h1 class="chapter"><a href="ffmpeg.html#toc-Output-Devices">15. Output Devices</a></h1>
+
+<p>Output devices are configured elements in FFmpeg which allow writing
+multimedia data to an output device attached to your system.
+</p>
+<p>When you configure your FFmpeg build, all the supported output devices
+are enabled by default. You can list all available output devices using the
+configure option <code>--list-outdevs</code>.
+</p>
+<p>You can disable all the output devices using the configure option
+<code>--disable-outdevs</code>, and selectively enable an output device using the
+option <code>--enable-outdev=<var>OUTDEV</var></code>, or you can disable a particular
+output device using the option <code>--disable-outdev=<var>OUTDEV</var></code>.
+</p>
+<p>The option <code>-formats</code> of the ff* tools will display the list of
+enabled output devices (amongst the muxers).
+</p>
+<p>A description of the currently available output devices follows.
+</p>
+<a name="alsa"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-alsa">15.1 alsa</a></h2>
+
+<p>ALSA (Advanced Linux Sound Architecture) output device.
+</p>
+<a name="oss-1"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-oss-1">15.2 oss</a></h2>
+
+<p>OSS (Open Sound System) output device.
+</p>
+<a name="sdl"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-sdl">15.3 sdl</a></h2>
+
+<p>SDL (Simple Directmedia Layer) output device.
+</p>
+<p>This output device allows showing a video stream in an SDL
+window. Only one SDL window is allowed per application, so you can
+have only one instance of this output device in an application.
+</p>
+<p>To enable this output device you need libsdl installed on your system
+when configuring your build.
+</p>
+<p>For more information about SDL, check:
+<a href="http://www.libsdl.org/">http://www.libsdl.org/</a>
+</p>
+<a name="Options"></a>
+<h3 class="subsection"><a href="ffmpeg.html#toc-Options">15.3.1 Options</a></h3>
+
+<dl compact="compact">
+<dt> ‘<samp>window_title</samp>’</dt>
+<dd><p>Set the SDL window title. If not specified, it defaults to the
+filename specified for the output device.
+</p>
+</dd>
+<dt> ‘<samp>icon_title</samp>’</dt>
+<dd><p>Set the name of the iconified SDL window. If not specified, it is set
+to the same value as <var>window_title</var>.
+</p>
+</dd>
+<dt> ‘<samp>window_size</samp>’</dt>
+<dd><p>Set the SDL window size. It can be a string of the form
+<var>width</var>x<var>height</var> or a video size abbreviation.
+If not specified, it defaults to the size of the input video.
+</p></dd>
+</dl>
+
+<a name="Examples-1"></a>
+<h3 class="subsection"><a href="ffmpeg.html#toc-Examples-1">15.3.2 Examples</a></h3>
+
+<p>The following command shows the ‘<tt>ffmpeg</tt>’ output in an
+SDL window, forcing its size to the qcif format:
+</p><table><tr><td> </td><td><pre class="example">ffmpeg -i INPUT -vcodec rawvideo -pix_fmt yuv420p -window_size qcif -f sdl "SDL output"
+</pre></td></tr></table>
+
+<a name="sndio"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-sndio">15.4 sndio</a></h2>
+
+<p>sndio audio output device.
+</p>
+<a name="Protocols"></a>
+<h1 class="chapter"><a href="ffmpeg.html#toc-Protocols">16. Protocols</a></h1>
+
+<p>Protocols are configured elements in FFmpeg which allow access to
+resources which require the use of a particular protocol.
+</p>
+<p>When you configure your FFmpeg build, all the supported protocols are
+enabled by default. You can list all available ones using the
+configure option "--list-protocols".
+</p>
+<p>You can disable all the protocols using the configure option
+"--disable-protocols", and selectively enable a protocol using the
+option "--enable-protocol=<var>PROTOCOL</var>", or you can disable a
+particular protocol using the option
+"--disable-protocol=<var>PROTOCOL</var>".
+</p>
+<p>The option "-protocols" of the ff* tools will display the list of
+supported protocols.
+</p>
+<p>A description of the currently available protocols follows.
+</p>
+<a name="applehttp-1"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-applehttp-1">16.1 applehttp</a></h2>
+
+<p>Read Apple HTTP Live Streaming compliant segmented streams as
+a uniform one. The M3U8 playlists describing the segments can be
+remote HTTP resources or local files, accessed using the standard
+file protocol.
+HTTP is used by default; a specific protocol can be declared by specifying
+"+<var>proto</var>" after the applehttp URI scheme name, where <var>proto</var>
+is either "file" or "http".
+</p>
+<table><tr><td> </td><td><pre class="example">applehttp://host/path/to/remote/resource.m3u8
+applehttp+http://host/path/to/remote/resource.m3u8
+applehttp+file://path/to/local/resource.m3u8
+</pre></td></tr></table>
+
+<a name="concat"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-concat">16.2 concat</a></h2>
+
+<p>Physical concatenation protocol.
+</p>
+<p>Allow reading and seeking through many resources in sequence as if
+they were a single resource.
+</p>
+<p>A URL accepted by this protocol has the syntax:
+</p><table><tr><td> </td><td><pre class="example">concat:<var>URL1</var>|<var>URL2</var>|...|<var>URLN</var>
+</pre></td></tr></table>
+
+<p>where <var>URL1</var>, <var>URL2</var>, ..., <var>URLN</var> are the URLs of the
+resources to be concatenated, each one possibly specifying a distinct
+protocol.
+</p>
+<p>For example to read a sequence of files ‘<tt>split1.mpeg</tt>’,
+‘<tt>split2.mpeg</tt>’, ‘<tt>split3.mpeg</tt>’ with ‘<tt>ffplay</tt>’ use the
+command:
+</p><table><tr><td> </td><td><pre class="example">ffplay concat:split1.mpeg\|split2.mpeg\|split3.mpeg
+</pre></td></tr></table>
+
+<p>Note that you may need to escape the character "|" which is special for
+many shells.
+</p>
+<a name="file"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-file">16.3 file</a></h2>
+
+<p>File access protocol.
+</p>
+<p>Allow reading from or writing to a file.
+</p>
+<p>For example to read from a file ‘<tt>input.mpeg</tt>’ with ‘<tt>ffmpeg</tt>’
+use the command:
+</p><table><tr><td> </td><td><pre class="example">ffmpeg -i file:input.mpeg output.mpeg
+</pre></td></tr></table>
+
+<p>The ff* tools default to the file protocol; that is, a resource
+specified with the name "FILE.mpeg" is interpreted as the URL
+"file:FILE.mpeg".
+</p>
+<a name="gopher"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-gopher">16.4 gopher</a></h2>
+
+<p>Gopher protocol.
+</p>
+<a name="http"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-http">16.5 http</a></h2>
+
+<p>HTTP (Hyper Text Transfer Protocol).
+</p>
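+<p>For example, to read a remote resource over HTTP with ‘<tt>ffmpeg</tt>’
+(the server and path names here are illustrative):
+</p><table><tr><td>&nbsp;</td><td><pre class="example">ffmpeg -i http://server/path/resource.mpeg output.mpeg
+</pre></td></tr></table>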
+<a name="mmst"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-mmst">16.6 mmst</a></h2>
+
+<p>MMS (Microsoft Media Server) protocol over TCP.
+</p>
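+<p>For example, to play an MMS stream over TCP with ‘<tt>ffplay</tt>’
+(the server and path names here are illustrative):
+</p><table><tr><td>&nbsp;</td><td><pre class="example">ffplay mmst://server/path/to/stream
+</pre></td></tr></table>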
+<a name="mmsh"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-mmsh">16.7 mmsh</a></h2>
+
+<p>MMS (Microsoft Media Server) protocol over HTTP.
+</p>
+<p>The required syntax is:
+</p><table><tr><td> </td><td><pre class="example">mmsh://<var>server</var>[:<var>port</var>][/<var>app</var>][/<var>playpath</var>]
+</pre></td></tr></table>
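+
+<p>For example, to play an MMS stream over HTTP with ‘<tt>ffplay</tt>’
+(a minimal sketch following the syntax above):
+</p><table><tr><td>&nbsp;</td><td><pre class="example">ffplay mmsh://<var>server</var>/<var>path</var>
+</pre></td></tr></table>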
+
+<a name="md5"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-md5">16.8 md5</a></h2>
+
+<p>MD5 output protocol.
+</p>
+<p>Computes the MD5 hash of the data to be written, and on close writes
+this to the designated output or stdout if none is specified. It can
+be used to test muxers without writing an actual file.
+</p>
+<p>Some examples follow.
+</p><table><tr><td> </td><td><pre class="example"># Write the MD5 hash of the encoded AVI file to the file output.avi.md5.
+ffmpeg -i input.flv -f avi -y md5:output.avi.md5
+
+# Write the MD5 hash of the encoded AVI file to stdout.
+ffmpeg -i input.flv -f avi -y md5:
+</pre></td></tr></table>
+
+<p>Note that some formats (typically MOV) require the output protocol to
+be seekable, so they will fail with the MD5 output protocol.
+</p>
+<a name="pipe"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-pipe">16.9 pipe</a></h2>
+
+<p>UNIX pipe access protocol.
+</p>
+<p>Allow reading from and writing to UNIX pipes.
+</p>
+<p>The accepted syntax is:
+</p><table><tr><td> </td><td><pre class="example">pipe:[<var>number</var>]
+</pre></td></tr></table>
+
+<p><var>number</var> is the number corresponding to the file descriptor of the
+pipe (e.g. 0 for stdin, 1 for stdout, 2 for stderr). If <var>number</var>
+is not specified, by default the stdout file descriptor will be used
+for writing, stdin for reading.
+</p>
+<p>For example to read from stdin with ‘<tt>ffmpeg</tt>’:
+</p><table><tr><td> </td><td><pre class="example">cat test.wav | ffmpeg -i pipe:0
+# ...this is the same as...
+cat test.wav | ffmpeg -i pipe:
+</pre></td></tr></table>
+
+<p>For writing to stdout with ‘<tt>ffmpeg</tt>’:
+</p><table><tr><td> </td><td><pre class="example">ffmpeg -i test.wav -f avi pipe:1 | cat > test.avi
+# ...this is the same as...
+ffmpeg -i test.wav -f avi pipe: | cat > test.avi
+</pre></td></tr></table>
+
+<p>Note that some formats (typically MOV) require the output protocol to
+be seekable, so they will fail with the pipe output protocol.
+</p>
+<a name="rtmp"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-rtmp">16.10 rtmp</a></h2>
+
+<p>Real-Time Messaging Protocol.
+</p>
+<p>The Real-Time Messaging Protocol (RTMP) is used for streaming
+multimedia content across a TCP/IP network.
+</p>
+<p>The required syntax is:
+</p><table><tr><td> </td><td><pre class="example">rtmp://<var>server</var>[:<var>port</var>][/<var>app</var>][/<var>playpath</var>]
+</pre></td></tr></table>
+
+<p>The accepted parameters are:
+</p><dl compact="compact">
+<dt> ‘<samp>server</samp>’</dt>
+<dd><p>The address of the RTMP server.
+</p>
+</dd>
+<dt> ‘<samp>port</samp>’</dt>
+<dd><p>The number of the TCP port to use (by default 1935).
+</p>
+</dd>
+<dt> ‘<samp>app</samp>’</dt>
+<dd><p>The name of the application to access. It usually corresponds to
+the path where the application is installed on the RTMP server
+(e.g. ‘<tt>/ondemand/</tt>’, ‘<tt>/flash/live/</tt>’, etc.).
+</p>
+</dd>
+<dt> ‘<samp>playpath</samp>’</dt>
+<dd><p>The path or name of the resource to play with reference to the
+application specified in <var>app</var>; it may be prefixed by "mp4:".
+</p>
+</dd>
+</dl>
+
+<p>For example to read with ‘<tt>ffplay</tt>’ a multimedia resource named
+"sample" from the application "vod" from an RTMP server "myserver":
+</p><table><tr><td> </td><td><pre class="example">ffplay rtmp://myserver/vod/sample
+</pre></td></tr></table>
+
+<a name="rtmp_002c-rtmpe_002c-rtmps_002c-rtmpt_002c-rtmpte"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-rtmp_002c-rtmpe_002c-rtmps_002c-rtmpt_002c-rtmpte">16.11 rtmp, rtmpe, rtmps, rtmpt, rtmpte</a></h2>
+
+<p>Real-Time Messaging Protocol and its variants supported through
+librtmp.
+</p>
+<p>Requires the presence of the librtmp headers and library during
+configuration. You need to explicitly configure the build with
+"--enable-librtmp". If enabled this will replace the native RTMP
+protocol.
+</p>
+<p>This protocol provides most client functions and a few server
+functions needed to support RTMP, RTMP tunneled in HTTP (RTMPT),
+encrypted RTMP (RTMPE), RTMP over SSL/TLS (RTMPS) and tunneled
+variants of these encrypted types (RTMPTE, RTMPTS).
+</p>
+<p>The required syntax is:
+</p><table><tr><td> </td><td><pre class="example"><var>rtmp_proto</var>://<var>server</var>[:<var>port</var>][/<var>app</var>][/<var>playpath</var>] <var>options</var>
+</pre></td></tr></table>
+
+<p>where <var>rtmp_proto</var> is one of the strings "rtmp", "rtmpt", "rtmpe",
+"rtmps", "rtmpte", "rtmpts" corresponding to each RTMP variant, and
+<var>server</var>, <var>port</var>, <var>app</var> and <var>playpath</var> have the same
+meaning as specified for the RTMP native protocol.
+<var>options</var> contains a list of space-separated options of the form
+<var>key</var>=<var>val</var>.
+</p>
+<p>See the librtmp manual page (man 3 librtmp) for more information.
+</p>
+<p>For example, to stream a file in real-time to an RTMP server using
+‘<tt>ffmpeg</tt>’:
+</p><table><tr><td> </td><td><pre class="example">ffmpeg -re -i myfile -f flv rtmp://myserver/live/mystream
+</pre></td></tr></table>
+
+<p>To play the same stream using ‘<tt>ffplay</tt>’:
+</p><table><tr><td> </td><td><pre class="example">ffplay "rtmp://myserver/live/mystream live=1"
+</pre></td></tr></table>
+
+<a name="rtp"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-rtp">16.12 rtp</a></h2>
+
+<p>RTP (Real-time Transport Protocol).
+</p>
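+<p>For example, to send a stream over RTP to a remote endpoint with
+‘<tt>ffmpeg</tt>’ (a minimal sketch; the hostname and port are placeholders):
+</p><table><tr><td>&nbsp;</td><td><pre class="example">ffmpeg -re -i <var>input</var> -f rtp rtp://<var>hostname</var>:<var>port</var>
+</pre></td></tr></table>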
+<a name="rtsp"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-rtsp">16.13 rtsp</a></h2>
+
+<p>RTSP is not technically a protocol handler in libavformat, it is a demuxer
+and muxer. The demuxer supports both normal RTSP (with data transferred
+over RTP; this is used by e.g. Apple and Microsoft) and Real-RTSP (with
+data transferred over RDT).
+</p>
+<p>The muxer can be used to send a stream using RTSP ANNOUNCE to a server
+supporting it (currently Darwin Streaming Server and Mischa Spiegelmock’s
+<a href="http://github.com/revmischa/rtsp-server">RTSP server</a>).
+</p>
+<p>The required syntax for an RTSP url is:
+</p><table><tr><td> </td><td><pre class="example">rtsp://<var>hostname</var>[:<var>port</var>]/<var>path</var>[?<var>options</var>]
+</pre></td></tr></table>
+
+<p><var>options</var> is a <code>&</code>-separated list. The following options
+are supported:
+</p>
+<dl compact="compact">
+<dt> ‘<samp>udp</samp>’</dt>
+<dd><p>Use UDP as lower transport protocol.
+</p>
+</dd>
+<dt> ‘<samp>tcp</samp>’</dt>
+<dd><p>Use TCP (interleaving within the RTSP control channel) as lower
+transport protocol.
+</p>
+</dd>
+<dt> ‘<samp>multicast</samp>’</dt>
+<dd><p>Use UDP multicast as lower transport protocol.
+</p>
+</dd>
+<dt> ‘<samp>http</samp>’</dt>
+<dd><p>Use HTTP tunneling as lower transport protocol, which is useful for
+passing through proxies.
+</p>
+</dd>
+<dt> ‘<samp>filter_src</samp>’</dt>
+<dd><p>Accept packets only from negotiated peer address and port.
+</p></dd>
+</dl>
+
+<p>Multiple lower transport protocols may be specified, in that case they are
+tried one at a time (if the setup of one fails, the next one is tried).
+For the muxer, only the <code>tcp</code> and <code>udp</code> options are supported.
+</p>
+<p>When receiving data over UDP, the demuxer tries to reorder received packets
+(since they may arrive out of order, or packets may get lost totally). In
+order for this to be enabled, a maximum delay must be specified in the
+<code>max_delay</code> field of AVFormatContext.
+</p>
+<p>When watching multi-bitrate Real-RTSP streams with ‘<tt>ffplay</tt>’, the
+streams to display can be chosen with <code>-vst</code> <var>n</var> and
+<code>-ast</code> <var>n</var> for video and audio respectively, and can be switched
+on the fly by pressing <code>v</code> and <code>a</code>.
+</p>
+<p>Example command lines:
+</p>
+<p>To watch a stream over UDP, with a max reordering delay of 0.5 seconds:
+</p>
+<table><tr><td> </td><td><pre class="example">ffplay -max_delay 500000 rtsp://server/video.mp4?udp
+</pre></td></tr></table>
+
+<p>To watch a stream tunneled over HTTP:
+</p>
+<table><tr><td> </td><td><pre class="example">ffplay rtsp://server/video.mp4?http
+</pre></td></tr></table>
+
+<p>To send a stream in realtime to an RTSP server, for others to watch:
+</p>
+<table><tr><td> </td><td><pre class="example">ffmpeg -re -i <var>input</var> -f rtsp -muxdelay 0.1 rtsp://server/live.sdp
+</pre></td></tr></table>
+
+<a name="sap"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-sap">16.14 sap</a></h2>
+
+<p>Session Announcement Protocol (RFC 2974). This is not technically a
+protocol handler in libavformat, it is a muxer and demuxer.
+It is used for signalling of RTP streams, by announcing the SDP for the
+streams regularly on a separate port.
+</p>
+<a name="Muxer"></a>
+<h3 class="subsection"><a href="ffmpeg.html#toc-Muxer">16.14.1 Muxer</a></h3>
+
+<p>The syntax for a SAP url given to the muxer is:
+</p><table><tr><td> </td><td><pre class="example">sap://<var>destination</var>[:<var>port</var>][?<var>options</var>]
+</pre></td></tr></table>
+
+<p>The RTP packets are sent to <var>destination</var> on port <var>port</var>,
+or to port 5004 if no port is specified.
+<var>options</var> is a <code>&</code>-separated list. The following options
+are supported:
+</p>
+<dl compact="compact">
+<dt> ‘<samp>announce_addr=<var>address</var></samp>’</dt>
+<dd><p>Specify the destination IP address for sending the announcements to.
+If omitted, the announcements are sent to the commonly used SAP
+announcement multicast address 224.2.127.254 (sap.mcast.net), or
+ff0e::2:7ffe if <var>destination</var> is an IPv6 address.
+</p>
+</dd>
+<dt> ‘<samp>announce_port=<var>port</var></samp>’</dt>
+<dd><p>Specify the port to send the announcements on, defaults to
+9875 if not specified.
+</p>
+</dd>
+<dt> ‘<samp>ttl=<var>ttl</var></samp>’</dt>
+<dd><p>Specify the time to live value for the announcements and RTP packets,
+defaults to 255.
+</p>
+</dd>
+<dt> ‘<samp>same_port=<var>0|1</var></samp>’</dt>
+<dd><p>If set to 1, send all RTP streams on the same port pair. If zero (the
+default), all streams are sent on unique ports, with each stream on a
+port 2 numbers higher than the previous.
+VLC/Live555 requires this to be set to 1, to be able to receive the stream.
+The RTP stack in libavformat for receiving requires all streams to be sent
+on unique ports.
+</p></dd>
+</dl>
+
+<p>Example command lines follow.
+</p>
+<p>To broadcast a stream on the local subnet, for watching in VLC:
+</p>
+<table><tr><td> </td><td><pre class="example">ffmpeg -re -i <var>input</var> -f sap sap://224.0.0.255?same_port=1
+</pre></td></tr></table>
+
+<p>Similarly, for watching in ffplay:
+</p>
+<table><tr><td> </td><td><pre class="example">ffmpeg -re -i <var>input</var> -f sap sap://224.0.0.255
+</pre></td></tr></table>
+
+<p>And for watching in ffplay, over IPv6:
+</p>
+<table><tr><td> </td><td><pre class="example">ffmpeg -re -i <var>input</var> -f sap sap://[ff0e::1:2:3:4]
+</pre></td></tr></table>
+
+<a name="Demuxer"></a>
+<h3 class="subsection"><a href="ffmpeg.html#toc-Demuxer">16.14.2 Demuxer</a></h3>
+
+<p>The syntax for a SAP url given to the demuxer is:
+</p><table><tr><td> </td><td><pre class="example">sap://[<var>address</var>][:<var>port</var>]
+</pre></td></tr></table>
+
+<p><var>address</var> is the multicast address to listen for announcements on,
+if omitted, the default 224.2.127.254 (sap.mcast.net) is used. <var>port</var>
+is the port that is listened on, 9875 if omitted.
+</p>
+<p>The demuxer listens for announcements on the given address and port.
+Once an announcement is received, it tries to receive that particular stream.
+</p>
+<p>Example command lines follow.
+</p>
+<p>To play back the first stream announced on the normal SAP multicast address:
+</p>
+<table><tr><td> </td><td><pre class="example">ffplay sap://
+</pre></td></tr></table>
+
+<p>To play back the first stream announced on the default IPv6 SAP multicast address:
+</p>
+<table><tr><td> </td><td><pre class="example">ffplay sap://[ff0e::2:7ffe]
+</pre></td></tr></table>
+
+<a name="tcp"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-tcp">16.15 tcp</a></h2>
+
+<p>Transmission Control Protocol.
+</p>
+<p>The required syntax for a TCP url is:
+</p><table><tr><td> </td><td><pre class="example">tcp://<var>hostname</var>:<var>port</var>[?<var>options</var>]
+</pre></td></tr></table>
+
+<dl compact="compact">
+<dt> ‘<samp>listen</samp>’</dt>
+<dd><p>Listen for an incoming connection
+</p>
+<table><tr><td> </td><td><pre class="example">ffmpeg -i <var>input</var> -f <var>format</var> tcp://<var>hostname</var>:<var>port</var>?listen
+ffplay tcp://<var>hostname</var>:<var>port</var>
+</pre></td></tr></table>
+
+</dd>
+</dl>
+
+<a name="udp"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-udp">16.16 udp</a></h2>
+
+<p>User Datagram Protocol.
+</p>
+<p>The required syntax for a UDP url is:
+</p><table><tr><td> </td><td><pre class="example">udp://<var>hostname</var>:<var>port</var>[?<var>options</var>]
+</pre></td></tr></table>
+
+<p><var>options</var> contains a list of &-separated options of the form <var>key</var>=<var>val</var>.
+The list of supported options follows.
+</p>
+<dl compact="compact">
+<dt> ‘<samp>buffer_size=<var>size</var></samp>’</dt>
+<dd><p>set the UDP buffer size in bytes
+</p>
+</dd>
+<dt> ‘<samp>localport=<var>port</var></samp>’</dt>
+<dd><p>override the local UDP port to bind with
+</p>
+</dd>
+<dt> ‘<samp>pkt_size=<var>size</var></samp>’</dt>
+<dd><p>set the size in bytes of UDP packets
+</p>
+</dd>
+<dt> ‘<samp>reuse=<var>1|0</var></samp>’</dt>
+<dd><p>explicitly allow or disallow reusing UDP sockets
+</p>
+</dd>
+<dt> ‘<samp>ttl=<var>ttl</var></samp>’</dt>
+<dd><p>set the time to live value (for multicast only)
+</p>
+</dd>
+<dt> ‘<samp>connect=<var>1|0</var></samp>’</dt>
+<dd><p>Initialize the UDP socket with <code>connect()</code>. In this case, the
+destination address can’t be changed with ff_udp_set_remote_url later.
+If the destination address isn’t known at the start, this option can
+be specified in ff_udp_set_remote_url, too.
+This allows finding out the source address for the packets with getsockname,
+and makes writes return with AVERROR(ECONNREFUSED) if "destination
+unreachable" is received.
+For receiving, this gives the benefit of only receiving packets from
+the specified peer address/port.
+</p></dd>
+</dl>
+
+<p>Some usage examples of the udp protocol with ‘<tt>ffmpeg</tt>’ follow.
+</p>
+<p>To stream over UDP to a remote endpoint:
+</p><table><tr><td> </td><td><pre class="example">ffmpeg -i <var>input</var> -f <var>format</var> udp://<var>hostname</var>:<var>port</var>
+</pre></td></tr></table>
+
+<p>To stream in mpegts format over UDP using 188-byte UDP packets and a large input
+buffer (note the quotes, which prevent the shell from interpreting "&"):
+</p><table><tr><td>&nbsp;</td><td><pre class="example">ffmpeg -i <var>input</var> -f mpegts "udp://<var>hostname</var>:<var>port</var>?pkt_size=188&buffer_size=65535"
+</pre></td></tr></table>
+
+<p>To receive over UDP from a remote endpoint:
+</p><table><tr><td> </td><td><pre class="example">ffmpeg -i udp://[<var>multicast-address</var>]:<var>port</var>
+</pre></td></tr></table>
+
+<a name="Bitstream-Filters"></a>
+<h1 class="chapter"><a href="ffmpeg.html#toc-Bitstream-Filters">17. Bitstream Filters</a></h1>
+
+<p>When you configure your FFmpeg build, all the supported bitstream
+filters are enabled by default. You can list all available ones using
+the configure option <code>--list-bsfs</code>.
+</p>
+<p>You can disable all the bitstream filters using the configure option
+<code>--disable-bsfs</code>, and selectively enable any bitstream filter using
+the option <code>--enable-bsf=BSF</code>, or you can disable a particular
+bitstream filter using the option <code>--disable-bsf=BSF</code>.
+</p>
+<p>The option <code>-bsfs</code> of the ff* tools will display the list of
+all the supported bitstream filters included in your build.
+</p>
+<p>Below is a description of the currently available bitstream filters.
+</p>
+<a name="aac_005fadtstoasc"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-aac_005fadtstoasc">17.1 aac_adtstoasc</a></h2>
+
+<a name="chomp"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-chomp">17.2 chomp</a></h2>
+
+<a name="dump_005fextradata"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-dump_005fextradata">17.3 dump_extradata</a></h2>
+
+<a name="h264_005fmp4toannexb"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-h264_005fmp4toannexb">17.4 h264_mp4toannexb</a></h2>
+
+<a name="imx_005fdump_005fheader"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-imx_005fdump_005fheader">17.5 imx_dump_header</a></h2>
+
+<a name="mjpeg2jpeg"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-mjpeg2jpeg">17.6 mjpeg2jpeg</a></h2>
+
+<p>Convert MJPEG/AVI1 packets to full JPEG/JFIF packets.
+</p>
+<p>MJPEG is a video codec wherein each video frame is essentially a
+JPEG image. The individual frames can be extracted without loss,
+e.g. by
+</p>
+<table><tr><td> </td><td><pre class="example">ffmpeg -i ../some_mjpeg.avi -vcodec copy frames_%d.jpg
+</pre></td></tr></table>
+
+<p>Unfortunately, these chunks are incomplete JPEG images, because
+they lack the DHT segment required for decoding. Quoting from
+<a href="http://www.digitalpreservation.gov/formats/fdd/fdd000063.shtml">http://www.digitalpreservation.gov/formats/fdd/fdd000063.shtml</a>:
+</p>
+<p>Avery Lee, writing in the rec.video.desktop newsgroup in 2001,
+commented that "MJPEG, or at least the MJPEG in AVIs having the
+MJPG fourcc, is restricted JPEG with a fixed – and *omitted* –
+Huffman table. The JPEG must be YCbCr colorspace, it must be 4:2:2,
+and it must use basic Huffman encoding, not arithmetic or
+progressive. . . . You can indeed extract the MJPEG frames and
+decode them with a regular JPEG decoder, but you have to prepend
+the DHT segment to them, or else the decoder won’t have any idea
+how to decompress the data. The exact table necessary is given in
+the OpenDML spec."
+</p>
+<p>This bitstream filter patches the header of frames extracted from an MJPEG
+stream (carrying the AVI1 header ID and lacking a DHT segment) to
+produce fully qualified JPEG images.
+</p>
+<table><tr><td> </td><td><pre class="example">ffmpeg -i mjpeg-movie.avi -vcodec copy -vbsf mjpeg2jpeg frame_%d.jpg
+exiftran -i -9 frame*.jpg
+ffmpeg -i frame_%d.jpg -vcodec copy rotated.avi
+</pre></td></tr></table>
+
+<a name="mjpega_005fdump_005fheader"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-mjpega_005fdump_005fheader">17.7 mjpega_dump_header</a></h2>
+
+<a name="movsub"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-movsub">17.8 movsub</a></h2>
+
+<a name="mp3_005fheader_005fcompress"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-mp3_005fheader_005fcompress">17.9 mp3_header_compress</a></h2>
+
+<a name="mp3_005fheader_005fdecompress"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-mp3_005fheader_005fdecompress">17.10 mp3_header_decompress</a></h2>
+
+<a name="noise"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-noise">17.11 noise</a></h2>
+
+<a name="remove_005fextradata"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-remove_005fextradata">17.12 remove_extradata</a></h2>
+
+<a name="Filtergraph-description"></a>
+<h1 class="chapter"><a href="ffmpeg.html#toc-Filtergraph-description">18. Filtergraph description</a></h1>
+
+<p>A filtergraph is a directed graph of connected filters. It can contain
+cycles, and there can be multiple links between a pair of
+filters. Each link has one input pad on one side connecting it to one
+filter from which it takes its input, and one output pad on the other
+side connecting it to the one filter accepting its output.
+</p>
+<p>Each filter in a filtergraph is an instance of a filter class
+registered in the application, which defines the features and the
+number of input and output pads of the filter.
+</p>
+<p>A filter with no input pads is called a "source", a filter with no
+output pads is called a "sink".
+</p>
+<a name="Filtergraph-syntax"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-Filtergraph-syntax">18.1 Filtergraph syntax</a></h2>
+
+<p>A filtergraph can be represented using a textual representation, which
+is recognized by the <code>-vf</code> and <code>-af</code> options of the ff*
+tools, and by the <code>av_parse_graph()</code> function defined in
+‘<tt>libavfilter/avfiltergraph</tt>’.
+</p>
+<p>A filterchain consists of a sequence of connected filters, each one
+connected to the previous one in the sequence. A filterchain is
+represented by a list of ","-separated filter descriptions.
+</p>
+<p>A filtergraph consists of a sequence of filterchains. A sequence of
+filterchains is represented by a list of ";"-separated filterchain
+descriptions.
+</p>
+<p>A filter is represented by a string of the form:
+[<var>in_link_1</var>]...[<var>in_link_N</var>]<var>filter_name</var>=<var>arguments</var>[<var>out_link_1</var>]...[<var>out_link_M</var>]
+</p>
+<p><var>filter_name</var> is the name of the filter class of which the
+described filter is an instance, and has to be the name of one of
+the filter classes registered in the program.
+The name of the filter class is optionally followed by a string
+"=<var>arguments</var>".
+</p>
+<p><var>arguments</var> is a string which contains the parameters used to
+initialize the filter instance, and are described in the filter
+descriptions below.
+</p>
+<p>The list of arguments can be quoted using the character "'" as initial
+and ending mark, and the character '\' for escaping the characters
+within the quoted text; otherwise the argument string is considered
+terminated when the next special character (belonging to the set
+"[]=;,") is encountered.
+</p>
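+<p>For example, an argument string containing those special characters can
+be quoted as follows (the filter name "somefilter" is purely illustrative):
+</p><table><tr><td>&nbsp;</td><td><pre class="example">somefilter='an argument with [, ] and ; inside'
+</pre></td></tr></table>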
+<p>The name and arguments of the filter are optionally preceded and
+followed by a list of link labels.
+A link label allows one to name a link and associate it with a filter
+output or input pad. The preceding labels <var>in_link_1</var>
+... <var>in_link_N</var> are associated with the filter input pads,
+the following labels <var>out_link_1</var> ... <var>out_link_M</var> are
+associated with the output pads.
+</p>
+<p>When two link labels with the same name are found in the
+filtergraph, a link between the corresponding input and output pad is
+created.
+</p>
+<p>If an output pad is not labelled, it is linked by default to the first
+unlabelled input pad of the next filter in the filterchain.
+For example in the filterchain:
+</p><table><tr><td> </td><td><pre class="example">nullsrc, split[L1], [L2]overlay, nullsink
+</pre></td></tr></table>
+<p>the split filter instance has two output pads, and the overlay filter
+instance two input pads. The first output pad of split is labelled
+"L1", the first input pad of overlay is labelled "L2", and the second
+output pad of split is linked to the second input pad of overlay,
+which are both unlabelled.
+</p>
+<p>In a complete filterchain all the unlabelled filter input and output
+pads must be connected. A filtergraph is considered valid if all the
+filter input and output pads of all the filterchains are connected.
+</p>
+<p>A BNF description of the filtergraph syntax follows:
+</p><table><tr><td> </td><td><pre class="example"><var>NAME</var> ::= sequence of alphanumeric characters and '_'
+<var>LINKLABEL</var> ::= "[" <var>NAME</var> "]"
+<var>LINKLABELS</var> ::= <var>LINKLABEL</var> [<var>LINKLABELS</var>]
+<var>FILTER_ARGUMENTS</var> ::= sequence of chars (eventually quoted)
+<var>FILTER</var> ::= [<var>LINKLABELS</var>] <var>NAME</var> ["=" <var>FILTER_ARGUMENTS</var>] [<var>LINKLABELS</var>]
+<var>FILTERCHAIN</var> ::= <var>FILTER</var> [,<var>FILTERCHAIN</var>]
+<var>FILTERGRAPH</var> ::= <var>FILTERCHAIN</var> [;<var>FILTERGRAPH</var>]
+</pre></td></tr></table>
+
+
+<a name="Audio-Filters"></a>
+<h1 class="chapter"><a href="ffmpeg.html#toc-Audio-Filters">19. Audio Filters</a></h1>
+
+<p>When you configure your FFmpeg build, you can disable any of the
+existing filters using --disable-filters.
+The configure output will show the audio filters included in your
+build.
+</p>
+<p>Below is a description of the currently available audio filters.
+</p>
+<a name="anull"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-anull">19.1 anull</a></h2>
+
+<p>Pass the audio source unchanged to the output.
+</p>
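+<p>For example, a trivial filterchain that passes the audio through
+unchanged (a minimal sketch; the file names are illustrative):
+</p><table><tr><td>&nbsp;</td><td><pre class="example">ffmpeg -i input.wav -af anull output.wav
+</pre></td></tr></table>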
+
+<a name="Audio-Sources"></a>
+<h1 class="chapter"><a href="ffmpeg.html#toc-Audio-Sources">20. Audio Sources</a></h1>
+
+<p>Below is a description of the currently available audio sources.
+</p>
+<a name="anullsrc"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-anullsrc">20.1 anullsrc</a></h2>
+
+<p>Null audio source; it never returns audio frames. It is mainly useful as a
+template and to be employed in analysis / debugging tools.
+</p>
+<p>It accepts as optional parameter a string of the form
+<var>sample_rate</var>:<var>channel_layout</var>.
+</p>
+<p><var>sample_rate</var> specifies the sample rate, and defaults to 44100.
+</p>
+<p><var>channel_layout</var> specifies the channel layout, and can be either an
+integer or a string representing a channel layout. The default value
+of <var>channel_layout</var> is 3, which corresponds to CH_LAYOUT_STEREO.
+</p>
+<p>Check the channel_layout_map definition in
+‘<tt>libavcodec/audioconvert.c</tt>’ for the mapping between strings and
+channel layout values.
+</p>
+<p>Follow some examples:
+</p><table><tr><td> </td><td><pre class="example"># set the sample rate to 48000 Hz and the channel layout to CH_LAYOUT_MONO.
+anullsrc=48000:4
+
+# same as
+anullsrc=48000:mono
+</pre></td></tr></table>
+
+
+<a name="Audio-Sinks"></a>
+<h1 class="chapter"><a href="ffmpeg.html#toc-Audio-Sinks">21. Audio Sinks</a></h1>
+
+<p>Below is a description of the currently available audio sinks.
+</p>
+<a name="anullsink"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-anullsink">21.1 anullsink</a></h2>
+
+<p>Null audio sink; it does absolutely nothing with the input audio. It is
+mainly useful as a template and to be employed in analysis / debugging
+tools.
+</p>
+
+<a name="Video-Filters"></a>
+<h1 class="chapter"><a href="ffmpeg.html#toc-Video-Filters">22. Video Filters</a></h1>
+
+<p>When you configure your FFmpeg build, you can disable any of the
+existing filters using --disable-filters.
+The configure output will show the video filters included in your
+build.
+</p>
+<p>Below is a description of the currently available video filters.
+</p>
+<a name="blackframe"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-blackframe">22.1 blackframe</a></h2>
+
+<p>Detect frames that are (almost) completely black. Can be useful to
+detect chapter transitions or commercials. Output lines consist of
+the frame number of the detected frame, the percentage of blackness,
+the position in the file if known or -1 and the timestamp in seconds.
+</p>
+<p>In order to display the output lines, you need to set the loglevel at
+least to the AV_LOG_INFO value.
+</p>
+<p>The filter accepts the syntax:
+</p><table><tr><td> </td><td><pre class="example">blackframe[=<var>amount</var>:[<var>threshold</var>]]
+</pre></td></tr></table>
+
+<p><var>amount</var> is the percentage of the pixels that have to be below the
+threshold, and defaults to 98.
+</p>
+<p><var>threshold</var> is the threshold below which a pixel value is
+considered black, and defaults to 32.
+</p>
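+<p>For example, to log frames that are at least 98% black, using a pixel
+threshold of 32, without writing an output file (the input file name is
+illustrative):
+</p><table><tr><td>&nbsp;</td><td><pre class="example">ffmpeg -i input.mpg -vf blackframe=98:32 -f null -
+</pre></td></tr></table>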
+<a name="boxblur"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-boxblur">22.2 boxblur</a></h2>
+
+<p>Apply boxblur algorithm to the input video.
+</p>
+<p>This filter accepts the parameters:
+<var>luma_radius</var>:<var>luma_power</var>:<var>chroma_radius</var>:<var>chroma_power</var>:<var>alpha_radius</var>:<var>alpha_power</var>
+</p>
+<p>Chroma and alpha parameters are optional; if not specified they default
+to the corresponding values set for <var>luma_radius</var> and
+<var>luma_power</var>.
+</p>
+<p><var>luma_radius</var>, <var>chroma_radius</var>, and <var>alpha_radius</var> represent
+the radius in pixels of the box used for blurring the corresponding
+input plane. They are expressions, and can contain the following
+constants:
+</p><dl compact="compact">
+<dt> ‘<samp>w, h</samp>’</dt>
<dd><p>the input width and height in pixels
+</p>
+</dd>
+<dt> ‘<samp>cw, ch</samp>’</dt>
+<dd><p>the input chroma image width and height in pixels
+</p>
+</dd>
+<dt> ‘<samp>hsub, vsub</samp>’</dt>
+<dd><p>horizontal and vertical chroma subsample values. For example for the
+pixel format "yuv422p" <var>hsub</var> is 2 and <var>vsub</var> is 1.
+</p></dd>
+</dl>
+
<p>The radius must be a non-negative number, and must not be greater than
+the value of the expression <code>min(w,h)/2</code> for the luma and alpha planes,
+and of <code>min(cw,ch)/2</code> for the chroma planes.
+</p>
+<p><var>luma_power</var>, <var>chroma_power</var>, and <var>alpha_power</var> represent
+how many times the boxblur filter is applied to the corresponding
+plane.
+</p>
+<p>Some examples follow:
+</p>
+<ul>
+<li>
+Apply a boxblur filter with luma, chroma, and alpha radius
+set to 2:
+<table><tr><td> </td><td><pre class="example">boxblur=2:1
+</pre></td></tr></table>
+
+</li><li>
Set luma radius to 2, alpha and chroma radius to 0:
+<table><tr><td> </td><td><pre class="example">boxblur=2:1:0:0:0:0
+</pre></td></tr></table>
+
+</li><li>
Set luma and chroma radius to a fraction of the video dimension:
+<table><tr><td> </td><td><pre class="example">boxblur=min(h\,w)/10:1:min(cw\,ch)/10:1
+</pre></td></tr></table>
+
+</li></ul>
+
+<a name="copy"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-copy">22.3 copy</a></h2>
+
+<p>Copy the input source unchanged to the output. Mainly useful for
+testing purposes.
+</p>
+<a name="crop"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-crop">22.4 crop</a></h2>
+
+<p>Crop the input video to <var>out_w</var>:<var>out_h</var>:<var>x</var>:<var>y</var>.
+</p>
+<p>The parameters are expressions containing the following constants:
+</p>
+<dl compact="compact">
+<dt> ‘<samp>E, PI, PHI</samp>’</dt>
+<dd><p>the corresponding mathematical approximated values for e
(Euler's number), pi (Greek pi), and phi (the golden ratio)
+</p>
+</dd>
+<dt> ‘<samp>x, y</samp>’</dt>
+<dd><p>the computed values for <var>x</var> and <var>y</var>. They are evaluated for
+each new frame.
+</p>
+</dd>
+<dt> ‘<samp>in_w, in_h</samp>’</dt>
<dd><p>the input width and height
+</p>
+</dd>
+<dt> ‘<samp>iw, ih</samp>’</dt>
+<dd><p>same as <var>in_w</var> and <var>in_h</var>
+</p>
+</dd>
+<dt> ‘<samp>out_w, out_h</samp>’</dt>
<dd><p>the output (cropped) width and height
+</p>
+</dd>
+<dt> ‘<samp>ow, oh</samp>’</dt>
+<dd><p>same as <var>out_w</var> and <var>out_h</var>
+</p>
+</dd>
+<dt> ‘<samp>n</samp>’</dt>
<dd><p>the number of the input frame, starting from 0
+</p>
+</dd>
+<dt> ‘<samp>pos</samp>’</dt>
+<dd><p>the position in the file of the input frame, NAN if unknown
+</p>
+</dd>
+<dt> ‘<samp>t</samp>’</dt>
+<dd><p>timestamp expressed in seconds, NAN if the input timestamp is unknown
+</p>
+</dd>
+</dl>
+
+<p>The <var>out_w</var> and <var>out_h</var> parameters specify the expressions for
+the width and height of the output (cropped) video. They are
+evaluated just at the configuration of the filter.
+</p>
+<p>The default value of <var>out_w</var> is "in_w", and the default value of
+<var>out_h</var> is "in_h".
+</p>
+<p>The expression for <var>out_w</var> may depend on the value of <var>out_h</var>,
+and the expression for <var>out_h</var> may depend on <var>out_w</var>, but they
+cannot depend on <var>x</var> and <var>y</var>, as <var>x</var> and <var>y</var> are
+evaluated after <var>out_w</var> and <var>out_h</var>.
+</p>
+<p>The <var>x</var> and <var>y</var> parameters specify the expressions for the
+position of the top-left corner of the output (non-cropped) area. They
+are evaluated for each frame. If the evaluated value is not valid, it
+is approximated to the nearest valid value.
+</p>
+<p>The default value of <var>x</var> is "(in_w-out_w)/2", and the default
+value for <var>y</var> is "(in_h-out_h)/2", which set the cropped area at
+the center of the input image.
+</p>
+<p>The expression for <var>x</var> may depend on <var>y</var>, and the expression
+for <var>y</var> may depend on <var>x</var>.
+</p>
<p>Some examples follow:
+</p><table><tr><td> </td><td><pre class="example"># crop the central input area with size 100x100
+crop=100:100
+
+# crop the central input area with size 2/3 of the input video
+"crop=2/3*in_w:2/3*in_h"
+
+# crop the input video central square
+crop=in_h
+
+# delimit the rectangle with the top-left corner placed at position
+# 100:100 and the right-bottom corner corresponding to the right-bottom
+# corner of the input image.
+crop=in_w-100:in_h-100:100:100
+
+# crop 10 pixels from the left and right borders, and 20 pixels from
+# the top and bottom borders
+"crop=in_w-2*10:in_h-2*20"
+
+# keep only the bottom right quarter of the input image
+"crop=in_w/2:in_h/2:in_w/2:in_h/2"
+
+# crop height for getting Greek harmony
+"crop=in_w:1/PHI*in_w"
+
+# trembling effect
"crop=in_w/2:in_h/2:(in_w-out_w)/2+((in_w-out_w)/2)*sin(n/10):(in_h-out_h)/2+((in_h-out_h)/2)*sin(n/7)"
+
+# erratic camera effect depending on timestamp
"crop=in_w/2:in_h/2:(in_w-out_w)/2+((in_w-out_w)/2)*sin(t*10):(in_h-out_h)/2+((in_h-out_h)/2)*sin(t*13)"
+
+# set x depending on the value of y
+"crop=in_w/2:in_h/2:y:10+10*sin(n/10)"
+</pre></td></tr></table>
+
+<a name="cropdetect"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-cropdetect">22.5 cropdetect</a></h2>
+
+<p>Auto-detect crop size.
+</p>
<p>Calculate the necessary cropping parameters and print the recommended
+parameters through the logging system. The detected dimensions
+correspond to the non-black area of the input video.
+</p>
+<p>It accepts the syntax:
+</p><table><tr><td> </td><td><pre class="example">cropdetect[=<var>limit</var>[:<var>round</var>[:<var>reset</var>]]]
+</pre></td></tr></table>
+
+<dl compact="compact">
+<dt> ‘<samp>limit</samp>’</dt>
<dd><p>Threshold, which can optionally be specified from nothing (0) to
+everything (255), defaults to 24.
+</p>
+</dd>
+<dt> ‘<samp>round</samp>’</dt>
+<dd><p>Value which the width/height should be divisible by, defaults to
+16. The offset is automatically adjusted to center the video. Use 2 to
+get only even dimensions (needed for 4:2:2 video). 16 is best when
+encoding to most video codecs.
+</p>
+</dd>
+<dt> ‘<samp>reset</samp>’</dt>
+<dd><p>Counter that determines after how many frames cropdetect will reset
+the previously detected largest video area and start over to detect
+the current optimal crop area. Defaults to 0.
+</p>
+<p>This can be useful when channel logos distort the video area. 0
+indicates never reset and return the largest area encountered during
+playback.
+</p></dd>
+</dl>
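<p>For example, to detect the non-black area using the default limit of
24, but rounding the detected width and height to even values, as
needed for 4:2:2 video:
</p><table><tr><td>&nbsp;</td><td><pre class="example">cropdetect=24:2
</pre></td></tr></table>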
+
+<a name="drawbox"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-drawbox">22.6 drawbox</a></h2>
+
+<p>Draw a colored box on the input image.
+</p>
+<p>It accepts the syntax:
+</p><table><tr><td> </td><td><pre class="example">drawbox=<var>x</var>:<var>y</var>:<var>width</var>:<var>height</var>:<var>color</var>
+</pre></td></tr></table>
+
+<dl compact="compact">
+<dt> ‘<samp>x, y</samp>’</dt>
+<dd><p>Specify the top left corner coordinates of the box. Default to 0.
+</p>
+</dd>
+<dt> ‘<samp>width, height</samp>’</dt>
<dd><p>Specify the width and height of the box; if 0 they are interpreted as
+the input width and height. Default to 0.
+</p>
+</dd>
+<dt> ‘<samp>color</samp>’</dt>
<dd><p>Specify the color of the box to write; it can be the name of a color
+(case insensitive match) or a 0xRRGGBB[AA] sequence.
+</p></dd>
+</dl>
+
<p>Some examples follow:
+</p><table><tr><td> </td><td><pre class="example"># draw a black box around the edge of the input image
+drawbox
+
+# draw a box with color red and an opacity of 50%
drawbox=10:20:200:60:red@0.5
+</pre></td></tr></table>
+
+<a name="drawtext"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-drawtext">22.7 drawtext</a></h2>
+
+<p>Draw text string or text from specified file on top of video using the
+libfreetype library.
+</p>
+<p>To enable compilation of this filter you need to configure FFmpeg with
+<code>--enable-libfreetype</code>.
+</p>
+<p>The filter also recognizes strftime() sequences in the provided text
+and expands them accordingly. Check the documentation of strftime().
+</p>
+<p>The filter accepts parameters as a list of <var>key</var>=<var>value</var> pairs,
+separated by ":".
+</p>
+<p>The description of the accepted parameters follows.
+</p>
+<dl compact="compact">
+<dt> ‘<samp>fontfile</samp>’</dt>
+<dd><p>The font file to be used for drawing text. Path must be included.
+This parameter is mandatory.
+</p>
+</dd>
+<dt> ‘<samp>text</samp>’</dt>
+<dd><p>The text string to be drawn. The text must be a sequence of UTF-8
+encoded characters.
+This parameter is mandatory if no file is specified with the parameter
+<var>textfile</var>.
+</p>
+</dd>
+<dt> ‘<samp>textfile</samp>’</dt>
+<dd><p>A text file containing text to be drawn. The text must be a sequence
+of UTF-8 encoded characters.
+</p>
+<p>This parameter is mandatory if no text string is specified with the
+parameter <var>text</var>.
+</p>
+<p>If both text and textfile are specified, an error is thrown.
+</p>
+</dd>
+<dt> ‘<samp>x, y</samp>’</dt>
+<dd><p>The offsets where text will be drawn within the video frame.
+Relative to the top/left border of the output image.
+</p>
+<p>The default value of <var>x</var> and <var>y</var> is 0.
+</p>
+</dd>
+<dt> ‘<samp>fontsize</samp>’</dt>
+<dd><p>The font size to be used for drawing text.
+The default value of <var>fontsize</var> is 16.
+</p>
+</dd>
+<dt> ‘<samp>fontcolor</samp>’</dt>
+<dd><p>The color to be used for drawing fonts.
+Either a string (e.g. "red") or in 0xRRGGBB[AA] format
+(e.g. "0xff000033"), possibly followed by an alpha specifier.
+The default value of <var>fontcolor</var> is "black".
+</p>
+</dd>
+<dt> ‘<samp>boxcolor</samp>’</dt>
+<dd><p>The color to be used for drawing box around text.
+Either a string (e.g. "yellow") or in 0xRRGGBB[AA] format
+(e.g. "0xff00ff"), possibly followed by an alpha specifier.
+The default value of <var>boxcolor</var> is "white".
+</p>
+</dd>
+<dt> ‘<samp>box</samp>’</dt>
<dd><p>Used to draw a box around the text using the background color.
+Value should be either 1 (enable) or 0 (disable).
+The default value of <var>box</var> is 0.
+</p>
+</dd>
+<dt> ‘<samp>shadowx, shadowy</samp>’</dt>
+<dd><p>The x and y offsets for the text shadow position with respect to the
+position of the text. They can be either positive or negative
+values. Default value for both is "0".
+</p>
+</dd>
+<dt> ‘<samp>shadowcolor</samp>’</dt>
+<dd><p>The color to be used for drawing a shadow behind the drawn text. It
+can be a color name (e.g. "yellow") or a string in the 0xRRGGBB[AA]
+form (e.g. "0xff00ff"), possibly followed by an alpha specifier.
+The default value of <var>shadowcolor</var> is "black".
+</p>
+</dd>
+<dt> ‘<samp>ft_load_flags</samp>’</dt>
+<dd><p>Flags to be used for loading the fonts.
+</p>
+<p>The flags map the corresponding flags supported by libfreetype, and are
+a combination of the following values:
+</p><dl compact="compact">
+<dt> <var>default</var></dt>
+<dt> <var>no_scale</var></dt>
+<dt> <var>no_hinting</var></dt>
+<dt> <var>render</var></dt>
+<dt> <var>no_bitmap</var></dt>
+<dt> <var>vertical_layout</var></dt>
+<dt> <var>force_autohint</var></dt>
+<dt> <var>crop_bitmap</var></dt>
+<dt> <var>pedantic</var></dt>
+<dt> <var>ignore_global_advance_width</var></dt>
+<dt> <var>no_recurse</var></dt>
+<dt> <var>ignore_transform</var></dt>
+<dt> <var>monochrome</var></dt>
+<dt> <var>linear_design</var></dt>
+<dt> <var>no_autohint</var></dt>
+</dl>
+
+<p>Default value is "render".
+</p>
+<p>For more information consult the documentation for the FT_LOAD_*
+libfreetype flags.
+</p>
+</dd>
+<dt> ‘<samp>tabsize</samp>’</dt>
+<dd><p>The size in number of spaces to use for rendering the tab.
+Default value is 4.
+</p></dd>
+</dl>
+
+<p>For example the command:
+</p><table><tr><td> </td><td><pre class="example">drawtext="fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf: text='Test Text'"
+</pre></td></tr></table>
+
+<p>will draw "Test Text" with font FreeSerif, using the default values
+for the optional parameters.
+</p>
+<p>The command:
+</p><table><tr><td> </td><td><pre class="example">drawtext="fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf: text='Test Text':\
+ x=100: y=50: fontsize=24: fontcolor=yellow@0.2: box=1: boxcolor=red@0.2"
+</pre></td></tr></table>
+
+<p>will draw ’Test Text’ with font FreeSerif of size 24 at position x=100
and y=50 (counting from the top-left corner of the screen); the text is
+yellow with a red box around it. Both the text and the box have an
+opacity of 20%.
+</p>
+<p>Note that the double quotes are not necessary if spaces are not used
+within the parameter list.
+</p>
+<p>For more information about libfreetype, check:
+<a href="http://www.freetype.org/">http://www.freetype.org/</a>.
+</p>
+<a name="fade"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-fade">22.8 fade</a></h2>
+
+<p>Apply fade-in/out effect to input video.
+</p>
+<p>It accepts the parameters:
+<var>type</var>:<var>start_frame</var>:<var>nb_frames</var>
+</p>
<p><var>type</var> specifies the effect type, and can be either "in" for
+fade-in, or "out" for a fade-out effect.
+</p>
+<p><var>start_frame</var> specifies the number of the start frame for starting
+to apply the fade effect.
+</p>
+<p><var>nb_frames</var> specifies the number of frames for which the fade
+effect has to last. At the end of the fade-in effect the output video
+will have the same intensity as the input video, at the end of the
+fade-out transition the output video will be completely black.
+</p>
<p>A few usage examples follow, also usable as test scenarios.
+</p><table><tr><td> </td><td><pre class="example"># fade in first 30 frames of video
+fade=in:0:30
+
+# fade out last 45 frames of a 200-frame video
+fade=out:155:45
+
+# fade in first 25 frames and fade out last 25 frames of a 1000-frame video
+fade=in:0:25, fade=out:975:25
+
+# make first 5 frames black, then fade in from frame 5-24
+fade=in:5:20
+</pre></td></tr></table>
+
+<a name="fieldorder"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-fieldorder">22.9 fieldorder</a></h2>
+
+<p>Transform the field order of the input video.
+</p>
+<p>It accepts one parameter which specifies the required field order that
+the input interlaced video will be transformed to. The parameter can
+assume one of the following values:
+</p>
+<dl compact="compact">
+<dt> ‘<samp>0 or bff</samp>’</dt>
+<dd><p>output bottom field first
+</p></dd>
+<dt> ‘<samp>1 or tff</samp>’</dt>
+<dd><p>output top field first
+</p></dd>
+</dl>
+
+<p>Default value is "tff".
+</p>
+<p>Transformation is achieved by shifting the picture content up or down
+by one line, and filling the remaining line with appropriate picture content.
+This method is consistent with most broadcast field order converters.
+</p>
+<p>If the input video is not flagged as being interlaced, or it is already
flagged as being of the required output field order, then this filter does
+not alter the incoming video.
+</p>
+<p>This filter is very useful when converting to or from PAL DV material,
+which is bottom field first.
+</p>
+<p>For example:
+</p><table><tr><td> </td><td><pre class="example">./ffmpeg -i in.vob -vf "fieldorder=bff" out.dv
+</pre></td></tr></table>
+
+<a name="fifo"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-fifo">22.10 fifo</a></h2>
+
+<p>Buffer input images and send them when they are requested.
+</p>
+<p>This filter is mainly useful when auto-inserted by the libavfilter
+framework.
+</p>
+<p>The filter does not take parameters.
+</p>
+<a name="format"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-format">22.11 format</a></h2>
+
+<p>Convert the input video to one of the specified pixel formats.
+Libavfilter will try to pick one that is supported for the input to
+the next filter.
+</p>
+<p>The filter accepts a list of pixel format names, separated by ":",
+for example "yuv420p:monow:rgb24".
+</p>
+<p>Some examples follow:
+</p><table><tr><td> </td><td><pre class="example"># convert the input video to the format "yuv420p"
+format=yuv420p
+
+# convert the input video to any of the formats in the list
+format=yuv420p:yuv444p:yuv410p
+</pre></td></tr></table>
+
+<p><a name="frei0r"></a>
+</p><a name="frei0r-1"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-frei0r-1">22.12 frei0r</a></h2>
+
+<p>Apply a frei0r effect to the input video.
+</p>
+<p>To enable compilation of this filter you need to install the frei0r
header and configure FFmpeg with <code>--enable-frei0r</code>.
+</p>
+<p>The filter supports the syntax:
+</p><table><tr><td> </td><td><pre class="example"><var>filter_name</var>[{:|=}<var>param1</var>:<var>param2</var>:...:<var>paramN</var>]
+</pre></td></tr></table>
+
<p><var>filter_name</var> is the name of the frei0r effect to load. If the
+environment variable <code>FREI0R_PATH</code> is defined, the frei0r effect
+is searched in each one of the directories specified by the colon
separated list in <code>FREI0R_PATH</code>, otherwise in the standard frei0r
+paths, which are in this order: ‘<tt>HOME/.frei0r-1/lib/</tt>’,
+‘<tt>/usr/local/lib/frei0r-1/</tt>’, ‘<tt>/usr/lib/frei0r-1/</tt>’.
+</p>
+<p><var>param1</var>, <var>param2</var>, ... , <var>paramN</var> specify the parameters
+for the frei0r effect.
+</p>
<p>A frei0r effect parameter can be a boolean (whose values are specified
with "y" and "n"), a double, a color (specified by the syntax
<var>R</var>/<var>G</var>/<var>B</var>, <var>R</var>, <var>G</var>, and <var>B</var> being float
numbers from 0.0 to 1.0, or by an <code>av_parse_color()</code> color
description), a position (specified by the syntax <var>X</var>/<var>Y</var>,
<var>X</var> and <var>Y</var> being float numbers) or a string.
+</p>
+<p>The number and kind of parameters depend on the loaded effect. If an
+effect parameter is not specified the default value is set.
+</p>
+<p>Some examples follow:
+</p><table><tr><td> </td><td><pre class="example"># apply the distort0r effect, set the first two double parameters
+frei0r=distort0r:0.5:0.01
+
+# apply the colordistance effect, takes a color as first parameter
+frei0r=colordistance:0.2/0.3/0.4
+frei0r=colordistance:violet
+frei0r=colordistance:0x112233
+
+# apply the perspective effect, specify the top left and top right
+# image positions
+frei0r=perspective:0.2/0.2:0.8/0.2
+</pre></td></tr></table>
+
+<p>For more information see:
+<a href="http://piksel.org/frei0r">http://piksel.org/frei0r</a>
+</p>
+<a name="gradfun"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-gradfun">22.13 gradfun</a></h2>
+
+<p>Fix the banding artifacts that are sometimes introduced into nearly flat
+regions by truncation to 8bit colordepth.
+Interpolate the gradients that should go where the bands are, and
+dither them.
+</p>
+<p>This filter is designed for playback only. Do not use it prior to
+lossy compression, because compression tends to lose the dither and
+bring back the bands.
+</p>
+<p>The filter takes two optional parameters, separated by ’:’:
+<var>strength</var>:<var>radius</var>
+</p>
+<p><var>strength</var> is the maximum amount by which the filter will change
+any one pixel. Also the threshold for detecting nearly flat
+regions. Acceptable values range from .51 to 255, default value is
+1.2, out-of-range values will be clipped to the valid range.
+</p>
+<p><var>radius</var> is the neighborhood to fit the gradient to. A larger
+radius makes for smoother gradients, but also prevents the filter from
+modifying the pixels near detailed regions. Acceptable values are
+8-32, default value is 16, out-of-range values will be clipped to the
+valid range.
+</p>
+<table><tr><td> </td><td><pre class="example"># default parameters
+gradfun=1.2:16
+
+# omitting radius
+gradfun=1.2
+</pre></td></tr></table>
+
+<a name="hflip"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-hflip">22.14 hflip</a></h2>
+
+<p>Flip the input video horizontally.
+</p>
<p>For example, to horizontally flip the input video with
‘<tt>ffmpeg</tt>’:
+</p><table><tr><td> </td><td><pre class="example">ffmpeg -i in.avi -vf "hflip" out.avi
+</pre></td></tr></table>
+
+<a name="hqdn3d"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-hqdn3d">22.15 hqdn3d</a></h2>
+
+<p>High precision/quality 3d denoise filter. This filter aims to reduce
image noise, producing smooth images and making still images really
+still. It should enhance compressibility.
+</p>
+<p>It accepts the following optional parameters:
+<var>luma_spatial</var>:<var>chroma_spatial</var>:<var>luma_tmp</var>:<var>chroma_tmp</var>
+</p>
+<dl compact="compact">
+<dt> ‘<samp>luma_spatial</samp>’</dt>
+<dd><p>a non-negative float number which specifies spatial luma strength,
+defaults to 4.0
+</p>
+</dd>
+<dt> ‘<samp>chroma_spatial</samp>’</dt>
+<dd><p>a non-negative float number which specifies spatial chroma strength,
+defaults to 3.0*<var>luma_spatial</var>/4.0
+</p>
+</dd>
+<dt> ‘<samp>luma_tmp</samp>’</dt>
+<dd><p>a float number which specifies luma temporal strength, defaults to
+6.0*<var>luma_spatial</var>/4.0
+</p>
+</dd>
+<dt> ‘<samp>chroma_tmp</samp>’</dt>
+<dd><p>a float number which specifies chroma temporal strength, defaults to
+<var>luma_tmp</var>*<var>chroma_spatial</var>/<var>luma_spatial</var>
+</p></dd>
+</dl>
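<p>Some examples follow:
</p><table><tr><td>&nbsp;</td><td><pre class="example"># apply the filter with the default strengths
hqdn3d

# apply a stronger denoising, setting the spatial luma strength to 8;
# the remaining strengths are derived from it according to the
# defaults described above
hqdn3d=8
</pre></td></tr></table>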
+
+<a name="lut_002c-lutrgb_002c-lutyuv"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-lut_002c-lutrgb_002c-lutyuv">22.16 lut, lutrgb, lutyuv</a></h2>
+
+<p>Compute a look-up table for binding each pixel component input value
to an output value, and apply it to the input video.
+</p>
+<p><var>lutyuv</var> applies a lookup table to a YUV input video, <var>lutrgb</var>
+to an RGB input video.
+</p>
<p>These filters accept a ":"-separated list of options, which
+specify the expressions used for computing the lookup table for the
+corresponding pixel component values.
+</p>
+<p>The <var>lut</var> filter requires either YUV or RGB pixel formats in
+input, and accepts the options:
+</p><dl compact="compact">
+<dd><p><var>c0</var> (first pixel component)
+<var>c1</var> (second pixel component)
+<var>c2</var> (third pixel component)
+<var>c3</var> (fourth pixel component, corresponds to the alpha component)
+</p></dd>
+</dl>
+
<p>The exact component associated with each option depends on the input
format.
+</p>
+<p>The <var>lutrgb</var> filter requires RGB pixel formats in input, and
+accepts the options:
+</p><dl compact="compact">
+<dd><p><var>r</var> (red component)
+<var>g</var> (green component)
+<var>b</var> (blue component)
+<var>a</var> (alpha component)
+</p></dd>
+</dl>
+
+<p>The <var>lutyuv</var> filter requires YUV pixel formats in input, and
+accepts the options:
+</p><dl compact="compact">
+<dd><p><var>y</var> (Y/luminance component)
+<var>u</var> (U/Cb component)
+<var>v</var> (V/Cr component)
+<var>a</var> (alpha component)
+</p></dd>
+</dl>
+
+<p>The expressions can contain the following constants and functions:
+</p>
+<dl compact="compact">
+<dt> ‘<samp>E, PI, PHI</samp>’</dt>
+<dd><p>the corresponding mathematical approximated values for e
(Euler's number), pi (Greek pi), and phi (the golden ratio)
+</p>
+</dd>
+<dt> ‘<samp>w, h</samp>’</dt>
<dd><p>the input width and height
+</p>
+</dd>
+<dt> ‘<samp>val</samp>’</dt>
+<dd><p>input value for the pixel component
+</p>
+</dd>
+<dt> ‘<samp>clipval</samp>’</dt>
+<dd><p>the input value clipped in the <var>minval</var>-<var>maxval</var> range
+</p>
+</dd>
+<dt> ‘<samp>maxval</samp>’</dt>
+<dd><p>maximum value for the pixel component
+</p>
+</dd>
+<dt> ‘<samp>minval</samp>’</dt>
+<dd><p>minimum value for the pixel component
+</p>
+</dd>
+<dt> ‘<samp>negval</samp>’</dt>
+<dd><p>the negated value for the pixel component value clipped in the
<var>minval</var>-<var>maxval</var> range; it corresponds to the expression
+"maxval-clipval+minval"
+</p>
+</dd>
+<dt> ‘<samp>clip(val)</samp>’</dt>
+<dd><p>the computed value in <var>val</var> clipped in the
+<var>minval</var>-<var>maxval</var> range
+</p>
+</dd>
+<dt> ‘<samp>gammaval(gamma)</samp>’</dt>
+<dd><p>the computed gamma correction value of the pixel component value
+clipped in the <var>minval</var>-<var>maxval</var> range, corresponds to the
+expression
+"pow((clipval-minval)/(maxval-minval)\,<var>gamma</var>)*(maxval-minval)+minval"
+</p>
+</dd>
+</dl>
+
+<p>All expressions default to "val".
+</p>
+<p>Some examples follow:
+</p><table><tr><td> </td><td><pre class="example"># negate input video
+lutrgb="r=maxval+minval-val:g=maxval+minval-val:b=maxval+minval-val"
+lutyuv="y=maxval+minval-val:u=maxval+minval-val:v=maxval+minval-val"
+
+# the above is the same as
+lutrgb="r=negval:g=negval:b=negval"
+lutyuv="y=negval:u=negval:v=negval"
+
+# negate luminance
+lutyuv=negval
+
+# remove chroma components, turns the video into a graytone image
+lutyuv="u=128:v=128"
+
+# apply a luma burning effect
+lutyuv="y=2*val"
+
+# remove green and blue components
+lutrgb="g=0:b=0"
+
+# set a constant alpha channel value on input
+format=rgba,lutrgb=a="maxval-minval/2"
+
+# correct luminance gamma by a 0.5 factor
+lutyuv=y=gammaval(0.5)
+</pre></td></tr></table>
+
+<a name="mp"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-mp">22.17 mp</a></h2>
+
+<p>Apply an MPlayer filter to the input video.
+</p>
+<p>This filter provides a wrapper around most of the filters of
+MPlayer/MEncoder.
+</p>
+<p>This wrapper is considered experimental. Some of the wrapped filters
+may not work properly and we may drop support for them, as they will
+be implemented natively into FFmpeg. Thus you should avoid
+depending on them when writing portable scripts.
+</p>
<p>The filter accepts the parameters:
+<var>filter_name</var>[:=]<var>filter_params</var>
+</p>
+<p><var>filter_name</var> is the name of a supported MPlayer filter,
+<var>filter_params</var> is a string containing the parameters accepted by
+the named filter.
+</p>
+<p>The list of the currently supported filters follows:
+</p><dl compact="compact">
+<dt> <var>2xsai</var></dt>
+<dt> <var>decimate</var></dt>
+<dt> <var>delogo</var></dt>
+<dt> <var>denoise3d</var></dt>
+<dt> <var>detc</var></dt>
+<dt> <var>dint</var></dt>
+<dt> <var>divtc</var></dt>
+<dt> <var>down3dright</var></dt>
+<dt> <var>dsize</var></dt>
+<dt> <var>eq2</var></dt>
+<dt> <var>eq</var></dt>
+<dt> <var>field</var></dt>
+<dt> <var>fil</var></dt>
+<dt> <var>fixpts</var></dt>
+<dt> <var>framestep</var></dt>
+<dt> <var>fspp</var></dt>
+<dt> <var>geq</var></dt>
+<dt> <var>gradfun</var></dt>
+<dt> <var>harddup</var></dt>
+<dt> <var>hqdn3d</var></dt>
+<dt> <var>hue</var></dt>
+<dt> <var>il</var></dt>
+<dt> <var>ilpack</var></dt>
+<dt> <var>ivtc</var></dt>
+<dt> <var>kerndeint</var></dt>
+<dt> <var>mcdeint</var></dt>
+<dt> <var>mirror</var></dt>
+<dt> <var>noise</var></dt>
+<dt> <var>ow</var></dt>
+<dt> <var>palette</var></dt>
+<dt> <var>perspective</var></dt>
+<dt> <var>phase</var></dt>
+<dt> <var>pp7</var></dt>
+<dt> <var>pullup</var></dt>
+<dt> <var>qp</var></dt>
+<dt> <var>rectangle</var></dt>
+<dt> <var>remove-logo</var></dt>
+<dt> <var>rotate</var></dt>
+<dt> <var>sab</var></dt>
+<dt> <var>screenshot</var></dt>
+<dt> <var>smartblur</var></dt>
+<dt> <var>softpulldown</var></dt>
+<dt> <var>softskip</var></dt>
+<dt> <var>spp</var></dt>
+<dt> <var>swapuv</var></dt>
+<dt> <var>telecine</var></dt>
+<dt> <var>test</var></dt>
+<dt> <var>tile</var></dt>
+<dt> <var>tinterlace</var></dt>
+<dt> <var>unsharp</var></dt>
+<dt> <var>uspp</var></dt>
+<dt> <var>yuvcsp</var></dt>
+<dt> <var>yvu9</var></dt>
+</dl>
+
<p>The parameter syntax and behavior for the listed filters are the same
as those of the corresponding MPlayer filters. For detailed instructions check
+the "VIDEO FILTERS" section in the MPlayer manual.
+</p>
+<p>Some examples follow:
+</p><table><tr><td> </td><td><pre class="example"># remove a logo by interpolating the surrounding pixels
+mp=delogo=200:200:80:20:1
+
+# adjust gamma, brightness, contrast
+mp=eq2=1.0:2:0.5
+
+# tweak hue and saturation
+mp=hue=100:-10
+</pre></td></tr></table>
+
+<p>See also mplayer(1), <a href="http://www.mplayerhq.hu/">http://www.mplayerhq.hu/</a>.
+</p>
+<a name="negate"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-negate">22.18 negate</a></h2>
+
+<p>Negate input video.
+</p>
<p>This filter accepts an integer parameter; if non-zero, it negates the
alpha component (if available). The default value is 0.
+</p>
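<p>Some examples follow:
</p><table><tr><td>&nbsp;</td><td><pre class="example"># negate the input video
negate

# negate the alpha component too, if available
negate=1
</pre></td></tr></table>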
+<a name="noformat"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-noformat">22.19 noformat</a></h2>
+
+<p>Force libavfilter not to use any of the specified pixel formats for the
+input to the next filter.
+</p>
+<p>The filter accepts a list of pixel format names, separated by ":",
+for example "yuv420p:monow:rgb24".
+</p>
+<p>Some examples follow:
+</p><table><tr><td> </td><td><pre class="example"># force libavfilter to use a format different from "yuv420p" for the
+# input to the vflip filter
+noformat=yuv420p,vflip
+
+# convert the input video to any of the formats not contained in the list
+noformat=yuv420p:yuv444p:yuv410p
+</pre></td></tr></table>
+
+<a name="null-1"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-null-1">22.20 null</a></h2>
+
+<p>Pass the video source unchanged to the output.
+</p>
+<a name="ocv"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-ocv">22.21 ocv</a></h2>
+
+<p>Apply video transform using libopencv.
+</p>
<p>To enable this filter, install the libopencv library and headers and
configure FFmpeg with <code>--enable-libopencv</code>.
+</p>
+<p>The filter takes the parameters: <var>filter_name</var>{:=}<var>filter_params</var>.
+</p>
+<p><var>filter_name</var> is the name of the libopencv filter to apply.
+</p>
+<p><var>filter_params</var> specifies the parameters to pass to the libopencv
+filter. If not specified the default values are assumed.
+</p>
<p>Refer to the official libopencv documentation for more precise
information:
+<a href="http://opencv.willowgarage.com/documentation/c/image_filtering.html">http://opencv.willowgarage.com/documentation/c/image_filtering.html</a>
+</p>
<p>The list of supported libopencv filters follows.
+</p>
+<p><a name="dilate"></a>
+</p><a name="dilate-1"></a>
+<h3 class="subsection"><a href="ffmpeg.html#toc-dilate-1">22.21.1 dilate</a></h3>
+
+<p>Dilate an image by using a specific structuring element.
+This filter corresponds to the libopencv function <code>cvDilate</code>.
+</p>
+<p>It accepts the parameters: <var>struct_el</var>:<var>nb_iterations</var>.
+</p>
+<p><var>struct_el</var> represents a structuring element, and has the syntax:
+<var>cols</var>x<var>rows</var>+<var>anchor_x</var>x<var>anchor_y</var>/<var>shape</var>
+</p>
<p><var>cols</var> and <var>rows</var> represent the number of columns and rows of
+the structuring element, <var>anchor_x</var> and <var>anchor_y</var> the anchor
+point, and <var>shape</var> the shape for the structuring element, and
+can be one of the values "rect", "cross", "ellipse", "custom".
+</p>
+<p>If the value for <var>shape</var> is "custom", it must be followed by a
+string of the form "=<var>filename</var>". The file with name
+<var>filename</var> is assumed to represent a binary image, with each
+printable character corresponding to a bright pixel. When a custom
<var>shape</var> is used, <var>cols</var> and <var>rows</var> are ignored, and the number
of columns and rows of the read file are assumed instead.
+</p>
+<p>The default value for <var>struct_el</var> is "3x3+0x0/rect".
+</p>
+<p><var>nb_iterations</var> specifies the number of times the transform is
+applied to the image, and defaults to 1.
+</p>
<p>Some examples follow:
+</p><table><tr><td> </td><td><pre class="example"># use the default values
+ocv=dilate
+
+# dilate using a structuring element with a 5x5 cross, iterate two times
+ocv=dilate=5x5+2x2/cross:2
+
+# read the shape from the file diamond.shape, iterate two times
+# the file diamond.shape may contain a pattern of characters like this:
+# *
+# ***
+# *****
+# ***
+# *
+# the specified cols and rows are ignored (but not the anchor point coordinates)
+ocv=0x0+2x2/custom=diamond.shape:2
+</pre></td></tr></table>
+
+<a name="erode"></a>
+<h3 class="subsection"><a href="ffmpeg.html#toc-erode">22.21.2 erode</a></h3>
+
+<p>Erode an image by using a specific structuring element.
+This filter corresponds to the libopencv function <code>cvErode</code>.
+</p>
+<p>The filter accepts the parameters: <var>struct_el</var>:<var>nb_iterations</var>,
+with the same syntax and semantics as the <a href="#dilate">dilate</a> filter.
+</p>
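+<p>For example, following the same syntax and semantics as the
+<a href="#dilate">dilate</a> examples above (a sketch, not an exhaustive
+reference):
+</p><table><tr><td>&nbsp;</td><td><pre class="example"># use the default values
+ocv=erode
+
+# erode using a structuring element with a 5x5 cross, iterate two times
+ocv=erode=5x5+2x2/cross:2
+</pre></td></tr></table>
+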
+<a name="smooth"></a>
+<h3 class="subsection"><a href="ffmpeg.html#toc-smooth">22.21.3 smooth</a></h3>
+
+<p>Smooth the input video.
+</p>
+<p>The filter takes the following parameters:
+<var>type</var>:<var>param1</var>:<var>param2</var>:<var>param3</var>:<var>param4</var>.
+</p>
+<p><var>type</var> is the type of smooth filter to apply, and can be one of
+the following values: "blur", "blur_no_scale", "median", "gaussian",
+"bilateral". The default value is "gaussian".
+</p>
+<p><var>param1</var>, <var>param2</var>, <var>param3</var>, and <var>param4</var> are
+parameters whose meanings depend on the smooth type. <var>param1</var> and
+<var>param2</var> accept positive integer values or 0, <var>param3</var> and
+<var>param4</var> accept float values.
+</p>
+<p>The default value for <var>param1</var> is 3, the default value for the
+other parameters is 0.
+</p>
+<p>These parameters correspond to the parameters assigned to the
+libopencv function <code>cvSmooth</code>.
+</p>
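+<p>For example (a sketch; parameter semantics follow those of
+<code>cvSmooth</code>):
+</p><table><tr><td>&nbsp;</td><td><pre class="example"># apply a gaussian smooth with the default parameters
+ocv=smooth=gaussian
+
+# apply a median smooth with an aperture of size 5
+ocv=smooth=median:5
+</pre></td></tr></table>
+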
+<a name="overlay"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-overlay">22.22 overlay</a></h2>
+
+<p>Overlay one video on top of another.
+</p>
+<p>It takes two inputs and one output; the first input is the "main"
+video, on which the second input is overlaid.
+</p>
+<p>It accepts the parameters: <var>x</var>:<var>y</var>.
+</p>
+<p><var>x</var> is the x coordinate of the overlaid video on the main video,
+<var>y</var> is the y coordinate. The parameters are expressions containing
+the following constants:
+</p>
+<dl compact="compact">
+<dt> ‘<samp>main_w, main_h</samp>’</dt>
+<dd><p>main input width and height
+</p>
+</dd>
+<dt> ‘<samp>W, H</samp>’</dt>
+<dd><p>same as <var>main_w</var> and <var>main_h</var>
+</p>
+</dd>
+<dt> ‘<samp>overlay_w, overlay_h</samp>’</dt>
+<dd><p>overlay input width and height
+</p>
+</dd>
+<dt> ‘<samp>w, h</samp>’</dt>
+<dd><p>same as <var>overlay_w</var> and <var>overlay_h</var>
+</p></dd>
+</dl>
+
+<p>Be aware that frames are taken from each input video in timestamp
+order, hence, if their initial timestamps differ, it is a good idea
+to pass the two inputs through a <var>setpts=PTS-STARTPTS</var> filter to
+have them both start from timestamp zero, as is done in the example for
+the <var>movie</var> filter.
+</p>
+<p>Some examples follow:
+</p><table><tr><td> </td><td><pre class="example"># draw the overlay at 10 pixels from the bottom right
+# corner of the main video.
+overlay=main_w-overlay_w-10:main_h-overlay_h-10
+
+# insert a transparent PNG logo in the bottom left corner of the input
+movie=logo.png [logo];
+[in][logo] overlay=10:main_h-overlay_h-10 [out]
+
+# insert 2 different transparent PNG logos (second logo on bottom
+# right corner):
+movie=logo1.png [logo1];
+movie=logo2.png [logo2];
+[in][logo1] overlay=10:H-h-10 [in+logo1];
+[in+logo1][logo2] overlay=W-w-10:H-h-10 [out]
+
+# add a transparent color layer on top of the main video,
+# WxH specifies the size of the main input to the overlay filter
+color=red.3:WxH [over]; [in][over] overlay [out]
+</pre></td></tr></table>
+
+<p>You can chain together more overlays, but the efficiency of such an
+approach is yet to be tested.
+</p>
+<a name="pad"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-pad">22.23 pad</a></h2>
+
+<p>Add padding to the input image, and place the original input at the
+given coordinates <var>x</var>, <var>y</var>.
+</p>
+<p>It accepts the following parameters:
+<var>width</var>:<var>height</var>:<var>x</var>:<var>y</var>:<var>color</var>.
+</p>
+<p>The parameters <var>width</var>, <var>height</var>, <var>x</var>, and <var>y</var> are
+expressions containing the following constants:
+</p>
+<dl compact="compact">
+<dt> ‘<samp>E, PI, PHI</samp>’</dt>
+<dd><p>the corresponding mathematical approximated values for e
+(Euler's number), pi (Greek pi), and phi (the golden ratio)
+</p>
+</dd>
+<dt> ‘<samp>in_w, in_h</samp>’</dt>
+<dd><p>the input video width and height
+</p>
+</dd>
+<dt> ‘<samp>iw, ih</samp>’</dt>
+<dd><p>same as <var>in_w</var> and <var>in_h</var>
+</p>
+</dd>
+<dt> ‘<samp>out_w, out_h</samp>’</dt>
+<dd><p>the output width and height, that is the size of the padded area as
+specified by the <var>width</var> and <var>height</var> expressions
+</p>
+</dd>
+<dt> ‘<samp>ow, oh</samp>’</dt>
+<dd><p>same as <var>out_w</var> and <var>out_h</var>
+</p>
+</dd>
+<dt> ‘<samp>x, y</samp>’</dt>
+<dd><p>x and y offsets as specified by the <var>x</var> and <var>y</var>
+expressions, or NAN if not yet specified
+</p>
+</dd>
+<dt> ‘<samp>dar, a</samp>’</dt>
+<dd><p>input display aspect ratio, same as <var>iw</var> / <var>ih</var>
+</p>
+</dd>
+<dt> ‘<samp>sar</samp>’</dt>
+<dd><p>input sample aspect ratio
+</p>
+</dd>
+<dt> ‘<samp>hsub, vsub</samp>’</dt>
+<dd><p>horizontal and vertical chroma subsample values. For example for the
+pixel format "yuv422p" <var>hsub</var> is 2 and <var>vsub</var> is 1.
+</p></dd>
+</dl>
+
+<p>A description of the accepted parameters follows.
+</p>
+<dl compact="compact">
+<dt> ‘<samp>width, height</samp>’</dt>
+<dd>
+<p>Specify the size of the output image with the paddings added. If the
+value for <var>width</var> or <var>height</var> is 0, the corresponding input size
+is used for the output.
+</p>
+<p>The <var>width</var> expression can reference the value set by the
+<var>height</var> expression, and vice versa.
+</p>
+<p>The default value of <var>width</var> and <var>height</var> is 0.
+</p>
+</dd>
+<dt> ‘<samp>x, y</samp>’</dt>
+<dd>
+<p>Specify the offsets where to place the input image in the padded area
+with respect to the top/left border of the output image.
+</p>
+<p>The <var>x</var> expression can reference the value set by the <var>y</var>
+expression, and vice versa.
+</p>
+<p>The default value of <var>x</var> and <var>y</var> is 0.
+</p>
+</dd>
+<dt> ‘<samp>color</samp>’</dt>
+<dd>
+<p>Specify the color of the padded area. It can be the name of a color
+(case insensitive match) or a 0xRRGGBB[AA] sequence.
+</p>
+<p>The default value of <var>color</var> is "black".
+</p>
+</dd>
+</dl>
+
+<p>Some examples follow:
+</p>
+<table><tr><td> </td><td><pre class="example"># Add paddings with color "violet" to the input video. Output video
+# size is 640x480, the top-left corner of the input video is placed at
+# column 0, row 40.
+pad=640:480:0:40:violet
+
+# pad the input to get an output with dimensions increased by 3/2,
+# and put the input video at the center of the padded area
+pad="3/2*iw:3/2*ih:(ow-iw)/2:(oh-ih)/2"
+
+# pad the input to get a square output with size equal to the maximum
+# value between the input width and height, and put the input video at
+# the center of the padded area
+pad="max(iw\,ih):ow:(ow-iw)/2:(oh-ih)/2"
+
+# pad the input to get a final w/h ratio of 16:9
+pad="ih*16/9:ih:(ow-iw)/2:(oh-ih)/2"
+
+# double output size and put the input video in the bottom-right
+# corner of the output padded area
+pad="2*iw:2*ih:ow-iw:oh-ih"
+</pre></td></tr></table>
+
+<a name="pixdesctest"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-pixdesctest">22.24 pixdesctest</a></h2>
+
+<p>Pixel format descriptor test filter, mainly useful for internal
+testing. The output video should be equal to the input video.
+</p>
+<p>For example:
+</p><table><tr><td> </td><td><pre class="example">format=monow, pixdesctest
+</pre></td></tr></table>
+
+<p>can be used to test the monowhite pixel format descriptor definition.
+</p>
+<a name="scale"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-scale">22.25 scale</a></h2>
+
+<p>Scale the input video to <var>width</var>:<var>height</var> and/or convert the image format.
+</p>
+<p>The parameters <var>width</var> and <var>height</var> are expressions containing
+the following constants:
+</p>
+<dl compact="compact">
+<dt> ‘<samp>E, PI, PHI</samp>’</dt>
+<dd><p>the corresponding mathematical approximated values for e
+(Euler's number), pi (Greek pi), and phi (the golden ratio)
+</p>
+</dd>
+<dt> ‘<samp>in_w, in_h</samp>’</dt>
+<dd><p>the input width and height
+</p>
+</dd>
+<dt> ‘<samp>iw, ih</samp>’</dt>
+<dd><p>same as <var>in_w</var> and <var>in_h</var>
+</p>
+</dd>
+<dt> ‘<samp>out_w, out_h</samp>’</dt>
+<dd><p>the output (scaled) width and height
+</p>
+</dd>
+<dt> ‘<samp>ow, oh</samp>’</dt>
+<dd><p>same as <var>out_w</var> and <var>out_h</var>
+</p>
+</dd>
+<dt> ‘<samp>dar, a</samp>’</dt>
+<dd><p>input display aspect ratio, same as <var>iw</var> / <var>ih</var>
+</p>
+</dd>
+<dt> ‘<samp>sar</samp>’</dt>
+<dd><p>input sample aspect ratio
+</p>
+</dd>
+<dt> ‘<samp>hsub, vsub</samp>’</dt>
+<dd><p>horizontal and vertical chroma subsample values. For example for the
+pixel format "yuv422p" <var>hsub</var> is 2 and <var>vsub</var> is 1.
+</p></dd>
+</dl>
+
+<p>If the input image format is different from the format requested by
+the next filter, the scale filter will convert the input to the
+requested format.
+</p>
+<p>If the value for <var>width</var> or <var>height</var> is 0, the respective input
+size is used for the output.
+</p>
+<p>If the value for <var>width</var> or <var>height</var> is -1, the scale filter will
+use, for the respective output size, a value that maintains the aspect
+ratio of the input image.
+</p>
+<p>The default value of <var>width</var> and <var>height</var> is 0.
+</p>
+<p>Some examples follow:
+</p><table><tr><td> </td><td><pre class="example"># scale the input video to a size of 200x100.
+scale=200:100
+
+# scale the input to 2x
+scale=2*iw:2*ih
+# the above is the same as
+scale=2*in_w:2*in_h
+
+# scale the input to half size
+scale=iw/2:ih/2
+
+# increase the width, and set the height to the same size
+scale=3/2*iw:ow
+
+# seek for Greek harmony
+scale=iw:1/PHI*iw
+scale=ih*PHI:ih
+
+# set the height to 3/5 of the input height, and the width to 3/2 of
+# the output height
+scale=3/2*oh:3/5*ih
+
+# increase the size, but make the size a multiple of the chroma
+scale="trunc(3/2*iw/hsub)*hsub:trunc(3/2*ih/vsub)*vsub"
+
+# increase the width to a maximum of 500 pixels, keep the same input aspect ratio
+scale='min(500\, iw*3/2):-1'
+</pre></td></tr></table>
+
+<a name="select"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-select">22.26 select</a></h2>
+<p>Select frames to pass in output.
+</p>
+<p>It accepts an expression, which is evaluated for each input
+frame. If the expression evaluates to a non-zero value, the frame
+is selected and passed to the output, otherwise it is discarded.
+</p>
+<p>The expression can contain the following constants:
+</p>
+<dl compact="compact">
+<dt> ‘<samp>PI</samp>’</dt>
+<dd><p>Greek PI
+</p>
+</dd>
+<dt> ‘<samp>PHI</samp>’</dt>
+<dd><p>golden ratio
+</p>
+</dd>
+<dt> ‘<samp>E</samp>’</dt>
+<dd><p>Euler number
+</p>
+</dd>
+<dt> ‘<samp>n</samp>’</dt>
+<dd><p>the sequential number of the filtered frame, starting from 0
+</p>
+</dd>
+<dt> ‘<samp>selected_n</samp>’</dt>
+<dd><p>the sequential number of the selected frame, starting from 0
+</p>
+</dd>
+<dt> ‘<samp>prev_selected_n</samp>’</dt>
+<dd><p>the sequential number of the last selected frame, NAN if undefined
+</p>
+</dd>
+<dt> ‘<samp>TB</samp>’</dt>
+<dd><p>timebase of the input timestamps
+</p>
+</dd>
+<dt> ‘<samp>pts</samp>’</dt>
+<dd><p>the PTS (Presentation TimeStamp) of the filtered video frame,
+expressed in <var>TB</var> units, NAN if undefined
+</p>
+</dd>
+<dt> ‘<samp>t</samp>’</dt>
+<dd><p>the PTS (Presentation TimeStamp) of the filtered video frame,
+expressed in seconds, NAN if undefined
+</p>
+</dd>
+<dt> ‘<samp>prev_pts</samp>’</dt>
+<dd><p>the PTS of the previously filtered video frame, NAN if undefined
+</p>
+</dd>
+<dt> ‘<samp>prev_selected_pts</samp>’</dt>
+<dd><p>the PTS of the last previously selected video frame, NAN if undefined
+</p>
+</dd>
+<dt> ‘<samp>prev_selected_t</samp>’</dt>
+<dd><p>the PTS of the last previously selected video frame, NAN if undefined
+</p>
+</dd>
+<dt> ‘<samp>start_pts</samp>’</dt>
+<dd><p>the PTS of the first video frame in the video, NAN if undefined
+</p>
+</dd>
+<dt> ‘<samp>start_t</samp>’</dt>
+<dd><p>the time of the first video frame in the video, NAN if undefined
+</p>
+</dd>
+<dt> ‘<samp>pict_type</samp>’</dt>
+<dd><p>the picture type of the filtered frame, can assume one of the following
+values:
+</p><dl compact="compact">
+<dt> ‘<samp>PICT_TYPE_I</samp>’</dt>
+<dt> ‘<samp>PICT_TYPE_P</samp>’</dt>
+<dt> ‘<samp>PICT_TYPE_B</samp>’</dt>
+<dt> ‘<samp>PICT_TYPE_S</samp>’</dt>
+<dt> ‘<samp>PICT_TYPE_SI</samp>’</dt>
+<dt> ‘<samp>PICT_TYPE_SP</samp>’</dt>
+<dt> ‘<samp>PICT_TYPE_BI</samp>’</dt>
+</dl>
+
+</dd>
+<dt> ‘<samp>interlace_type</samp>’</dt>
+<dd><p>the frame interlace type, can assume one of the following values:
+</p><dl compact="compact">
+<dt> ‘<samp>INTERLACE_TYPE_P</samp>’</dt>
+<dd><p>the frame is progressive (not interlaced)
+</p></dd>
+<dt> ‘<samp>INTERLACE_TYPE_T</samp>’</dt>
+<dd><p>the frame is top-field-first
+</p></dd>
+<dt> ‘<samp>INTERLACE_TYPE_B</samp>’</dt>
+<dd><p>the frame is bottom-field-first
+</p></dd>
+</dl>
+
+</dd>
+<dt> ‘<samp>key</samp>’</dt>
+<dd><p>1 if the filtered frame is a key-frame, 0 otherwise
+</p>
+</dd>
+<dt> ‘<samp>pos</samp>’</dt>
+<dd><p>the position in the file of the filtered frame, -1 if the information
+is not available (e.g. for synthetic video)
+</p></dd>
+</dl>
+
+<p>The default value of the select expression is "1".
+</p>
+<p>Some examples follow:
+</p>
+<table><tr><td> </td><td><pre class="example"># select all frames in input
+select
+
+# the above is the same as:
+select=1
+
+# skip all frames:
+select=0
+
+# select only I-frames
+select='eq(pict_type\,PICT_TYPE_I)'
+
+# select one frame every 100
+select='not(mod(n\,100))'
+
+# select only frames contained in the 10-20 time interval
+select='gte(t\,10)*lte(t\,20)'
+
+# select only I frames contained in the 10-20 time interval
+select='gte(t\,10)*lte(t\,20)*eq(pict_type\,PICT_TYPE_I)'
+
+# select frames with a minimum distance of 10 seconds
+select='isnan(prev_selected_t)+gte(t-prev_selected_t\,10)'
+</pre></td></tr></table>
+
+<p><a name="setdar"></a>
+</p><a name="setdar-1"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-setdar-1">22.27 setdar</a></h2>
+
+<p>Set the Display Aspect Ratio for the filter output video.
+</p>
+<p>This is done by changing the specified Sample (aka Pixel) Aspect
+Ratio, according to the following equation:
+<em>DAR = HORIZONTAL_RESOLUTION / VERTICAL_RESOLUTION * SAR</em>
+</p>
+<p>Keep in mind that this filter does not modify the pixel dimensions of
+the video frame. Also the display aspect ratio set by this filter may
+be changed by later filters in the filterchain, e.g. in case of
+scaling or if another "setdar" or a "setsar" filter is applied.
+</p>
+<p>The filter accepts a parameter string which represents the wanted
+display aspect ratio.
+The parameter can be a floating point number string, or an expression
+of the form <var>num</var>:<var>den</var>, where <var>num</var> and <var>den</var> are the
+numerator and denominator of the aspect ratio.
+If the parameter is not specified, the value "0:1" is assumed.
+</p>
+<p>For example to change the display aspect ratio to 16:9, specify:
+</p><table><tr><td> </td><td><pre class="example">setdar=16:9
+# the above is equivalent to
+setdar=1.77777
+</pre></td></tr></table>
+
+<p>See also the <a href="#setsar">setsar</a> filter documentation.
+</p>
+<a name="setpts"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-setpts">22.28 setpts</a></h2>
+
+<p>Change the PTS (presentation timestamp) of the input video frames.
+</p>
+<p>It accepts an expression evaluated through the eval API, which
+can contain the following constants:
+</p>
+<dl compact="compact">
+<dt> ‘<samp>PTS</samp>’</dt>
+<dd><p>the presentation timestamp in input
+</p>
+</dd>
+<dt> ‘<samp>PI</samp>’</dt>
+<dd><p>Greek PI
+</p>
+</dd>
+<dt> ‘<samp>PHI</samp>’</dt>
+<dd><p>golden ratio
+</p>
+</dd>
+<dt> ‘<samp>E</samp>’</dt>
+<dd><p>Euler number
+</p>
+</dd>
+<dt> ‘<samp>N</samp>’</dt>
+<dd><p>the count of the input frame, starting from 0.
+</p>
+</dd>
+<dt> ‘<samp>STARTPTS</samp>’</dt>
+<dd><p>the PTS of the first video frame
+</p>
+</dd>
+<dt> ‘<samp>INTERLACED</samp>’</dt>
+<dd><p>tell if the current frame is interlaced
+</p>
+</dd>
+<dt> ‘<samp>POS</samp>’</dt>
+<dd><p>the original position of the frame in the file, or undefined if the
+information is not available for the current frame
+</p>
+</dd>
+<dt> ‘<samp>PREV_INPTS</samp>’</dt>
+<dd><p>previous input PTS
+</p>
+</dd>
+<dt> ‘<samp>PREV_OUTPTS</samp>’</dt>
+<dd><p>previous output PTS
+</p>
+</dd>
+<dt> ‘<samp>TB</samp>’</dt>
+<dd><p>the timebase of the input timestamps
+</p>
+</dd>
+</dl>
+
+<p>Some examples follow:
+</p>
+<table><tr><td> </td><td><pre class="example"># start counting PTS from zero
+setpts=PTS-STARTPTS
+
+# fast motion
+setpts=0.5*PTS
+
+# slow motion
+setpts=2.0*PTS
+
+# fixed rate 25 fps
+setpts=N/(25*TB)
+
+# fixed rate 25 fps with some jitter
+setpts='1/(25*TB) * (N + 0.05 * sin(N*2*PI/25))'
+</pre></td></tr></table>
+
+<p><a name="setsar"></a>
+</p><a name="setsar-1"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-setsar-1">22.29 setsar</a></h2>
+
+<p>Set the Sample (aka Pixel) Aspect Ratio for the filter output video.
+</p>
+<p>Note that as a consequence of the application of this filter, the
+output display aspect ratio will change according to the following
+equation:
+<em>DAR = HORIZONTAL_RESOLUTION / VERTICAL_RESOLUTION * SAR</em>
+</p>
+<p>Keep in mind that the sample aspect ratio set by this filter may be
+changed by later filters in the filterchain, e.g. if another "setsar"
+or a "setdar" filter is applied.
+</p>
+<p>The filter accepts a parameter string which represents the wanted
+sample aspect ratio.
+The parameter can be a floating point number string, or an expression
+of the form <var>num</var>:<var>den</var>, where <var>num</var> and <var>den</var> are the
+numerator and denominator of the aspect ratio.
+If the parameter is not specified, the value "0:1" is assumed.
+</p>
+<p>For example to change the sample aspect ratio to 10:11, specify:
+</p><table><tr><td> </td><td><pre class="example">setsar=10:11
+</pre></td></tr></table>
+
+<a name="settb"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-settb">22.30 settb</a></h2>
+
+<p>Set the timebase to use for the output frames timestamps.
+It is mainly useful for testing timebase configuration.
+</p>
+<p>It accepts in input an arithmetic expression representing a rational.
+The expression can contain the constants "PI", "E", "PHI", "AVTB" (the
+default timebase), and "intb" (the input timebase).
+</p>
+<p>The default value for the input is "intb".
+</p>
+<p>Some examples follow.
+</p>
+<table><tr><td> </td><td><pre class="example"># set the timebase to 1/25
+settb=1/25
+
+# set the timebase to 1/10
+settb=0.1
+
+# set the timebase to 1001/1000
+settb=1+0.001
+
+# set the timebase to 2*intb
+settb=2*intb
+
+# set the default timebase value
+settb=AVTB
+</pre></td></tr></table>
+
+<a name="showinfo"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-showinfo">22.31 showinfo</a></h2>
+
+<p>Show a line containing various information for each input video frame.
+The input video is not modified.
+</p>
+<p>The shown line contains a sequence of key/value pairs of the form
+<var>key</var>:<var>value</var>.
+</p>
+<p>A description of each shown parameter follows:
+</p>
+<dl compact="compact">
+<dt> ‘<samp>n</samp>’</dt>
+<dd><p>sequential number of the input frame, starting from 0
+</p>
+</dd>
+<dt> ‘<samp>pts</samp>’</dt>
+<dd><p>Presentation TimeStamp of the input frame, expressed as a number of
+time base units. The time base unit depends on the filter input pad.
+</p>
+</dd>
+<dt> ‘<samp>pts_time</samp>’</dt>
+<dd><p>Presentation TimeStamp of the input frame, expressed as a number of
+seconds
+</p>
+</dd>
+<dt> ‘<samp>pos</samp>’</dt>
+<dd><p>position of the frame in the input stream, -1 if this information is
+unavailable and/or meaningless (for example in case of synthetic video)
+</p>
+</dd>
+<dt> ‘<samp>fmt</samp>’</dt>
+<dd><p>pixel format name
+</p>
+</dd>
+<dt> ‘<samp>sar</samp>’</dt>
+<dd><p>sample aspect ratio of the input frame, expressed in the form
+<var>num</var>/<var>den</var>
+</p>
+</dd>
+<dt> ‘<samp>s</samp>’</dt>
+<dd><p>size of the input frame, expressed in the form
+<var>width</var>x<var>height</var>
+</p>
+</dd>
+<dt> ‘<samp>i</samp>’</dt>
+<dd><p>interlaced mode ("P" for "progressive", "T" for top field first, "B"
+for bottom field first)
+</p>
+</dd>
+<dt> ‘<samp>iskey</samp>’</dt>
+<dd><p>1 if the frame is a key frame, 0 otherwise
+</p>
+</dd>
+<dt> ‘<samp>type</samp>’</dt>
+<dd><p>picture type of the input frame ("I" for an I-frame, "P" for a
+P-frame, "B" for a B-frame, "?" for unknown type).
+Check also the documentation of the <code>AVPictureType</code> enum and of
+the <code>av_get_picture_type_char</code> function defined in
+‘<tt>libavutil/avutil.h</tt>’.
+</p>
+</dd>
+<dt> ‘<samp>checksum</samp>’</dt>
+<dd><p>Adler-32 checksum of all the planes of the input frame
+</p>
+</dd>
+<dt> ‘<samp>plane_checksum</samp>’</dt>
+<dd><p>Adler-32 checksum of each plane of the input frame, expressed in the form
+"[<var>c0</var> <var>c1</var> <var>c2</var> <var>c3</var>]"
+</p></dd>
+</dl>
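+
+<p>For example, the per-frame information can be dumped while discarding
+the output (a sketch; the "null" muxer is assumed to be available):
+</p><table><tr><td>&nbsp;</td><td><pre class="example">./ffmpeg -i in.avi -vf "showinfo" -f null -
+</pre></td></tr></table>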
+
+<a name="slicify"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-slicify">22.32 slicify</a></h2>
+
+<p>Pass the images of the input video on to the next video filter as
+multiple slices.
+</p>
+<table><tr><td> </td><td><pre class="example">./ffmpeg -i in.avi -vf "slicify=32" out.avi
+</pre></td></tr></table>
+
+<p>The filter accepts the slice height as parameter. If the parameter is
+not specified it will use the default value of 16.
+</p>
+<p>Adding this at the beginning of filter chains should make filtering
+faster due to better use of the memory cache.
+</p>
+<a name="split"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-split">22.33 split</a></h2>
+
+<p>Pass on the input video to two outputs. Both outputs are identical to
+the input video.
+</p>
+<p>For example:
+</p><table><tr><td> </td><td><pre class="example">[in] split [splitout1][splitout2];
+[splitout1] crop=100:100:0:0 [cropout];
+[splitout2] pad=200:200:100:100 [padout];
+</pre></td></tr></table>
+
+<p>will create two separate outputs from the same input, one cropped and
+one padded.
+</p>
+<a name="transpose"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-transpose">22.34 transpose</a></h2>
+
+<p>Transpose rows with columns in the input video and optionally flip it.
+</p>
+<p>It accepts a parameter representing an integer, which can assume the
+values:
+</p>
+<dl compact="compact">
+<dt> ‘<samp>0</samp>’</dt>
+<dd><p>Rotate by 90 degrees counterclockwise and vertically flip (default), that is:
+</p><table><tr><td> </td><td><pre class="example">L.R L.l
+. . -> . .
+l.r R.r
+</pre></td></tr></table>
+
+</dd>
+<dt> ‘<samp>1</samp>’</dt>
+<dd><p>Rotate by 90 degrees clockwise, that is:
+</p><table><tr><td> </td><td><pre class="example">L.R l.L
+. . -> . .
+l.r r.R
+</pre></td></tr></table>
+
+</dd>
+<dt> ‘<samp>2</samp>’</dt>
+<dd><p>Rotate by 90 degrees counterclockwise, that is:
+</p><table><tr><td> </td><td><pre class="example">L.R R.r
+. . -> . .
+l.r L.l
+</pre></td></tr></table>
+
+</dd>
+<dt> ‘<samp>3</samp>’</dt>
+<dd><p>Rotate by 90 degrees clockwise and vertically flip, that is:
+</p><table><tr><td> </td><td><pre class="example">L.R r.R
+. . -> . .
+l.r l.L
+</pre></td></tr></table>
+</dd>
+</dl>
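+
+<p>For example, to rotate the input by 90 degrees clockwise (a sketch
+using the values described above):
+</p><table><tr><td>&nbsp;</td><td><pre class="example">./ffmpeg -i in.avi -vf "transpose=1" out.avi
+</pre></td></tr></table>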
+
+<a name="unsharp"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-unsharp">22.35 unsharp</a></h2>
+
+<p>Sharpen or blur the input video.
+</p>
+<p>It accepts the following parameters:
+<var>luma_msize_x</var>:<var>luma_msize_y</var>:<var>luma_amount</var>:<var>chroma_msize_x</var>:<var>chroma_msize_y</var>:<var>chroma_amount</var>
+</p>
+<p>Negative values for the amount will blur the input video, while positive
+values will sharpen. All parameters are optional and default to the
+equivalent of the string ’5:5:1.0:0:0:0.0’.
+</p>
+<dl compact="compact">
+<dt> ‘<samp>luma_msize_x</samp>’</dt>
+<dd><p>Set the luma matrix horizontal size. It can be an integer between 3
+and 13, default value is 5.
+</p>
+</dd>
+<dt> ‘<samp>luma_msize_y</samp>’</dt>
+<dd><p>Set the luma matrix vertical size. It can be an integer between 3
+and 13, default value is 5.
+</p>
+</dd>
+<dt> ‘<samp>luma_amount</samp>’</dt>
+<dd><p>Set the luma effect strength. It can be a float number between -2.0
+and 5.0, default value is 1.0.
+</p>
+</dd>
+<dt> ‘<samp>chroma_msize_x</samp>’</dt>
+<dd><p>Set the chroma matrix horizontal size. It can be an integer between 3
+and 13, default value is 0.
+</p>
+</dd>
+<dt> ‘<samp>chroma_msize_y</samp>’</dt>
+<dd><p>Set the chroma matrix vertical size. It can be an integer between 3
+and 13, default value is 0.
+</p>
+</dd>
+<dt> ‘<samp>chroma_amount</samp>’</dt>
+<dd><p>Set the chroma effect strength. It can be a float number between -2.0
+and 5.0, default value is 0.0.
+</p>
+</dd>
+</dl>
+
+<table><tr><td> </td><td><pre class="example"># Strong luma sharpen effect parameters
+unsharp=7:7:2.5
+
+# Strong blur of both luma and chroma parameters
+unsharp=7:7:-2:7:7:-2
+
+# Use the default values with <code>ffmpeg</code>
+./ffmpeg -i in.avi -vf "unsharp" out.mp4
+</pre></td></tr></table>
+
+<a name="vflip"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-vflip">22.36 vflip</a></h2>
+
+<p>Flip the input video vertically.
+</p>
+<table><tr><td> </td><td><pre class="example">./ffmpeg -i in.avi -vf "vflip" out.avi
+</pre></td></tr></table>
+
+<a name="yadif"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-yadif">22.37 yadif</a></h2>
+
+<p>Deinterlace the input video ("yadif" means "yet another deinterlacing
+filter").
+</p>
+<p>It accepts the optional parameters: <var>mode</var>:<var>parity</var>:<var>auto</var>.
+</p>
+<p><var>mode</var> specifies the interlacing mode to adopt, accepts one of the
+following values:
+</p>
+<dl compact="compact">
+<dt> ‘<samp>0</samp>’</dt>
+<dd><p>output 1 frame for each frame
+</p></dd>
+<dt> ‘<samp>1</samp>’</dt>
+<dd><p>output 1 frame for each field
+</p></dd>
+<dt> ‘<samp>2</samp>’</dt>
+<dd><p>like 0 but skips spatial interlacing check
+</p></dd>
+<dt> ‘<samp>3</samp>’</dt>
+<dd><p>like 1 but skips spatial interlacing check
+</p></dd>
+</dl>
+
+<p>Default value is 0.
+</p>
+<p><var>parity</var> specifies the picture field parity assumed for the input
+interlaced video, accepts one of the following values:
+</p>
+<dl compact="compact">
+<dt> ‘<samp>0</samp>’</dt>
+<dd><p>assume bottom field first
+</p></dd>
+<dt> ‘<samp>1</samp>’</dt>
+<dd><p>assume top field first
+</p></dd>
+<dt> ‘<samp>-1</samp>’</dt>
+<dd><p>enable automatic detection
+</p></dd>
+</dl>
+
+<p>Default value is -1.
+If interlacing is unknown or the decoder does not export this information,
+top field first will be assumed.
+</p>
+<p><var>auto</var> specifies whether the deinterlacer should trust the
+interlaced flag and only deinterlace frames marked as interlaced.
+</p>
+<dl compact="compact">
+<dt> ‘<samp>0</samp>’</dt>
+<dd><p>deinterlace all frames
+</p></dd>
+<dt> ‘<samp>1</samp>’</dt>
+<dd><p>only deinterlace frames marked as interlaced
+</p></dd>
+</dl>
+
+<p>Default value is 0.
+</p>
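+
+<p>For example, to deinterlace all frames, outputting one frame for each
+frame with automatic parity detection (a sketch using the default
+values described above):
+</p><table><tr><td>&nbsp;</td><td><pre class="example">./ffmpeg -i in.avi -vf "yadif=0:-1:0" out.avi
+</pre></td></tr></table>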
+
+<a name="Video-Sources"></a>
+<h1 class="chapter"><a href="ffmpeg.html#toc-Video-Sources">23. Video Sources</a></h1>
+
+<p>Below is a description of the currently available video sources.
+</p>
+<a name="buffer"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-buffer">23.1 buffer</a></h2>
+
+<p>Buffer video frames, and make them available to the filter chain.
+</p>
+<p>This source is mainly intended for a programmatic use, in particular
+through the interface defined in ‘<tt>libavfilter/vsrc_buffer.h</tt>’.
+</p>
+<p>It accepts the following parameters:
+<var>width</var>:<var>height</var>:<var>pix_fmt_string</var>:<var>timebase_num</var>:<var>timebase_den</var>:<var>sample_aspect_ratio_num</var>:<var>sample_aspect_ratio_den</var>:<var>scale_params</var>
+</p>
+<p>All the parameters but <var>scale_params</var> need to be explicitly
+defined.
+</p>
+<p>The list of the accepted parameters follows.
+</p>
+<dl compact="compact">
+<dt> ‘<samp>width, height</samp>’</dt>
+<dd><p>Specify the width and height of the buffered video frames.
+</p>
+</dd>
+<dt> ‘<samp>pix_fmt_string</samp>’</dt>
+<dd><p>A string representing the pixel format of the buffered video frames.
+It may be a number corresponding to a pixel format, or a pixel format
+name.
+</p>
+</dd>
+<dt> ‘<samp>timebase_num, timebase_den</samp>’</dt>
+<dd><p>Specify numerator and denominator of the timebase assumed by the
+timestamps of the buffered frames.
+</p>
+</dd>
+<dt> ‘<samp>sample_aspect_ratio_num, sample_aspect_ratio_den</samp>’</dt>
+<dd><p>Specify numerator and denominator of the sample aspect ratio assumed
+by the video frames.
+</p>
+</dd>
+<dt> ‘<samp>scale_params</samp>’</dt>
+<dd><p>Specify the optional parameters to be used for the scale filter which
+is automatically inserted when a change in the input size or format is
+detected.
+</p></dd>
+</dl>
+
+<p>For example:
+</p><table><tr><td> </td><td><pre class="example">buffer=320:240:yuv410p:1:24:1:1
+</pre></td></tr></table>
+
+<p>will instruct the source to accept video frames with size 320x240 and
+with format "yuv410p", assuming 1/24 as the timestamps timebase and
+square pixels (1:1 sample aspect ratio).
+Since the pixel format with name "yuv410p" corresponds to the number 6
+(check the enum PixelFormat definition in ‘<tt>libavutil/pixfmt.h</tt>’),
+this example corresponds to:
+</p><table><tr><td> </td><td><pre class="example">buffer=320:240:6:1:24:1:1
+</pre></td></tr></table>
+
+<a name="color"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-color">23.2 color</a></h2>
+
+<p>Provide a uniformly colored input.
+</p>
+<p>It accepts the following parameters:
+<var>color</var>:<var>frame_size</var>:<var>frame_rate</var>
+</p>
+<p>A description of the accepted parameters follows.
+</p>
+<dl compact="compact">
+<dt> ‘<samp>color</samp>’</dt>
+<dd><p>Specify the color of the source. It can be the name of a color (case
+insensitive match) or a 0xRRGGBB[AA] sequence, possibly followed by an
+alpha specifier. The default value is "black".
+</p>
+</dd>
+<dt> ‘<samp>frame_size</samp>’</dt>
+<dd><p>Specify the size of the sourced video. It may be a string of the form
+<var>width</var>x<var>height</var>, or the name of a size abbreviation. The
+default value is "320x240".
+</p>
+</dd>
+<dt> ‘<samp>frame_rate</samp>’</dt>
+<dd><p>Specify the frame rate of the sourced video, as the number of frames
+generated per second. It has to be a string in the format
+<var>frame_rate_num</var>/<var>frame_rate_den</var>, an integer number, a float
+number or a valid video frame rate abbreviation. The default value is
+"25".
+</p>
+</dd>
+</dl>
+
+<p>For example the following graph description will generate a red source
+with an opacity of 0.2, with size "qcif" and a frame rate of 10
+frames per second, which will be overlaid over the source connected
+to the pad with identifier "in".
+</p>
+<table><tr><td> </td><td><pre class="example">"color=red@0.2:qcif:10 [color]; [in][color] overlay [out]"
+</pre></td></tr></table>
+
+<a name="movie"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-movie">23.3 movie</a></h2>
+
+<p>Read a video stream from a movie container.
+</p>
+<p>It accepts the syntax: <var>movie_name</var>[:<var>options</var>] where
+<var>movie_name</var> is the name of the resource to read (not necessarily
+a file but also a device or a stream accessed through some protocol),
+and <var>options</var> is an optional sequence of <var>key</var>=<var>value</var>
+pairs, separated by ":".
+</p>
+<p>The description of the accepted options follows.
+</p>
+<dl compact="compact">
+<dt> ‘<samp>format_name, f</samp>’</dt>
+<dd><p>Specifies the format assumed for the movie to read; it can be either
+the name of a container or an input device. If not specified, the
+format is guessed from <var>movie_name</var> or by probing.
+</p>
+</dd>
+<dt> ‘<samp>seek_point, sp</samp>’</dt>
+<dd><p>Specifies the seek point in seconds. Frames will be output
+starting from this seek point. The parameter is evaluated with
+<code>av_strtod</code>, so the numerical value may be suffixed by an SI
+postfix. The default value is "0".
+</p>
+</dd>
+<dt> ‘<samp>stream_index, si</samp>’</dt>
+<dd><p>Specifies the index of the video stream to read. If the value is -1,
+the best suited video stream will be automatically selected. Default
+value is "-1".
+</p>
+</dd>
+</dl>
+
+<p>This filter makes it possible to overlay a second video on top of the
+main input of a filtergraph, as shown in this graph:
+</p><table><tr><td> </td><td><pre class="example">input -----------> deltapts0 --> overlay --> output
+ ^
+ |
+movie --> scale--> deltapts1 -------+
+</pre></td></tr></table>
+
+<p>Some examples follow:
+</p><table><tr><td> </td><td><pre class="example"># skip 3.2 seconds from the start of the avi file in.avi, and overlay it
+# on top of the input labelled as "in".
+movie=in.avi:seek_point=3.2, scale=180:-1, setpts=PTS-STARTPTS [movie];
+[in] setpts=PTS-STARTPTS, [movie] overlay=16:16 [out]
+
+# read from a video4linux2 device, and overlay it on top of the input
+# labelled as "in"
+movie=/dev/video0:f=video4linux2, scale=180:-1, setpts=PTS-STARTPTS [movie];
+[in] setpts=PTS-STARTPTS, [movie] overlay=16:16 [out]
+
+</pre></td></tr></table>
+
+<a name="nullsrc"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-nullsrc">23.4 nullsrc</a></h2>
+
+<p>Null video source: it never returns images. It is mainly useful as a
+template and for use in analysis / debugging tools.
+</p>
+<p>It accepts an optional parameter, a string of the form
+<var>width</var>:<var>height</var>:<var>timebase</var>.
+</p>
+<p><var>width</var> and <var>height</var> specify the size of the configured
+source. The default values of <var>width</var> and <var>height</var> are
+respectively 352 and 288 (corresponding to the CIF size format).
+</p>
+<p><var>timebase</var> specifies an arithmetic expression representing a
+timebase. The expression can contain the constants "PI", "E", "PHI",
+"AVTB" (the default timebase), and defaults to the value "AVTB".
+</p>
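+<p>For example, the following generates a 352x288 (CIF) null source with a
+timebase of 1/30 (the timebase value here is an arbitrary illustration,
+not a recommended setting):
+</p><table><tr><td>&nbsp;</td><td><pre class="example">nullsrc=352:288:1/30
+</pre></td></tr></table>
+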
+<a name="frei0r_005fsrc"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-frei0r_005fsrc">23.5 frei0r_src</a></h2>
+
+<p>Provide a frei0r source.
+</p>
+<p>To enable compilation of this filter you need to install the frei0r
+header and configure FFmpeg with <code>--enable-frei0r</code>.
+</p>
+<p>The source supports the syntax:
+</p><table><tr><td> </td><td><pre class="example"><var>size</var>:<var>rate</var>:<var>src_name</var>[{=|:}<var>param1</var>:<var>param2</var>:...:<var>paramN</var>]
+</pre></td></tr></table>
+
+<p><var>size</var> is the size of the video to generate; it may be a string of the
+form <var>width</var>x<var>height</var> or a frame size abbreviation.
+<var>rate</var> is the rate of the video to generate; it may be a string of
+the form <var>num</var>/<var>den</var> or a frame rate abbreviation.
+<var>src_name</var> is the name of the frei0r source to load. For more
+information regarding frei0r and how to set the parameters read the
+section <a href="#frei0r">frei0r</a> in the description of the video filters.
+</p>
+<p>An example follows:
+</p><table><tr><td>&nbsp;</td><td><pre class="example"># generate a frei0r partik0l source with size 200x200 and frame rate 10,
+# which is overlaid on the overlay filter's main input
+frei0r_src=200x200:10:partik0l=1234 [overlay]; [in][overlay] overlay
+</pre></td></tr></table>
+
+<a name="rgbtestsrc_002c-testsrc"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-rgbtestsrc_002c-testsrc">23.6 rgbtestsrc, testsrc</a></h2>
+
+<p>The <code>rgbtestsrc</code> source generates an RGB test pattern useful for
+detecting RGB vs BGR issues. You should see a red, green and blue
+stripe from top to bottom.
+</p>
+<p>The <code>testsrc</code> source generates a test video pattern, showing a
+color pattern, a scrolling gradient and a timestamp. This is mainly
+intended for testing purposes.
+</p>
+<p>Both sources accept an optional sequence of <var>key</var>=<var>value</var> pairs,
+separated by ":". The description of the accepted options follows.
+</p>
+<dl compact="compact">
+<dt> ‘<samp>size, s</samp>’</dt>
+<dd><p>Specify the size of the sourced video. It may be a string of the form
+<var>width</var>x<var>height</var>, or the name of a size abbreviation. The
+default value is "320x240".
+</p>
+</dd>
+<dt> ‘<samp>rate, r</samp>’</dt>
+<dd><p>Specify the frame rate of the sourced video, as the number of frames
+generated per second. It must be a string in the format
+<var>frame_rate_num</var>/<var>frame_rate_den</var>, an integer, a floating-point
+number, or a valid video frame rate abbreviation. The default value is
+"25".
+</p>
+</dd>
+<dt> ‘<samp>duration</samp>’</dt>
+<dd><p>Set the duration of the sourced video. The accepted syntax is:
+</p><table><tr><td> </td><td><pre class="example">[-]HH[:MM[:SS[.m...]]]
+[-]S+[.m...]
+</pre></td></tr></table>
+<p>See also the function <code>av_parse_time()</code>.
+</p>
+<p>If not specified, or if the expressed duration is negative, the video is
+generated forever.
+</p></dd>
+</dl>
+
+<p>For example, the following:
+</p><table><tr><td> </td><td><pre class="example">testsrc=duration=5.3:size=qcif:rate=10
+</pre></td></tr></table>
+
+<p>will generate a video with a duration of 5.3 seconds, a size of
+176x144 and a frame rate of 10 frames per second.
+</p>
+
+<a name="Video-Sinks"></a>
+<h1 class="chapter"><a href="ffmpeg.html#toc-Video-Sinks">24. Video Sinks</a></h1>
+
+<p>Below is a description of the currently available video sinks.
+</p>
+<a name="buffersink"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-buffersink">24.1 buffersink</a></h2>
+
+<p>Buffer video frames, and make them available to the end of the filter
+graph.
+</p>
+<p>This sink is mainly intended for programmatic use, in particular
+through the interface defined in &lsquo;<tt>libavfilter/vsink_buffer.h</tt>&rsquo;.
+</p>
+<p>It takes no input string parameter; instead, you must pass a
+pointer to a list of supported pixel formats, terminated by -1, in the
+opaque parameter provided to <code>avfilter_init_filter</code>
+when initializing this sink.
+</p>
+<a name="nullsink"></a>
+<h2 class="section"><a href="ffmpeg.html#toc-nullsink">24.2 nullsink</a></h2>
+
+<p>Null video sink: it does absolutely nothing with the input video. It is
+mainly useful as a template and for use in analysis / debugging
+tools.
+</p>
+
+<a name="Metadata"></a>
+<h1 class="chapter"><a href="ffmpeg.html#toc-Metadata">25. Metadata</a></h1>
+
+<p>FFmpeg is able to dump metadata from media files into a simple UTF-8-encoded
+INI-like text file and then load it back using the metadata muxer/demuxer.
+</p>
+<p>The file format is as follows:
+</p><ol>
+<li>
+A file consists of a header and a number of metadata tags divided into sections,
+each on its own line.
+
+</li><li>
+The header is a &rsquo;;FFMETADATA&rsquo; string, followed by a version number (currently 1).
+
+</li><li>
+Metadata tags are of the form &rsquo;key=value&rsquo;.
+
+</li><li>
+Immediately after the header follows the global metadata.
+
+</li><li>
+After global metadata there may be sections with per-stream/per-chapter
+metadata.
+
+</li><li>
+A section starts with the section name in uppercase (e.g. STREAM or CHAPTER) in
+brackets (&rsquo;[&rsquo;, &rsquo;]&rsquo;) and ends with the next section or the end of the file.
+
+</li><li>
+At the beginning of a chapter section there may be an optional timebase to be
+used for start/end values. It must be in the form &rsquo;TIMEBASE=num/den&rsquo;, where num and
+den are integers. If the timebase is missing, then start/end times are assumed to
+be in milliseconds.
+Next, a chapter section must contain the chapter start and end times in the form
+&rsquo;START=num&rsquo; and &rsquo;END=num&rsquo;, where num is a positive integer.
+
+</li><li>
+Empty lines and lines starting with ’;’ or ’#’ are ignored.
+
+</li><li>
+Metadata keys or values containing special characters (’=’, ’;’, ’#’, ’\’ and a
+newline) must be escaped with a backslash ’\’.
+
+</li><li>
+Note that whitespace in metadata (e.g. foo = bar) is considered to be part of
+the tag (in the example above the key is &rsquo;foo &rsquo; and the value is &rsquo; bar&rsquo;).
+</li></ol>
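<p>The escaping rule above can be sketched with a small Python helper
(a hypothetical illustration, not part of FFmpeg):
</p>

```python
# Sketch of the ffmetadata escaping rule: the characters '=', ';',
# '#', '\' and newline must be escaped with a backslash '\'.
SPECIAL = set("=;#\\\n")

def ffmeta_escape(text):
    """Escape a metadata key or value for an ffmetadata file."""
    return "".join("\\" + ch if ch in SPECIAL else ch for ch in text)

def ffmeta_tag(key, value):
    """Render one escaped 'key=value' metadata line."""
    return ffmeta_escape(key) + "=" + ffmeta_escape(value)
```

<p>For instance, a value containing a literal backslash, such as
"bike\shed", is rendered as "bike\\shed", as in the sample file below.
</p>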
+
+<p>An ffmetadata file might look like this:
+</p><table><tr><td> </td><td><pre class="example">;FFMETADATA1
+title=bike\\shed
+;this is a comment
+artist=FFmpeg troll team
+
+[CHAPTER]
+TIMEBASE=1/1000
+START=0
+#chapter ends at 0:01:00
+END=60000
+title=chapter \#1
+[STREAM]
+title=multi\
+line
+</pre></td></tr></table>
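+
+<p>Such a file is typically produced and consumed with the ffmetadata
+muxer/demuxer, along these lines (INPUT, FFMETADATAFILE and OUTPUT are
+placeholders, and the exact option syntax may vary between FFmpeg
+versions):
+</p><table><tr><td>&nbsp;</td><td><pre class="example"># dump metadata from INPUT to an ffmetadata text file
+ffmpeg -i INPUT -f ffmetadata FFMETADATAFILE
+
+# load it back while copying the streams unchanged
+ffmpeg -i INPUT -i FFMETADATAFILE -map_metadata 1 -codec copy OUTPUT
+</pre></td></tr></table>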
+
+
+<hr size="1">
+<p>
+ <font size="-1">
+ This document was generated by <em>Kyle Schwarz</em> on <em>July 23, 2011</em> using <a href="http://www.nongnu.org/texi2html/"><em>texi2html 1.82</em></a>.
+ </font>
+ <br>
+
+</p>
+</body>
+</html>