1 @chapter Filtergraph description
2 @c man begin FILTERGRAPH DESCRIPTION
4 A filtergraph is a directed graph of connected filters. It can contain
5 cycles, and there can be multiple links between a pair of
6 filters. Each link has one input pad on one side connecting it to one
7 filter from which it takes its input, and one output pad on the other
8 side connecting it to the one filter accepting its output.
10 Each filter in a filtergraph is an instance of a filter class
11 registered in the application, which defines the features and the
12 number of input and output pads of the filter.
14 A filter with no input pads is called a "source", a filter with no
15 output pads is called a "sink".
17 @section Filtergraph syntax
19 A filtergraph can be represented using a textual representation, which
20 is recognized by the @code{-vf} and @code{-af} options of the ff*
21 tools, and by the @code{avfilter_graph_parse()} function defined in
22 @file{libavfilter/avfiltergraph.h}.
24 A filterchain consists of a sequence of connected filters, each one
25 connected to the previous one in the sequence. A filterchain is
26 represented by a list of ","-separated filter descriptions.
28 A filtergraph consists of a sequence of filterchains. A sequence of
29 filterchains is represented by a list of ";"-separated filterchain
32 A filter is represented by a string of the form:
33 [@var{in_link_1}]...[@var{in_link_N}]@var{filter_name}=@var{arguments}[@var{out_link_1}]...[@var{out_link_M}]
35 @var{filter_name} is the name of the filter class of which the
36 described filter is an instance of, and has to be the name of one of
37 the filter classes registered in the program.
38 The name of the filter class is optionally followed by a string
41 @var{arguments} is a string which contains the parameters used to
42 initialize the filter instance, and are described in the filter
45 The list of arguments can be quoted using the character "'" as initial
46 and ending mark, and the character '\' for escaping the characters
47 within the quoted text; otherwise the argument string is considered
48 terminated when the next special character (belonging to the set
49 "[]=;,") is encountered.
51 The name and arguments of the filter are optionally preceded and
52 followed by a list of link labels.
53 A link label allows to name a link and associate it to a filter output
54 or input pad. The preceding labels @var{in_link_1}
55 ... @var{in_link_N}, are associated to the filter input pads,
56 the following labels @var{out_link_1} ... @var{out_link_M}, are
57 associated to the output pads.
59 When two link labels with the same name are found in the
60 filtergraph, a link between the corresponding input and output pad is
63 If an output pad is not labelled, it is linked by default to the first
64 unlabelled input pad of the next filter in the filterchain.
65 For example in the filterchain:
67 nullsrc, split[L1], [L2]overlay, nullsink
69 the split filter instance has two output pads, and the overlay filter
70 instance two input pads. The first output pad of split is labelled
71 "L1", the first input pad of overlay is labelled "L2", and the second
72 output pad of split is linked to the second input pad of overlay,
73 which are both unlabelled.
75 In a complete filterchain all the unlabelled filter input and output
76 pads must be connected. A filtergraph is considered valid if all the
77 filter input and output pads of all the filterchains are connected.
79 Follows a BNF description for the filtergraph syntax:
81 @var{NAME} ::= sequence of alphanumeric characters and '_'
82 @var{LINKLABEL} ::= "[" @var{NAME} "]"
83 @var{LINKLABELS} ::= @var{LINKLABEL} [@var{LINKLABELS}]
84 @var{FILTER_ARGUMENTS} ::= sequence of chars (eventually quoted)
85 @var{FILTER} ::= [@var{LINKNAMES}] @var{NAME} ["=" @var{ARGUMENTS}] [@var{LINKNAMES}]
86 @var{FILTERCHAIN} ::= @var{FILTER} [,@var{FILTERCHAIN}]
87 @var{FILTERGRAPH} ::= @var{FILTERCHAIN} [;@var{FILTERGRAPH}]
90 @c man end FILTERGRAPH DESCRIPTION
92 @chapter Audio Filters
93 @c man begin AUDIO FILTERS
95 When you configure your FFmpeg build, you can disable any of the
96 existing filters using --disable-filters.
97 The configure output will show the audio filters included in your
100 Below is a description of the currently available audio filters.
104 Convert the input audio format to the specified formats.
106 The filter accepts a string of the form:
107 "@var{sample_format}:@var{channel_layout}:@var{packing_format}".
109 @var{sample_format} specifies the sample format, and can be a string or
110 the corresponding numeric value defined in @file{libavutil/samplefmt.h}.
112 @var{channel_layout} specifies the channel layout, and can be a string
113 or the corresponding number value defined in @file{libavutil/audioconvert.h}.
115 @var{packing_format} specifies the type of packing in output, can be one
116 of "planar" or "packed", or the corresponding numeric values "0" or "1".
118 The special parameter "auto", signifies that the filter will
119 automatically select the output format depending on the output filter.
121 Some examples follow.
125 Convert input to unsigned 8-bit, stereo, packed:
127 aconvert=u8:stereo:packed
131 Convert input to unsigned 8-bit, automatically select out channel layout
134 aconvert=u8:auto:auto
140 Convert the input audio to one of the specified formats. The framework will
141 negotiate the most appropriate format to minimize conversions.
143 The filter accepts three lists of formats, separated by ":", in the form:
144 "@var{sample_formats}:@var{channel_layouts}:@var{packing_formats}".
146 Elements in each list are separated by "," which has to be escaped in the
147 filtergraph specification.
149 The special parameter "all", in place of a list of elements, signifies all
152 Some examples follow:
154 aformat=u8\\,s16:mono:packed
156 aformat=s16:mono\\,stereo:all
161 Pass the audio source unchanged to the output.
165 Resample the input audio to the specified sample rate.
167 The filter accepts exactly one parameter, the output sample rate. If not
168 specified then the filter will automatically convert between its input
169 and output sample rates.
171 For example, to resample the input audio to 44100Hz:
178 Show a line containing various information for each input audio frame.
179 The input audio is not modified.
181 The shown line contains a sequence of key/value pairs of the form
182 @var{key}:@var{value}.
184 A description of each shown parameter follows:
188 sequential number of the input frame, starting from 0
191 presentation TimeStamp of the input frame, expressed as a number of
192 time base units. The time base unit depends on the filter input pad, and
193 is usually 1/@var{sample_rate}.
196 presentation TimeStamp of the input frame, expressed as a number of
200 position of the frame in the input stream, -1 if this information in
201 unavailable and/or meaningless (for example in case of synthetic audio)
207 channel layout description
210 number of samples (per each channel) contained in the filtered frame
213 sample rate for the audio frame
216 if the packing format is planar, 0 if packed
219 Adler-32 checksum (printed in hexadecimal) of all the planes of the input frame
222 Adler-32 checksum (printed in hexadecimal) for each input frame plane,
223 expressed in the form "[@var{c0} @var{c1} @var{c2} @var{c3} @var{c4} @var{c5}
229 Make audio easier to listen to on headphones.
231 This filter adds `cues' to 44.1kHz stereo (i.e. audio CD format) audio
232 so that when listened to on headphones the stereo image is moved from
233 inside your head (standard for headphones) to outside and in front of
234 the listener (standard for speakers).
240 Mix channels with specific gain levels. The filter accepts the output
241 channel layout followed by a set of channels definitions.
243 The filter accepts parameters of the form:
244 "@var{l}:@var{outdef}:@var{outdef}:..."
248 output channel layout or number of channels
251 output channel specification, of the form:
252 "@var{out_name}=[@var{gain}*]@var{in_name}[+[@var{gain}*]@var{in_name}...]"
255 output channel to define, either a channel name (FL, FR, etc.) or a channel
256 number (c0, c1, etc.)
259 multiplicative coefficient for the channel, 1 leaving the volume unchanged
262 input channel to use, see out_name for details; it is not possible to mix
263 named and numbered input channels
266 If the `=' in a channel specification is replaced by `<', then the gains for
267 that specification will be renormalized so that the total is 1, thus
268 avoiding clipping noise.
270 For example, if you want to down-mix from stereo to mono, but with a bigger
271 factor for the left channel:
273 pan=1:c0=0.9*c0+0.1*c1
276 A customized down-mix to stereo that works automatically for 3-, 4-, 5- and
279 pan=stereo: FL < FL + 0.5*FC + 0.6*BL + 0.6*SL : FR < FR + 0.5*FC + 0.6*BR + 0.6*SR
282 Note that @file{ffmpeg} integrates a default down-mix (and up-mix) system
283 that should be preferred (see "-ac" option) unless you have very specific
288 Adjust the input audio volume.
290 The filter accepts exactly one parameter @var{vol}, which expresses
291 how the audio volume will be increased or decreased.
293 Output values are clipped to the maximum value.
295 If @var{vol} is expressed as a decimal number, and the output audio
296 volume is given by the relation:
298 @var{output_volume} = @var{vol} * @var{input_volume}
301 If @var{vol} is expressed as a decimal number followed by the string
302 "dB", the value represents the requested change in decibels of the
303 input audio power, and the output audio volume is given by the
306 @var{output_volume} = 10^(@var{vol}/20) * @var{input_volume}
309 Otherwise @var{vol} is considered an expression and its evaluated
310 value is used for computing the output audio volume according to the
313 Default value for @var{vol} is 1.0.
319 Half the input audio volume:
324 The above example is equivalent to:
330 Decrease input audio power by 12 decibels:
336 @c man end AUDIO FILTERS
338 @chapter Audio Sources
339 @c man begin AUDIO SOURCES
341 Below is a description of the currently available audio sources.
345 Buffer audio frames, and make them available to the filter chain.
347 This source is mainly intended for a programmatic use, in particular
348 through the interface defined in @file{libavfilter/asrc_abuffer.h}.
350 It accepts the following mandatory parameters:
351 @var{sample_rate}:@var{sample_fmt}:@var{channel_layout}:@var{packing}
356 The sample rate of the incoming audio buffers.
359 The sample format of the incoming audio buffers.
360 Either a sample format name or its corresponging integer representation from
361 the enum AVSampleFormat in @file{libavutil/samplefmt.h}
364 The channel layout of the incoming audio buffers.
365 Either a channel layout name from channel_layout_map in
366 @file{libavutil/audioconvert.c} or its corresponding integer representation
367 from the AV_CH_LAYOUT_* macros in @file{libavutil/audioconvert.h}
370 Either "packed" or "planar", or their integer representation: 0 or 1
377 abuffer=44100:s16:stereo:planar
380 will instruct the source to accept planar 16bit signed stereo at 44100Hz.
381 Since the sample format with name "s16" corresponds to the number
382 1 and the "stereo" channel layout corresponds to the value 3, this is
390 Generate an audio signal specified by an expression.
392 This source accepts in input one or more expressions (one for each
393 channel), which are evaluated and used to generate a corresponding
396 It accepts the syntax: @var{exprs}[::@var{options}].
397 @var{exprs} is a list of expressions separated by ":", one for each
398 separate channel. The output channel layout depends on the number of
399 provided expressions, up to 8 channels are supported.
401 @var{options} is an optional sequence of @var{key}=@var{value} pairs,
404 The description of the accepted options follows.
409 Set the minimum duration of the sourced audio. See the function
410 @code{av_parse_time()} for the accepted format.
411 Note that the resulting duration may be greater than the specified
412 duration, as the generated audio is always cut at the end of a
415 If not specified, or the expressed duration is negative, the audio is
416 supposed to be generated forever.
419 Set the number of samples per channel per each output frame,
423 Specify the sample rate, default to 44100.
426 Each expression in @var{exprs} can contain the following constants:
430 number of the evaluated sample, starting from 0
433 time of the evaluated sample expressed in seconds, starting from 0
452 Generate a sin signal with frequency of 440 Hz, set sample rate to
455 aevalsrc="sin(440*2*PI*t)::s=8000"
459 Generate white noise:
461 aevalsrc="-2+random(0)"
465 Generate an amplitude modulated signal:
467 aevalsrc="sin(10*2*PI*t)*sin(880*2*PI*t)"
471 Generate 2.5 Hz binaural beats on a 360 Hz carrier:
473 aevalsrc="0.1*sin(2*PI*(360-2.5/2)*t) : 0.1*sin(2*PI*(360+2.5/2)*t)"
480 Read an audio stream from a movie container.
482 It accepts the syntax: @var{movie_name}[:@var{options}] where
483 @var{movie_name} is the name of the resource to read (not necessarily
484 a file but also a device or a stream accessed through some protocol),
485 and @var{options} is an optional sequence of @var{key}=@var{value}
486 pairs, separated by ":".
488 The description of the accepted options follows.
493 Specify the format assumed for the movie to read, and can be either
494 the name of a container or an input device. If not specified the
495 format is guessed from @var{movie_name} or by probing.
498 Specify the seek point in seconds, the frames will be output
499 starting from this seek point, the parameter is evaluated with
500 @code{av_strtod} so the numerical value may be suffixed by an IS
501 postfix. Default value is "0".
503 @item stream_index, si
504 Specify the index of the audio stream to read. If the value is -1,
505 the best suited audio stream will be automatically selected. Default
512 Null audio source, return unprocessed audio frames. It is mainly useful
513 as a template and to be employed in analysis / debugging tools, or as
514 the source for filters which ignore the input data (for example the sox
517 It accepts an optional sequence of @var{key}=@var{value} pairs,
520 The description of the accepted options follows.
525 Specify the sample rate, and defaults to 44100.
527 @item channel_layout, cl
529 Specify the channel layout, and can be either an integer or a string
530 representing a channel layout. The default value of @var{channel_layout}
533 Check the channel_layout_map definition in
534 @file{libavcodec/audioconvert.c} for the mapping between strings and
535 channel layout values.
538 Set the number of samples per requested frames.
542 Follow some examples:
544 # set the sample rate to 48000 Hz and the channel layout to AV_CH_LAYOUT_MONO.
545 anullsrc=r=48000:cl=4
548 anullsrc=r=48000:cl=mono
551 @c man end AUDIO SOURCES
554 @c man begin AUDIO SINKS
556 Below is a description of the currently available audio sinks.
560 Buffer audio frames, and make them available to the end of filter chain.
562 This sink is mainly intended for programmatic use, in particular
563 through the interface defined in @file{libavfilter/buffersink.h}.
565 It requires a pointer to an AVABufferSinkContext structure, which
566 defines the incoming buffers' formats, to be passed as the opaque
567 parameter to @code{avfilter_init_filter} for initialization.
571 Null audio sink, do absolutely nothing with the input audio. It is
572 mainly useful as a template and to be employed in analysis / debugging
575 @c man end AUDIO SINKS
577 @chapter Video Filters
578 @c man begin VIDEO FILTERS
580 When you configure your FFmpeg build, you can disable any of the
581 existing filters using --disable-filters.
582 The configure output will show the video filters included in your
585 Below is a description of the currently available video filters.
589 Draw ASS (Advanced Substation Alpha) subtitles on top of input video
590 using the libass library.
592 To enable compilation of this filter you need to configure FFmpeg with
593 @code{--enable-libass}.
595 This filter accepts in input the name of the ass file to render.
597 For example, to render the file @file{sub.ass} on top of the input
598 video, use the command:
605 Detect frames that are (almost) completely black. Can be useful to
606 detect chapter transitions or commercials. Output lines consist of
607 the frame number of the detected frame, the percentage of blackness,
608 the position in the file if known or -1 and the timestamp in seconds.
610 In order to display the output lines, you need to set the loglevel at
611 least to the AV_LOG_INFO value.
613 The filter accepts the syntax:
615 blackframe[=@var{amount}:[@var{threshold}]]
618 @var{amount} is the percentage of the pixels that have to be below the
619 threshold, and defaults to 98.
621 @var{threshold} is the threshold below which a pixel value is
622 considered black, and defaults to 32.
626 Apply boxblur algorithm to the input video.
628 This filter accepts the parameters:
629 @var{luma_radius}:@var{luma_power}:@var{chroma_radius}:@var{chroma_power}:@var{alpha_radius}:@var{alpha_power}
631 Chroma and alpha parameters are optional, if not specified they default
632 to the corresponding values set for @var{luma_radius} and
635 @var{luma_radius}, @var{chroma_radius}, and @var{alpha_radius} represent
636 the radius in pixels of the box used for blurring the corresponding
637 input plane. They are expressions, and can contain the following
641 the input width and height in pixels
644 the input chroma image width and height in pixels
647 horizontal and vertical chroma subsample values. For example for the
648 pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
651 The radius must be a non-negative number, and must not be greater than
652 the value of the expression @code{min(w,h)/2} for the luma and alpha planes,
653 and of @code{min(cw,ch)/2} for the chroma planes.
655 @var{luma_power}, @var{chroma_power}, and @var{alpha_power} represent
656 how many times the boxblur filter is applied to the corresponding
659 Some examples follow:
664 Apply a boxblur filter with luma, chroma, and alpha radius
671 Set luma radius to 2, alpha and chroma radius to 0
677 Set luma and chroma radius to a fraction of the video dimension
679 boxblur=min(h\,w)/10:1:min(cw\,ch)/10:1
686 Copy the input source unchanged to the output. Mainly useful for
691 Crop the input video to @var{out_w}:@var{out_h}:@var{x}:@var{y}.
693 The parameters are expressions containing the following constants:
697 the computed values for @var{x} and @var{y}. They are evaluated for
701 the input width and height
704 same as @var{in_w} and @var{in_h}
707 the output (cropped) width and height
710 same as @var{out_w} and @var{out_h}
713 same as @var{iw} / @var{ih}
716 input sample aspect ratio
719 input display aspect ratio, it is the same as (@var{iw} / @var{ih}) * @var{sar}
722 horizontal and vertical chroma subsample values. For example for the
723 pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
726 the number of input frame, starting from 0
729 the position in the file of the input frame, NAN if unknown
732 timestamp expressed in seconds, NAN if the input timestamp is unknown
736 The @var{out_w} and @var{out_h} parameters specify the expressions for
737 the width and height of the output (cropped) video. They are
738 evaluated just at the configuration of the filter.
740 The default value of @var{out_w} is "in_w", and the default value of
741 @var{out_h} is "in_h".
743 The expression for @var{out_w} may depend on the value of @var{out_h},
744 and the expression for @var{out_h} may depend on @var{out_w}, but they
745 cannot depend on @var{x} and @var{y}, as @var{x} and @var{y} are
746 evaluated after @var{out_w} and @var{out_h}.
748 The @var{x} and @var{y} parameters specify the expressions for the
749 position of the top-left corner of the output (non-cropped) area. They
750 are evaluated for each frame. If the evaluated value is not valid, it
751 is approximated to the nearest valid value.
753 The default value of @var{x} is "(in_w-out_w)/2", and the default
754 value for @var{y} is "(in_h-out_h)/2", which set the cropped area at
755 the center of the input image.
757 The expression for @var{x} may depend on @var{y}, and the expression
758 for @var{y} may depend on @var{x}.
760 Follow some examples:
762 # crop the central input area with size 100x100
765 # crop the central input area with size 2/3 of the input video
766 "crop=2/3*in_w:2/3*in_h"
768 # crop the input video central square
771 # delimit the rectangle with the top-left corner placed at position
772 # 100:100 and the right-bottom corner corresponding to the right-bottom
773 # corner of the input image.
774 crop=in_w-100:in_h-100:100:100
776 # crop 10 pixels from the left and right borders, and 20 pixels from
777 # the top and bottom borders
778 "crop=in_w-2*10:in_h-2*20"
780 # keep only the bottom right quarter of the input image
781 "crop=in_w/2:in_h/2:in_w/2:in_h/2"
783 # crop height for getting Greek harmony
784 "crop=in_w:1/PHI*in_w"
787 "crop=in_w/2:in_h/2:(in_w-out_w)/2+((in_w-out_w)/2)*sin(n/10):(in_h-out_h)/2 +((in_h-out_h)/2)*sin(n/7)"
789 # erratic camera effect depending on timestamp
790 "crop=in_w/2:in_h/2:(in_w-out_w)/2+((in_w-out_w)/2)*sin(t*10):(in_h-out_h)/2 +((in_h-out_h)/2)*sin(t*13)"
792 # set x depending on the value of y
793 "crop=in_w/2:in_h/2:y:10+10*sin(n/10)"
798 Auto-detect crop size.
800 Calculate necessary cropping parameters and prints the recommended
801 parameters through the logging system. The detected dimensions
802 correspond to the non-black area of the input video.
804 It accepts the syntax:
806 cropdetect[=@var{limit}[:@var{round}[:@var{reset}]]]
812 Threshold, which can be optionally specified from nothing (0) to
813 everything (255), defaults to 24.
816 Value which the width/height should be divisible by, defaults to
817 16. The offset is automatically adjusted to center the video. Use 2 to
818 get only even dimensions (needed for 4:2:2 video). 16 is best when
819 encoding to most video codecs.
822 Counter that determines after how many frames cropdetect will reset
823 the previously detected largest video area and start over to detect
824 the current optimal crop area. Defaults to 0.
826 This can be useful when channel logos distort the video area. 0
827 indicates never reset and return the largest area encountered during
833 Suppress a TV station logo by a simple interpolation of the surrounding
834 pixels. Just set a rectangle covering the logo and watch it disappear
835 (and sometimes something even uglier appear - your mileage may vary).
837 The filter accepts parameters as a string of the form
838 "@var{x}:@var{y}:@var{w}:@var{h}:@var{band}", or as a list of
839 @var{key}=@var{value} pairs, separated by ":".
841 The description of the accepted parameters follows.
846 Specify the top left corner coordinates of the logo. They must be
850 Specify the width and height of the logo to clear. They must be
854 Specify the thickness of the fuzzy edge of the rectangle (added to
855 @var{w} and @var{h}). The default value is 4.
858 When set to 1, a green rectangle is drawn on the screen to simplify
859 finding the right @var{x}, @var{y}, @var{w}, @var{h} parameters, and
860 @var{band} is set to 4. The default value is 0.
864 Some examples follow.
869 Set a rectangle covering the area with top left corner coordinates 0,0
870 and size 100x77, setting a band of size 10:
876 As the previous example, but use named options:
878 delogo=x=0:y=0:w=100:h=77:band=10
885 Attempt to fix small changes in horizontal and/or vertical shift. This
886 filter helps remove camera shake from hand-holding a camera, bumping a
887 tripod, moving on a vehicle, etc.
889 The filter accepts parameters as a string of the form
890 "@var{x}:@var{y}:@var{w}:@var{h}:@var{rx}:@var{ry}:@var{edge}:@var{blocksize}:@var{contrast}:@var{search}:@var{filename}"
892 A description of the accepted parameters follows.
897 Specify a rectangular area where to limit the search for motion
899 If desired the search for motion vectors can be limited to a
900 rectangular area of the frame defined by its top left corner, width
901 and height. These parameters have the same meaning as the drawbox
902 filter which can be used to visualise the position of the bounding
905 This is useful when simultaneous movement of subjects within the frame
906 might be confused for camera motion by the motion vector search.
908 If any or all of @var{x}, @var{y}, @var{w} and @var{h} are set to -1
909 then the full frame is used. This allows later options to be set
910 without specifying the bounding box for the motion vector search.
912 Default - search the whole frame.
915 Specify the maximum extent of movement in x and y directions in the
916 range 0-64 pixels. Default 16.
919 Specify how to generate pixels to fill blanks at the edge of the
920 frame. An integer from 0 to 3 as follows:
923 Fill zeroes at blank locations
925 Original image at blank locations
927 Extruded edge value at blank locations
929 Mirrored edge at blank locations
932 The default setting is mirror edge at blank locations.
935 Specify the blocksize to use for motion search. Range 4-128 pixels,
939 Specify the contrast threshold for blocks. Only blocks with more than
940 the specified contrast (difference between darkest and lightest
941 pixels) will be considered. Range 1-255, default 125.
944 Specify the search strategy 0 = exhaustive search, 1 = less exhaustive
945 search. Default - exhaustive search.
948 If set then a detailed log of the motion search is written to the
955 Draw a colored box on the input image.
957 It accepts the syntax:
959 drawbox=@var{x}:@var{y}:@var{width}:@var{height}:@var{color}
965 Specify the top left corner coordinates of the box. Default to 0.
968 Specify the width and height of the box, if 0 they are interpreted as
969 the input width and height. Default to 0.
972 Specify the color of the box to write, it can be the name of a color
973 (case insensitive match) or a 0xRRGGBB[AA] sequence.
976 Follow some examples:
978 # draw a black box around the edge of the input image
981 # draw a box with color red and an opacity of 50%
982 drawbox=10:20:200:60:red@@0.5"
987 Draw text string or text from specified file on top of video using the
990 To enable compilation of this filter you need to configure FFmpeg with
991 @code{--enable-libfreetype}.
993 The filter also recognizes strftime() sequences in the provided text
994 and expands them accordingly. Check the documentation of strftime().
996 The filter accepts parameters as a list of @var{key}=@var{value} pairs,
999 The description of the accepted parameters follows.
1004 The font file to be used for drawing text. Path must be included.
1005 This parameter is mandatory.
1008 The text string to be drawn. The text must be a sequence of UTF-8
1010 This parameter is mandatory if no file is specified with the parameter
1014 A text file containing text to be drawn. The text must be a sequence
1015 of UTF-8 encoded characters.
1017 This parameter is mandatory if no text string is specified with the
1018 parameter @var{text}.
1020 If both text and textfile are specified, an error is thrown.
1023 The expressions which specify the offsets where text will be drawn
1024 within the video frame. They are relative to the top/left border of the
1027 The default value of @var{x} and @var{y} is "0".
1029 See below for the list of accepted constants.
1032 The font size to be used for drawing text.
1033 The default value of @var{fontsize} is 16.
1036 The color to be used for drawing fonts.
1037 Either a string (e.g. "red") or in 0xRRGGBB[AA] format
1038 (e.g. "0xff000033"), possibly followed by an alpha specifier.
1039 The default value of @var{fontcolor} is "black".
1042 The color to be used for drawing box around text.
1043 Either a string (e.g. "yellow") or in 0xRRGGBB[AA] format
1044 (e.g. "0xff00ff"), possibly followed by an alpha specifier.
1045 The default value of @var{boxcolor} is "white".
1048 Used to draw a box around text using background color.
1049 Value should be either 1 (enable) or 0 (disable).
1050 The default value of @var{box} is 0.
1052 @item shadowx, shadowy
1053 The x and y offsets for the text shadow position with respect to the
1054 position of the text. They can be either positive or negative
1055 values. Default value for both is "0".
1058 The color to be used for drawing a shadow behind the drawn text. It
1059 can be a color name (e.g. "yellow") or a string in the 0xRRGGBB[AA]
1060 form (e.g. "0xff00ff"), possibly followed by an alpha specifier.
1061 The default value of @var{shadowcolor} is "black".
1064 Flags to be used for loading the fonts.
1066 The flags map the corresponding flags supported by libfreetype, and are
1067 a combination of the following values:
1074 @item vertical_layout
1075 @item force_autohint
1078 @item ignore_global_advance_width
1080 @item ignore_transform
1087 Default value is "render".
1089 For more information consult the documentation for the FT_LOAD_*
1093 The size in number of spaces to use for rendering the tab.
1097 The parameters for @var{x} and @var{y} are expressions containing the
1098 following constants:
1102 the input width and height
1105 the width of the rendered text
1108 the height of the rendered text
1111 the height of each text line
1114 input sample aspect ratio
1117 input display aspect ratio, it is the same as (@var{w} / @var{h}) * @var{sar}
1120 horizontal and vertical chroma subsample values. For example for the
1121 pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
1124 maximum glyph width, that is the maximum width for all the glyphs
1125 contained in the rendered text
1128 maximum glyph height, that is the maximum height for all the glyphs
1129 contained in the rendered text, it is equivalent to @var{ascent} -
1132 @item max_glyph_a, ascent
1134 the maximum distance from the baseline to the highest/upper grid
1135 coordinate used to place a glyph outline point, for all the rendered
1137 It is a positive value, due to the grid's orientation with the Y axis
1140 @item max_glyph_d, descent
1141 the maximum distance from the baseline to the lowest grid coordinate
1142 used to place a glyph outline point, for all the rendered glyphs.
1143 This is a negative value, due to the grid's orientation, with the Y axis
1147 the number of input frame, starting from 0
1150 timestamp expressed in seconds, NAN if the input timestamp is unknown
1153 Some examples follow.
1158 Draw "Test Text" with font FreeSerif, using the default values for the
1159 optional parameters.
1162 drawtext="fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf: text='Test Text'"
1166 Draw 'Test Text' with font FreeSerif of size 24 at position x=100
1167 and y=50 (counting from the top-left corner of the screen), text is
1168 yellow with a red box around it. Both the text and the box have an
1172 drawtext="fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf: text='Test Text':\
1173 x=100: y=50: fontsize=24: fontcolor=yellow@@0.2: box=1: boxcolor=red@@0.2"
1176 Note that the double quotes are not necessary if spaces are not used
1177 within the parameter list.
1180 Show the text at the center of the video frame:
1182 drawtext=fontsize=30:fontfile=FreeSerif.ttf:text='hello world':x=(w-text_w)/2:y=(h-text_h-line_h)/2"
1186 Show a text line sliding from right to left in the last row of the video
1187 frame. The file @file{LONG_LINE} is assumed to contain a single line
1190 drawtext=fontsize=15:fontfile=FreeSerif.ttf:text=LONG_LINE:y=h-line_h:x=-50*t
1194 Show the content of file @file{CREDITS} off the bottom of the frame and scroll up.
1196 drawtext=fontsize=20:fontfile=FreeSerif.ttf:textfile=CREDITS:y=h-20*t"
1200 Draw a single green letter "g", at the center of the input video.
1201 The glyph baseline is placed at half screen height.
1203 drawtext=fontsize=60:fontfile=FreeSerif.ttf:fontcolor=green:text=g:x=(w-max_glyph_w)/2:y=h/2-ascent
1208 For more information about libfreetype, check:
1209 @url{http://www.freetype.org/}.
1213 Apply fade-in/out effect to input video.
1215 It accepts the parameters:
1216 @var{type}:@var{start_frame}:@var{nb_frames}[:@var{options}]
1218 @var{type} specifies if the effect type, can be either "in" for
1219 fade-in, or "out" for a fade-out effect.
1221 @var{start_frame} specifies the number of the start frame for starting
1222 to apply the fade effect.
1224 @var{nb_frames} specifies the number of frames for which the fade
1225 effect has to last. At the end of the fade-in effect the output video
1226 will have the same intensity as the input video, at the end of the
1227 fade-out transition the output video will be completely black.
1229 @var{options} is an optional sequence of @var{key}=@var{value} pairs,
1230 separated by ":". The description of the accepted options follows.
1237 @item start_frame, s
1238 See @var{start_frame}.
1241 See @var{nb_frames}.
1244 If set to 1, fade only alpha channel, if one exists on the input.
1248 A few usage examples follow, usable too as test scenarios.
1250 # fade in first 30 frames of video
1253 # fade out last 45 frames of a 200-frame video
1256 # fade in first 25 frames and fade out last 25 frames of a 1000-frame video
1257 fade=in:0:25, fade=out:975:25
1259 # make first 5 frames black, then fade in from frame 5-24
1262 # fade in alpha over first 25 frames of video
1263 fade=in:0:25:alpha=1
1268 Transform the field order of the input video.
1270 It accepts one parameter which specifies the required field order that
1271 the input interlaced video will be transformed to. The parameter can
1272 assume one of the following values:
1276 output bottom field first
1278 output top field first
1281 Default value is "tff".
1283 Transformation is achieved by shifting the picture content up or down
1284 by one line, and filling the remaining line with appropriate picture content.
1285 This method is consistent with most broadcast field order converters.
1287 If the input video is not flagged as being interlaced, or it is already
1288 flagged as being of the required output field order then this filter does
1289 not alter the incoming video.
1291 This filter is very useful when converting to or from PAL DV material,
1292 which is bottom field first.
1296 ffmpeg -i in.vob -vf "fieldorder=bff" out.dv
1301 Buffer input images and send them when they are requested.
1303 This filter is mainly useful when auto-inserted by the libavfilter
1306 The filter does not take parameters.
1310 Convert the input video to one of the specified pixel formats.
1311 Libavfilter will try to pick one that is supported for the input to
1314 The filter accepts a list of pixel format names, separated by ":",
1315 for example "yuv420p:monow:rgb24".
1317 Some examples follow:
1319 # convert the input video to the format "yuv420p"
1322 # convert the input video to any of the formats in the list
1323 format=yuv420p:yuv444p:yuv410p
1329 Apply a frei0r effect to the input video.
1331 To enable compilation of this filter you need to install the frei0r
1332 header and configure FFmpeg with --enable-frei0r.
1334 The filter supports the syntax:
1336 @var{filter_name}[@{:|=@}@var{param1}:@var{param2}:...:@var{paramN}]
1339 @var{filter_name} is the name to the frei0r effect to load. If the
1340 environment variable @env{FREI0R_PATH} is defined, the frei0r effect
1341 is searched in each one of the directories specified by the colon
1342 separated list in @env{FREIOR_PATH}, otherwise in the standard frei0r
1343 paths, which are in this order: @file{HOME/.frei0r-1/lib/},
1344 @file{/usr/local/lib/frei0r-1/}, @file{/usr/lib/frei0r-1/}.
1346 @var{param1}, @var{param2}, ... , @var{paramN} specify the parameters
1347 for the frei0r effect.
1349 A frei0r effect parameter can be a boolean (whose values are specified
1350 with "y" and "n"), a double, a color (specified by the syntax
1351 @var{R}/@var{G}/@var{B}, @var{R}, @var{G}, and @var{B} being float
1352 numbers from 0.0 to 1.0) or by an @code{av_parse_color()} color
1353 description), a position (specified by the syntax @var{X}/@var{Y},
1354 @var{X} and @var{Y} being float numbers) and a string.
1356 The number and kind of parameters depend on the loaded effect. If an
1357 effect parameter is not specified the default value is set.
1359 Some examples follow:
1361 # apply the distort0r effect, set the first two double parameters
1362 frei0r=distort0r:0.5:0.01
1364 # apply the colordistance effect, takes a color as first parameter
1365 frei0r=colordistance:0.2/0.3/0.4
1366 frei0r=colordistance:violet
1367 frei0r=colordistance:0x112233
1369 # apply the perspective effect, specify the top left and top right
1371 frei0r=perspective:0.2/0.2:0.8/0.2
1374 For more information see:
1375 @url{http://piksel.org/frei0r}
1379 Fix the banding artifacts that are sometimes introduced into nearly flat
1380 regions by truncation to 8bit color depth.
1381 Interpolate the gradients that should go where the bands are, and
1384 This filter is designed for playback only. Do not use it prior to
1385 lossy compression, because compression tends to lose the dither and
1386 bring back the bands.
1388 The filter takes two optional parameters, separated by ':':
1389 @var{strength}:@var{radius}
1391 @var{strength} is the maximum amount by which the filter will change
1392 any one pixel. Also the threshold for detecting nearly flat
1393 regions. Acceptable values range from .51 to 255, default value is
1394 1.2, out-of-range values will be clipped to the valid range.
1396 @var{radius} is the neighborhood to fit the gradient to. A larger
1397 radius makes for smoother gradients, but also prevents the filter from
1398 modifying the pixels near detailed regions. Acceptable values are
1399 8-32, default value is 16, out-of-range values will be clipped to the
1403 # default parameters
1412 Flip the input video horizontally.
1414 For example to horizontally flip the input video with @command{ffmpeg}:
1416 ffmpeg -i in.avi -vf "hflip" out.avi
1421 High precision/quality 3d denoise filter. This filter aims to reduce
1422 image noise producing smooth images and making still images really
1423 still. It should enhance compressibility.
1425 It accepts the following optional parameters:
1426 @var{luma_spatial}:@var{chroma_spatial}:@var{luma_tmp}:@var{chroma_tmp}
1430 a non-negative float number which specifies spatial luma strength,
1433 @item chroma_spatial
1434 a non-negative float number which specifies spatial chroma strength,
1435 defaults to 3.0*@var{luma_spatial}/4.0
1438 a float number which specifies luma temporal strength, defaults to
1439 6.0*@var{luma_spatial}/4.0
1442 a float number which specifies chroma temporal strength, defaults to
1443 @var{luma_tmp}*@var{chroma_spatial}/@var{luma_spatial}
1446 @section lut, lutrgb, lutyuv
1448 Compute a look-up table for binding each pixel component input value
1449 to an output value, and apply it to input video.
1451 @var{lutyuv} applies a lookup table to a YUV input video, @var{lutrgb}
1452 to an RGB input video.
1454 These filters accept in input a ":"-separated list of options, which
1455 specify the expressions used for computing the lookup table for the
1456 corresponding pixel component values.
1458 The @var{lut} filter requires either YUV or RGB pixel formats in
1459 input, and accepts the options:
1462 first pixel component
1464 second pixel component
1466 third pixel component
1468 fourth pixel component, corresponds to the alpha component
1471 The exact component associated to each option depends on the format in
1474 The @var{lutrgb} filter requires RGB pixel formats in input, and
1475 accepts the options:
1487 The @var{lutyuv} filter requires YUV pixel formats in input, and
1488 accepts the options:
1491 Y/luminance component
1500 The expressions can contain the following constants and functions:
1504 the input width and height
1507 input value for the pixel component
1510 the input value clipped in the @var{minval}-@var{maxval} range
1513 maximum value for the pixel component
1516 minimum value for the pixel component
1519 the negated value for the pixel component value clipped in the
1520 @var{minval}-@var{maxval} range , it corresponds to the expression
1521 "maxval-clipval+minval"
1524 the computed value in @var{val} clipped in the
1525 @var{minval}-@var{maxval} range
1527 @item gammaval(gamma)
1528 the computed gamma correction value of the pixel component value
1529 clipped in the @var{minval}-@var{maxval} range, corresponds to the
1531 "pow((clipval-minval)/(maxval-minval)\,@var{gamma})*(maxval-minval)+minval"
1535 All expressions default to "val".
1537 Some examples follow:
1539 # negate input video
1540 lutrgb="r=maxval+minval-val:g=maxval+minval-val:b=maxval+minval-val"
1541 lutyuv="y=maxval+minval-val:u=maxval+minval-val:v=maxval+minval-val"
1543 # the above is the same as
1544 lutrgb="r=negval:g=negval:b=negval"
1545 lutyuv="y=negval:u=negval:v=negval"
1550 # remove chroma components, turns the video into a graytone image
1551 lutyuv="u=128:v=128"
1553 # apply a luma burning effect
1556 # remove green and blue components
1559 # set a constant alpha channel value on input
1560 format=rgba,lutrgb=a="maxval-minval/2"
1562 # correct luminance gamma by a 0.5 factor
1563 lutyuv=y=gammaval(0.5)
1568 Apply an MPlayer filter to the input video.
1570 This filter provides a wrapper around most of the filters of
1573 This wrapper is considered experimental. Some of the wrapped filters
1574 may not work properly and we may drop support for them, as they will
1575 be implemented natively into FFmpeg. Thus you should avoid
1576 depending on them when writing portable scripts.
1578 The filters accepts the parameters:
1579 @var{filter_name}[:=]@var{filter_params}
1581 @var{filter_name} is the name of a supported MPlayer filter,
1582 @var{filter_params} is a string containing the parameters accepted by
1585 The list of the currently supported filters follows:
1639 The parameter syntax and behavior for the listed filters are the same
1640 of the corresponding MPlayer filters. For detailed instructions check
1641 the "VIDEO FILTERS" section in the MPlayer manual.
1643 Some examples follow:
1645 # remove a logo by interpolating the surrounding pixels
1646 mp=delogo=200:200:80:20:1
1648 # adjust gamma, brightness, contrast
1651 # tweak hue and saturation
1655 See also mplayer(1), @url{http://www.mplayerhq.hu/}.
1661 This filter accepts an integer in input, if non-zero it negates the
1662 alpha component (if available). The default value in input is 0.
1666 Force libavfilter not to use any of the specified pixel formats for the
1667 input to the next filter.
1669 The filter accepts a list of pixel format names, separated by ":",
1670 for example "yuv420p:monow:rgb24".
1672 Some examples follow:
1674 # force libavfilter to use a format different from "yuv420p" for the
1675 # input to the vflip filter
1676 noformat=yuv420p,vflip
1678 # convert the input video to any of the formats not contained in the list
1679 noformat=yuv420p:yuv444p:yuv410p
1684 Pass the video source unchanged to the output.
1688 Apply video transform using libopencv.
1690 To enable this filter install libopencv library and headers and
1691 configure FFmpeg with --enable-libopencv.
1693 The filter takes the parameters: @var{filter_name}@{:=@}@var{filter_params}.
1695 @var{filter_name} is the name of the libopencv filter to apply.
1697 @var{filter_params} specifies the parameters to pass to the libopencv
1698 filter. If not specified the default values are assumed.
1700 Refer to the official libopencv documentation for more precise
1702 @url{http://opencv.willowgarage.com/documentation/c/image_filtering.html}
1704 Follows the list of supported libopencv filters.
1709 Dilate an image by using a specific structuring element.
1710 This filter corresponds to the libopencv function @code{cvDilate}.
1712 It accepts the parameters: @var{struct_el}:@var{nb_iterations}.
1714 @var{struct_el} represents a structuring element, and has the syntax:
1715 @var{cols}x@var{rows}+@var{anchor_x}x@var{anchor_y}/@var{shape}
1717 @var{cols} and @var{rows} represent the number of columns and rows of
1718 the structuring element, @var{anchor_x} and @var{anchor_y} the anchor
1719 point, and @var{shape} the shape for the structuring element, and
1720 can be one of the values "rect", "cross", "ellipse", "custom".
1722 If the value for @var{shape} is "custom", it must be followed by a
1723 string of the form "=@var{filename}". The file with name
1724 @var{filename} is assumed to represent a binary image, with each
1725 printable character corresponding to a bright pixel. When a custom
1726 @var{shape} is used, @var{cols} and @var{rows} are ignored, the number
1727 or columns and rows of the read file are assumed instead.
1729 The default value for @var{struct_el} is "3x3+0x0/rect".
1731 @var{nb_iterations} specifies the number of times the transform is
1732 applied to the image, and defaults to 1.
1734 Follow some example:
1736 # use the default values
1739 # dilate using a structuring element with a 5x5 cross, iterate two times
1740 ocv=dilate=5x5+2x2/cross:2
1742 # read the shape from the file diamond.shape, iterate two times
1743 # the file diamond.shape may contain a pattern of characters like this:
1749 # the specified cols and rows are ignored (but not the anchor point coordinates)
1750 ocv=0x0+2x2/custom=diamond.shape:2
1755 Erode an image by using a specific structuring element.
1756 This filter corresponds to the libopencv function @code{cvErode}.
1758 The filter accepts the parameters: @var{struct_el}:@var{nb_iterations},
1759 with the same syntax and semantics as the @ref{dilate} filter.
1763 Smooth the input video.
1765 The filter takes the following parameters:
1766 @var{type}:@var{param1}:@var{param2}:@var{param3}:@var{param4}.
1768 @var{type} is the type of smooth filter to apply, and can be one of
1769 the following values: "blur", "blur_no_scale", "median", "gaussian",
1770 "bilateral". The default value is "gaussian".
1772 @var{param1}, @var{param2}, @var{param3}, and @var{param4} are
1773 parameters whose meanings depend on smooth type. @var{param1} and
1774 @var{param2} accept integer positive values or 0, @var{param3} and
1775 @var{param4} accept float values.
1777 The default value for @var{param1} is 3, the default value for the
1778 other parameters is 0.
1780 These parameters correspond to the parameters assigned to the
1781 libopencv function @code{cvSmooth}.
1786 Overlay one video on top of another.
1788 It takes two inputs and one output, the first input is the "main"
1789 video on which the second input is overlayed.
1791 It accepts the parameters: @var{x}:@var{y}[:@var{options}].
1793 @var{x} is the x coordinate of the overlayed video on the main video,
1794 @var{y} is the y coordinate. @var{x} and @var{y} are expressions containing
1795 the following parameters:
1798 @item main_w, main_h
1799 main input width and height
1802 same as @var{main_w} and @var{main_h}
1804 @item overlay_w, overlay_h
1805 overlay input width and height
1808 same as @var{overlay_w} and @var{overlay_h}
1811 @var{options} is an optional list of @var{key}=@var{value} pairs,
1814 The description of the accepted options follows.
1818 If set to 1, force the filter to accept inputs in the RGB
1819 color space. Default value is 0.
1822 Be aware that frames are taken from each input video in timestamp
1823 order, hence, if their initial timestamps differ, it is a a good idea
1824 to pass the two inputs through a @var{setpts=PTS-STARTPTS} filter to
1825 have them begin in the same zero timestamp, as it does the example for
1826 the @var{movie} filter.
1828 Follow some examples:
1830 # draw the overlay at 10 pixels from the bottom right
1831 # corner of the main video.
1832 overlay=main_w-overlay_w-10:main_h-overlay_h-10
1834 # insert a transparent PNG logo in the bottom left corner of the input
1835 movie=logo.png [logo];
1836 [in][logo] overlay=10:main_h-overlay_h-10 [out]
1838 # insert 2 different transparent PNG logos (second logo on bottom
1840 movie=logo1.png [logo1];
1841 movie=logo2.png [logo2];
1842 [in][logo1] overlay=10:H-h-10 [in+logo1];
1843 [in+logo1][logo2] overlay=W-w-10:H-h-10 [out]
1845 # add a transparent color layer on top of the main video,
1846 # WxH specifies the size of the main input to the overlay filter
1847 color=red@.3:WxH [over]; [in][over] overlay [out]
1850 You can chain together more overlays but the efficiency of such
1851 approach is yet to be tested.
1855 Add paddings to the input image, and places the original input at the
1856 given coordinates @var{x}, @var{y}.
1858 It accepts the following parameters:
1859 @var{width}:@var{height}:@var{x}:@var{y}:@var{color}.
1861 The parameters @var{width}, @var{height}, @var{x}, and @var{y} are
1862 expressions containing the following constants:
1866 the input video width and height
1869 same as @var{in_w} and @var{in_h}
1872 the output width and height, that is the size of the padded area as
1873 specified by the @var{width} and @var{height} expressions
1876 same as @var{out_w} and @var{out_h}
1879 x and y offsets as specified by the @var{x} and @var{y}
1880 expressions, or NAN if not yet specified
1883 same as @var{iw} / @var{ih}
1886 input sample aspect ratio
1889 input display aspect ratio, it is the same as (@var{iw} / @var{ih}) * @var{sar}
1892 horizontal and vertical chroma subsample values. For example for the
1893 pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
1896 Follows the description of the accepted parameters.
1901 Specify the size of the output image with the paddings added. If the
1902 value for @var{width} or @var{height} is 0, the corresponding input size
1903 is used for the output.
1905 The @var{width} expression can reference the value set by the
1906 @var{height} expression, and vice versa.
1908 The default value of @var{width} and @var{height} is 0.
1912 Specify the offsets where to place the input image in the padded area
1913 with respect to the top/left border of the output image.
1915 The @var{x} expression can reference the value set by the @var{y}
1916 expression, and vice versa.
1918 The default value of @var{x} and @var{y} is 0.
1922 Specify the color of the padded area, it can be the name of a color
1923 (case insensitive match) or a 0xRRGGBB[AA] sequence.
1925 The default value of @var{color} is "black".
1929 Some examples follow:
1932 # Add paddings with color "violet" to the input video. Output video
1933 # size is 640x480, the top-left corner of the input video is placed at
1935 pad=640:480:0:40:violet
1937 # pad the input to get an output with dimensions increased bt 3/2,
1938 # and put the input video at the center of the padded area
1939 pad="3/2*iw:3/2*ih:(ow-iw)/2:(oh-ih)/2"
1941 # pad the input to get a squared output with size equal to the maximum
1942 # value between the input width and height, and put the input video at
1943 # the center of the padded area
1944 pad="max(iw\,ih):ow:(ow-iw)/2:(oh-ih)/2"
1946 # pad the input to get a final w/h ratio of 16:9
1947 pad="ih*16/9:ih:(ow-iw)/2:(oh-ih)/2"
1949 # for anamorphic video, in order to set the output display aspect ratio,
1950 # it is necessary to use sar in the expression, according to the relation:
1951 # (ih * X / ih) * sar = output_dar
1952 # X = output_dar / sar
1953 pad="ih*16/9/sar:ih:(ow-iw)/2:(oh-ih)/2"
1955 # double output size and put the input video in the bottom-right
1956 # corner of the output padded area
1957 pad="2*iw:2*ih:ow-iw:oh-ih"
1960 @section pixdesctest
1962 Pixel format descriptor test filter, mainly useful for internal
1963 testing. The output video should be equal to the input video.
1967 format=monow, pixdesctest
1970 can be used to test the monowhite pixel format descriptor definition.
1974 Scale the input video to @var{width}:@var{height}[:@var{interl}=@{1|-1@}] and/or convert the image format.
1976 The parameters @var{width} and @var{height} are expressions containing
1977 the following constants:
1981 the input width and height
1984 same as @var{in_w} and @var{in_h}
1987 the output (cropped) width and height
1990 same as @var{out_w} and @var{out_h}
1993 same as @var{iw} / @var{ih}
1996 input sample aspect ratio
1999 input display aspect ratio, it is the same as (@var{iw} / @var{ih}) * @var{sar}
2002 input sample aspect ratio
2005 horizontal and vertical chroma subsample values. For example for the
2006 pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
2009 If the input image format is different from the format requested by
2010 the next filter, the scale filter will convert the input to the
2013 If the value for @var{width} or @var{height} is 0, the respective input
2014 size is used for the output.
2016 If the value for @var{width} or @var{height} is -1, the scale filter will
2017 use, for the respective output size, a value that maintains the aspect
2018 ratio of the input image.
2020 The default value of @var{width} and @var{height} is 0.
2022 Valid values for the optional parameter @var{interl} are:
2026 force interlaced aware scaling
2029 select interlaced aware scaling depending on whether the source frames
2030 are flagged as interlaced or not
2033 Some examples follow:
2035 # scale the input video to a size of 200x100.
2038 # scale the input to 2x
2040 # the above is the same as
2043 # scale the input to half size
2046 # increase the width, and set the height to the same size
2049 # seek for Greek harmony
2053 # increase the height, and set the width to 3/2 of the height
2056 # increase the size, but make the size a multiple of the chroma
2057 scale="trunc(3/2*iw/hsub)*hsub:trunc(3/2*ih/vsub)*vsub"
2059 # increase the width to a maximum of 500 pixels, keep the same input aspect ratio
2060 scale='min(500\, iw*3/2):-1'
2064 Select frames to pass in output.
2066 It accepts in input an expression, which is evaluated for each input
2067 frame. If the expression is evaluated to a non-zero value, the frame
2068 is selected and passed to the output, otherwise it is discarded.
2070 The expression can contain the following constants:
2074 the sequential number of the filtered frame, starting from 0
2077 the sequential number of the selected frame, starting from 0
2079 @item prev_selected_n
2080 the sequential number of the last selected frame, NAN if undefined
2083 timebase of the input timestamps
2086 the PTS (Presentation TimeStamp) of the filtered video frame,
2087 expressed in @var{TB} units, NAN if undefined
2090 the PTS (Presentation TimeStamp) of the filtered video frame,
2091 expressed in seconds, NAN if undefined
2094 the PTS of the previously filtered video frame, NAN if undefined
2096 @item prev_selected_pts
2097 the PTS of the last previously filtered video frame, NAN if undefined
2099 @item prev_selected_t
2100 the PTS of the last previously selected video frame, NAN if undefined
2103 the PTS of the first video frame in the video, NAN if undefined
2106 the time of the first video frame in the video, NAN if undefined
2109 the type of the filtered frame, can assume one of the following
2121 @item interlace_type
2122 the frame interlace type, can assume one of the following values:
2125 the frame is progressive (not interlaced)
2127 the frame is top-field-first
2129 the frame is bottom-field-first
2133 1 if the filtered frame is a key-frame, 0 otherwise
2136 the position in the file of the filtered frame, -1 if the information
2137 is not available (e.g. for synthetic video)
2140 The default value of the select expression is "1".
2142 Some examples follow:
2145 # select all frames in input
2148 # the above is the same as:
2154 # select only I-frames
2155 select='eq(pict_type\,I)'
2157 # select one frame every 100
2158 select='not(mod(n\,100))'
2160 # select only frames contained in the 10-20 time interval
2161 select='gte(t\,10)*lte(t\,20)'
2163 # select only I frames contained in the 10-20 time interval
2164 select='gte(t\,10)*lte(t\,20)*eq(pict_type\,I)'
2166 # select frames with a minimum distance of 10 seconds
2167 select='isnan(prev_selected_t)+gte(t-prev_selected_t\,10)'
2173 Set the Display Aspect Ratio for the filter output video.
2175 This is done by changing the specified Sample (aka Pixel) Aspect
2176 Ratio, according to the following equation:
2177 @math{DAR = HORIZONTAL_RESOLUTION / VERTICAL_RESOLUTION * SAR}
2179 Keep in mind that this filter does not modify the pixel dimensions of
2180 the video frame. Also the display aspect ratio set by this filter may
2181 be changed by later filters in the filterchain, e.g. in case of
2182 scaling or if another "setdar" or a "setsar" filter is applied.
2184 The filter accepts a parameter string which represents the wanted
2185 display aspect ratio.
2186 The parameter can be a floating point number string, or an expression
2187 of the form @var{num}:@var{den}, where @var{num} and @var{den} are the
2188 numerator and denominator of the aspect ratio.
2189 If the parameter is not specified, it is assumed the value "0:1".
2191 For example to change the display aspect ratio to 16:9, specify:
2194 # the above is equivalent to
2198 See also the @ref{setsar} filter documentation.
2202 Change the PTS (presentation timestamp) of the input video frames.
2204 Accept in input an expression evaluated through the eval API, which
2205 can contain the following constants:
2209 the presentation timestamp in input
2212 the count of the input frame, starting from 0.
2215 the PTS of the first video frame
2218 tell if the current frame is interlaced
2221 original position in the file of the frame, or undefined if undefined
2222 for the current frame
2232 Some examples follow:
2235 # start counting PTS from zero
2247 # fixed rate 25 fps with some jitter
2248 setpts='1/(25*TB) * (N + 0.05 * sin(N*2*PI/25))'
2254 Set the Sample (aka Pixel) Aspect Ratio for the filter output video.
2256 Note that as a consequence of the application of this filter, the
2257 output display aspect ratio will change according to the following
2259 @math{DAR = HORIZONTAL_RESOLUTION / VERTICAL_RESOLUTION * SAR}
2261 Keep in mind that the sample aspect ratio set by this filter may be
2262 changed by later filters in the filterchain, e.g. if another "setsar"
2263 or a "setdar" filter is applied.
2265 The filter accepts a parameter string which represents the wanted
2266 sample aspect ratio.
2267 The parameter can be a floating point number string, or an expression
2268 of the form @var{num}:@var{den}, where @var{num} and @var{den} are the
2269 numerator and denominator of the aspect ratio.
2270 If the parameter is not specified, it is assumed the value "0:1".
2272 For example to change the sample aspect ratio to 10:11, specify:
2279 Set the timebase to use for the output frames timestamps.
2280 It is mainly useful for testing timebase configuration.
2282 It accepts in input an arithmetic expression representing a rational.
2283 The expression can contain the constants "AVTB" (the
2284 default timebase), and "intb" (the input timebase).
2286 The default value for the input is "intb".
2288 Follow some examples.
2291 # set the timebase to 1/25
2294 # set the timebase to 1/10
2297 #set the timebase to 1001/1000
2300 #set the timebase to 2*intb
2303 #set the default timebase value
2309 Show a line containing various information for each input video frame.
2310 The input video is not modified.
2312 The shown line contains a sequence of key/value pairs of the form
2313 @var{key}:@var{value}.
2315 A description of each shown parameter follows:
2319 sequential number of the input frame, starting from 0
2322 Presentation TimeStamp of the input frame, expressed as a number of
2323 time base units. The time base unit depends on the filter input pad.
2326 Presentation TimeStamp of the input frame, expressed as a number of
2330 position of the frame in the input stream, -1 if this information in
2331 unavailable and/or meaningless (for example in case of synthetic video)
2337 sample aspect ratio of the input frame, expressed in the form
2341 size of the input frame, expressed in the form
2342 @var{width}x@var{height}
2345 interlaced mode ("P" for "progressive", "T" for top field first, "B"
2346 for bottom field first)
2349 1 if the frame is a key frame, 0 otherwise
2352 picture type of the input frame ("I" for an I-frame, "P" for a
2353 P-frame, "B" for a B-frame, "?" for unknown type).
2354 Check also the documentation of the @code{AVPictureType} enum and of
2355 the @code{av_get_picture_type_char} function defined in
2356 @file{libavutil/avutil.h}.
2359 Adler-32 checksum (printed in hexadecimal) of all the planes of the input frame
2361 @item plane_checksum
2362 Adler-32 checksum (printed in hexadecimal) of each plane of the input frame,
2363 expressed in the form "[@var{c0} @var{c1} @var{c2} @var{c3}]"
2368 Pass the images of input video on to next video filter as multiple
2372 ffmpeg -i in.avi -vf "slicify=32" out.avi
2375 The filter accepts the slice height as parameter. If the parameter is
2376 not specified it will use the default value of 16.
2378 Adding this in the beginning of filter chains should make filtering
2379 faster due to better use of the memory cache.
2383 Pass on the input video to two outputs. Both outputs are identical to
2388 [in] split [splitout1][splitout2];
2389 [splitout1] crop=100:100:0:0 [cropout];
2390 [splitout2] pad=200:200:100:100 [padout];
2393 will create two separate outputs from the same input, one cropped and
2398 Transpose rows with columns in the input video and optionally flip it.
2400 It accepts a parameter representing an integer, which can assume the
2405 Rotate by 90 degrees counterclockwise and vertically flip (default), that is:
2413 Rotate by 90 degrees clockwise, that is:
2421 Rotate by 90 degrees counterclockwise, that is:
2429 Rotate by 90 degrees clockwise and vertically flip, that is:
2439 Sharpen or blur the input video.
2441 It accepts the following parameters:
2442 @var{luma_msize_x}:@var{luma_msize_y}:@var{luma_amount}:@var{chroma_msize_x}:@var{chroma_msize_y}:@var{chroma_amount}
2444 Negative values for the amount will blur the input video, while positive
2445 values will sharpen. All parameters are optional and default to the
2446 equivalent of the string '5:5:1.0:5:5:0.0'.
2451 Set the luma matrix horizontal size. It can be an integer between 3
2452 and 13, default value is 5.
2455 Set the luma matrix vertical size. It can be an integer between 3
2456 and 13, default value is 5.
2459 Set the luma effect strength. It can be a float number between -2.0
2460 and 5.0, default value is 1.0.
2462 @item chroma_msize_x
2463 Set the chroma matrix horizontal size. It can be an integer between 3
2464 and 13, default value is 5.
2466 @item chroma_msize_y
2467 Set the chroma matrix vertical size. It can be an integer between 3
2468 and 13, default value is 5.
2471 Set the chroma effect strength. It can be a float number between -2.0
2472 and 5.0, default value is 0.0.
2477 # Strong luma sharpen effect parameters
2480 # Strong blur of both luma and chroma parameters
2481 unsharp=7:7:-2:7:7:-2
2483 # Use the default values with @command{ffmpeg}
2484 ffmpeg -i in.avi -vf "unsharp" out.mp4
2489 Flip the input video vertically.
2492 ffmpeg -i in.avi -vf "vflip" out.avi
2497 Deinterlace the input video ("yadif" means "yet another deinterlacing
2500 It accepts the optional parameters: @var{mode}:@var{parity}:@var{auto}.
2502 @var{mode} specifies the interlacing mode to adopt, accepts one of the
2507 output 1 frame for each frame
2509 output 1 frame for each field
2511 like 0 but skips spatial interlacing check
2513 like 1 but skips spatial interlacing check
2518 @var{parity} specifies the picture field parity assumed for the input
2519 interlaced video, accepts one of the following values:
2523 assume top field first
2525 assume bottom field first
2527 enable automatic detection
2530 Default value is -1.
2531 If interlacing is unknown or decoder does not export this information,
2532 top field first will be assumed.
2534 @var{auto} specifies if deinterlacer should trust the interlaced flag
2535 and only deinterlace frames marked as interlaced
2539 deinterlace all frames
2541 only deinterlace frames marked as interlaced
2546 @c man end VIDEO FILTERS
2548 @chapter Video Sources
2549 @c man begin VIDEO SOURCES
2551 Below is a description of the currently available video sources.
2555 Buffer video frames, and make them available to the filter chain.
2557 This source is mainly intended for a programmatic use, in particular
2558 through the interface defined in @file{libavfilter/vsrc_buffer.h}.
2560 It accepts the following parameters:
2561 @var{width}:@var{height}:@var{pix_fmt_string}:@var{timebase_num}:@var{timebase_den}:@var{sample_aspect_ratio_num}:@var{sample_aspect_ratio.den}:@var{scale_params}
2563 All the parameters but @var{scale_params} need to be explicitly
2566 Follows the list of the accepted parameters.
2571 Specify the width and height of the buffered video frames.
2573 @item pix_fmt_string
2574 A string representing the pixel format of the buffered video frames.
2575 It may be a number corresponding to a pixel format, or a pixel format
2578 @item timebase_num, timebase_den
2579 Specify numerator and denomitor of the timebase assumed by the
2580 timestamps of the buffered frames.
2582 @item sample_aspect_ratio.num, sample_aspect_ratio.den
2583 Specify numerator and denominator of the sample aspect ratio assumed
2584 by the video frames.
2587 Specify the optional parameters to be used for the scale filter which
2588 is automatically inserted when an input change is detected in the
2589 input size or format.
2594 buffer=320:240:yuv410p:1:24:1:1
2597 will instruct the source to accept video frames with size 320x240 and
2598 with format "yuv410p", assuming 1/24 as the timestamps timebase and
2599 square pixels (1:1 sample aspect ratio).
2600 Since the pixel format with name "yuv410p" corresponds to the number 6
2601 (check the enum PixelFormat definition in @file{libavutil/pixfmt.h}),
2602 this example corresponds to:
2604 buffer=320:240:6:1:24:1:1
2609 Provide an uniformly colored input.
2611 It accepts the following parameters:
2612 @var{color}:@var{frame_size}:@var{frame_rate}
2614 Follows the description of the accepted parameters.
2619 Specify the color of the source. It can be the name of a color (case
2620 insensitive match) or a 0xRRGGBB[AA] sequence, possibly followed by an
2621 alpha specifier. The default value is "black".
2624 Specify the size of the sourced video, it may be a string of the form
2625 @var{width}x@var{height}, or the name of a size abbreviation. The
2626 default value is "320x240".
2629 Specify the frame rate of the sourced video, as the number of frames
2630 generated per second. It has to be a string in the format
2631 @var{frame_rate_num}/@var{frame_rate_den}, an integer number, a float
2632 number or a valid video frame rate abbreviation. The default value is
2637 For example the following graph description will generate a red source
2638 with an opacity of 0.2, with size "qcif" and a frame rate of 10
2639 frames per second, which will be overlayed over the source connected
2640 to the pad with identifier "in".
2643 "color=red@@0.2:qcif:10 [color]; [in][color] overlay [out]"
2648 Read a video stream from a movie container.
2650 It accepts the syntax: @var{movie_name}[:@var{options}] where
2651 @var{movie_name} is the name of the resource to read (not necessarily
2652 a file but also a device or a stream accessed through some protocol),
2653 and @var{options} is an optional sequence of @var{key}=@var{value}
2654 pairs, separated by ":".
2656 The description of the accepted options follows.
2660 @item format_name, f
2661 Specifies the format assumed for the movie to read, and can be either
2662 the name of a container or an input device. If not specified the
2663 format is guessed from @var{movie_name} or by probing.
2665 @item seek_point, sp
2666 Specifies the seek point in seconds, the frames will be output
2667 starting from this seek point, the parameter is evaluated with
2668 @code{av_strtod} so the numerical value may be suffixed by an IS
2669 postfix. Default value is "0".
2671 @item stream_index, si
2672 Specifies the index of the video stream to read. If the value is -1,
2673 the best suited video stream will be automatically selected. Default
2678 This filter allows to overlay a second video on top of main input of
2679 a filtergraph as shown in this graph:
2681 input -----------> deltapts0 --> overlay --> output
2684 movie --> scale--> deltapts1 -------+
2687 Some examples follow:
2689 # skip 3.2 seconds from the start of the avi file in.avi, and overlay it
2690 # on top of the input labelled as "in".
2691 movie=in.avi:seek_point=3.2, scale=180:-1, setpts=PTS-STARTPTS [movie];
2692 [in] setpts=PTS-STARTPTS, [movie] overlay=16:16 [out]
2694 # read from a video4linux2 device, and overlay it on top of the input
2696 movie=/dev/video0:f=video4linux2, scale=180:-1, setpts=PTS-STARTPTS [movie];
2697 [in] setpts=PTS-STARTPTS, [movie] overlay=16:16 [out]
2703 Generate various test patterns, as generated by the MPlayer test filter.
2705 The size of the generated video is fixed, and is 256x256.
2706 This source is useful in particular for testing encoding features.
2708 This source accepts an optional sequence of @var{key}=@var{value} pairs,
2709 separated by ":". The description of the accepted options follows.
2714 Specify the frame rate of the sourced video, as the number of frames
2715 generated per second. It has to be a string in the format
2716 @var{frame_rate_num}/@var{frame_rate_den}, an integer number, a float
2717 number or a valid video frame rate abbreviation. The default value is
2721 Set the video duration of the sourced video. The accepted syntax is:
2723 [-]HH[:MM[:SS[.m...]]]
2726 See also the function @code{av_parse_time()}.
2728 If not specified, or the expressed duration is negative, the video is
2729 supposed to be generated forever.
2733 Set the number or the name of the test to perform. Supported tests are:
2748 Default value is "all", which will cycle through the list of all tests.
2751 For example the following:
2756 will generate a "dc_luma" test pattern.
2760 Provide a frei0r source.
2762 To enable compilation of this filter you need to install the frei0r
2763 header and configure FFmpeg with --enable-frei0r.
2765 The source supports the syntax:
2767 @var{size}:@var{rate}:@var{src_name}[@{=|:@}@var{param1}:@var{param2}:...:@var{paramN}]
2770 @var{size} is the size of the video to generate, may be a string of the
2771 form @var{width}x@var{height} or a frame size abbreviation.
2772 @var{rate} is the rate of the video to generate, may be a string of
2773 the form @var{num}/@var{den} or a frame rate abbreviation.
2774 @var{src_name} is the name to the frei0r source to load. For more
2775 information regarding frei0r and how to set the parameters read the
2776 section @ref{frei0r} in the description of the video filters.
2778 Some examples follow:
2780 # generate a frei0r partik0l source with size 200x200 and frame rate 10
2781 # which is overlayed on the overlay filter main input
2782 frei0r_src=200x200:10:partik0l=1234 [overlay]; [in][overlay] overlay
2787 Generate a life pattern.
2789 This source is based on a generalization of John Conway's life game.
2791 The sourced input represents a life grid, each pixel represents a cell
2792 which can be in one of two possible states, alive or dead. Every cell
2793 interacts with its eight neighbours, which are the cells that are
2794 horizontally, vertically, or diagonally adjacent.
2796 At each interaction the grid evolves according to the adopted rule,
2797 which specifies the number of neighbor alive cells which will make a
2798 cell stay alive or born. The @option{rule} option allows to specify
2801 This source accepts a list of options in the form of
2802 @var{key}=@var{value} pairs separated by ":". A description of the
2803 accepted options follows.
2807 Set the file from which to read the initial grid state. In the file,
2808 each non-whitespace character is considered an alive cell, and newline
2809 is used to delimit the end of each row.
2811 If this option is not specified, the initial grid is generated
2815 Set the video rate, that is the number of frames generated per second.
2818 @item random_fill_ratio, ratio
2819 Set the random fill ratio for the initial random grid. It is a
2820 floating point number value ranging from 0 to 1, defaults to 1/PHI.
2821 It is ignored when a file is specified.
2823 @item random_seed, seed
2824 Set the seed for filling the initial random grid, must be an integer
2825 included between 0 and UINT32_MAX. If not specified, or if explicitly
2826 set to -1, the filter will try to use a good random seed on a best
2832 A rule can be specified with a code of the kind "S@var{NS}/B@var{NB}",
2833 where @var{NS} and @var{NB} are sequences of numbers in the range 0-8,
2834 @var{NS} specifies the number of alive neighbor cells which make a
2835 live cell stay alive, and @var{NB} the number of alive neighbor cells
2836 which make a dead cell to become alive (i.e. to "born").
2837 "s" and "b" can be used in place of "S" and "B", respectively.
2839 Alternatively a rule can be specified by an 18-bits integer. The 9
2840 high order bits are used to encode the next cell state if it is alive
2841 for each number of neighbor alive cells, the low order bits specify
2842 the rule for "borning" new cells. Higher order bits encode for an
2843 higher number of neighbor cells.
2844 For example the number 6153 = @code{(12<<9)+9} specifies a stay alive
2845 rule of 12 and a born rule of 9, which corresponds to "S23/B03".
2847 Default value is "S23/B3", which is the original Conway's game of life
2848 rule, and will keep a cell alive if it has 2 or 3 neighbor alive
2849 cells, and will born a new cell if there are three alive cells around
2853 Set the size of the output video.
2855 If @option{filename} is specified, the size is set by default to the
2856 same size of the input file. If @option{size} is set, it must contain
2857 the size specified in the input file, and the initial grid defined in
2858 that file is centered in the larger resulting area.
2860 If a filename is not specified, the size value defaults to "320x240"
2861 (used for a randomly generated initial grid).
2864 If set to 1, stitch the left and right grid edges together, and the
2865 top and bottom edges also. Defaults to 1.
2868 @subsection Examples
2872 Read a grid from @file{pattern}, and center it on a grid of size
2875 life=f=pattern:s=300x300
2879 Generate a random grid of size 200x200, with a fill ratio of 2/3:
2881 life=ratio=2/3:s=200x200
2885 Specify a custom rule for evolving a randomly generated grid:
2892 @section nullsrc, rgbtestsrc, testsrc
2894 The @code{nullsrc} source returns unprocessed video frames. It is
2895 mainly useful to be employed in analysis / debugging tools, or as the
2896 source for filters which ignore the input data.
2898 The @code{rgbtestsrc} source generates an RGB test pattern useful for
2899 detecting RGB vs BGR issues. You should see a red, green and blue
2900 stripe from top to bottom.
2902 The @code{testsrc} source generates a test video pattern, showing a
2903 color pattern, a scrolling gradient and a timestamp. This is mainly
2904 intended for testing purposes.
2906 These sources accept an optional sequence of @var{key}=@var{value} pairs,
2907 separated by ":". The description of the accepted options follows.
2912 Specify the size of the sourced video, it may be a string of the form
2913 @var{width}x@var{height}, or the name of a size abbreviation. The
2914 default value is "320x240".
2917 Specify the frame rate of the sourced video, as the number of frames
2918 generated per second. It has to be a string in the format
2919 @var{frame_rate_num}/@var{frame_rate_den}, an integer number, a float
2920 number or a valid video frame rate abbreviation. The default value is
2924 Set the sample aspect ratio of the sourced video.
2927 Set the video duration of the sourced video. The accepted syntax is:
2929 [-]HH[:MM[:SS[.m...]]]
2932 See also the function @code{av_parse_time()}.
2934 If not specified, or the expressed duration is negative, the video is
2935 supposed to be generated forever.
2938 For example the following:
2940 testsrc=duration=5.3:size=qcif:rate=10
2943 will generate a video with a duration of 5.3 seconds, with size
2944 176x144 and a frame rate of 10 frames per second.
2946 If the input content is to be ignored, @code{nullsrc} can be used. The
2947 following command generates noise in the luminance plane by employing
2948 the @code{mp=geq} filter:
2950 nullsrc=s=256x256, mp=geq=random(1)*255:128:128
2953 @c man end VIDEO SOURCES
2955 @chapter Video Sinks
2956 @c man begin VIDEO SINKS
2958 Below is a description of the currently available video sinks.
2962 Buffer video frames, and make them available to the end of the filter
2965 This sink is mainly intended for a programmatic use, in particular
2966 through the interface defined in @file{libavfilter/buffersink.h}.
2968 It does not require a string parameter in input, but you need to
2969 specify a pointer to a list of supported pixel formats terminated by
2970 -1 in the opaque parameter provided to @code{avfilter_init_filter}
2971 when initializing this sink.
2975 Null video sink, do absolutely nothing with the input video. It is
2976 mainly useful as a template and to be employed in analysis / debugging
2979 @c man end VIDEO SINKS