1 @chapter Filtergraph description
2 @c man begin FILTERGRAPH DESCRIPTION
4 A filtergraph is a directed graph of connected filters. It can contain
5 cycles, and there can be multiple links between a pair of
6 filters. Each link has one input pad on one side connecting it to one
7 filter from which it takes its input, and one output pad on the other
8 side connecting it to the one filter accepting its output.
10 Each filter in a filtergraph is an instance of a filter class
11 registered in the application, which defines the features and the
12 number of input and output pads of the filter.
14 A filter with no input pads is called a "source", a filter with no
15 output pads is called a "sink".
17 @section Filtergraph syntax
19 A filtergraph can be represented using a textual representation, which
20 is recognized by the @code{-vf} and @code{-af} options of the ff*
21 tools, and by the @code{avfilter_graph_parse()} function defined in
22 @file{libavfilter/avfiltergraph.h}.
24 A filterchain consists of a sequence of connected filters, each one
25 connected to the previous one in the sequence. A filterchain is
26 represented by a list of ","-separated filter descriptions.
28 A filtergraph consists of a sequence of filterchains. A sequence of
29 filterchains is represented by a list of ";"-separated filterchain
32 A filter is represented by a string of the form:
33 [@var{in_link_1}]...[@var{in_link_N}]@var{filter_name}=@var{arguments}[@var{out_link_1}]...[@var{out_link_M}]
35 @var{filter_name} is the name of the filter class of which the
36 described filter is an instance of, and has to be the name of one of
37 the filter classes registered in the program.
38 The name of the filter class is optionally followed by a string
41 @var{arguments} is a string which contains the parameters used to
42 initialize the filter instance, and are described in the filter
45 The list of arguments can be quoted using the character "'" as initial
46 and ending mark, and the character '\' for escaping the characters
47 within the quoted text; otherwise the argument string is considered
48 terminated when the next special character (belonging to the set
49 "[]=;,") is encountered.
51 The name and arguments of the filter are optionally preceded and
52 followed by a list of link labels.
53 A link label allows to name a link and associate it to a filter output
54 or input pad. The preceding labels @var{in_link_1}
55 ... @var{in_link_N}, are associated to the filter input pads,
56 the following labels @var{out_link_1} ... @var{out_link_M}, are
57 associated to the output pads.
59 When two link labels with the same name are found in the
60 filtergraph, a link between the corresponding input and output pad is
63 If an output pad is not labelled, it is linked by default to the first
64 unlabelled input pad of the next filter in the filterchain.
65 For example in the filterchain:
67 nullsrc, split[L1], [L2]overlay, nullsink
69 the split filter instance has two output pads, and the overlay filter
70 instance two input pads. The first output pad of split is labelled
71 "L1", the first input pad of overlay is labelled "L2", and the second
72 output pad of split is linked to the second input pad of overlay,
73 which are both unlabelled.
75 In a complete filterchain all the unlabelled filter input and output
76 pads must be connected. A filtergraph is considered valid if all the
77 filter input and output pads of all the filterchains are connected.
79 Follows a BNF description for the filtergraph syntax:
81 @var{NAME} ::= sequence of alphanumeric characters and '_'
82 @var{LINKLABEL} ::= "[" @var{NAME} "]"
83 @var{LINKLABELS} ::= @var{LINKLABEL} [@var{LINKLABELS}]
84 @var{FILTER_ARGUMENTS} ::= sequence of chars (eventually quoted)
85 @var{FILTER} ::= [@var{LINKNAMES}] @var{NAME} ["=" @var{ARGUMENTS}] [@var{LINKNAMES}]
86 @var{FILTERCHAIN} ::= @var{FILTER} [,@var{FILTERCHAIN}]
87 @var{FILTERGRAPH} ::= @var{FILTERCHAIN} [;@var{FILTERGRAPH}]
90 @c man end FILTERGRAPH DESCRIPTION
92 @chapter Audio Filters
93 @c man begin AUDIO FILTERS
95 When you configure your FFmpeg build, you can disable any of the
96 existing filters using --disable-filters.
97 The configure output will show the audio filters included in your
100 Below is a description of the currently available audio filters.
104 Convert the input audio format to the specified formats.
106 The filter accepts a string of the form:
107 "@var{sample_format}:@var{channel_layout}:@var{packing_format}".
109 @var{sample_format} specifies the sample format, and can be a string or
110 the corresponding numeric value defined in @file{libavutil/samplefmt.h}.
112 @var{channel_layout} specifies the channel layout, and can be a string
113 or the corresponding number value defined in @file{libavutil/chlayout.h}.
115 @var{packing_format} specifies the type of packing in output, can be one
116 of "planar" or "packed", or the corresponding numeric values "0" or "1".
118 The special parameter "auto", signifies that the filter will
119 automatically select the output format depending on the output filter.
121 Some examples follow.
125 Convert input to unsigned 8-bit, stereo, packed:
127 aconvert=u8:stereo:packed
131 Convert input to unsigned 8-bit, automatically select out channel layout
134 aconvert=u8:auto:auto
140 Convert the input audio to one of the specified formats. The framework will
141 negotiate the most appropriate format to minimize conversions.
143 The filter accepts three lists of formats, separated by ":", in the form:
144 "@var{sample_formats}:@var{channel_layouts}:@var{packing_formats}".
146 Elements in each list are separated by "," which has to be escaped in the
147 filtergraph specification.
149 The special parameter "all", in place of a list of elements, signifies all
152 Some examples follow:
154 aformat=u8\\,s16:mono:packed
156 aformat=s16:mono\\,stereo:all
161 Pass the audio source unchanged to the output.
165 Resample the input audio to the specified sample rate.
167 The filter accepts exactly one parameter, the output sample rate. If not
168 specified then the filter will automatically convert between its input
169 and output sample rates.
171 For example, to resample the input audio to 44100Hz:
178 Show a line containing various information for each input audio frame.
179 The input audio is not modified.
181 The shown line contains a sequence of key/value pairs of the form
182 @var{key}:@var{value}.
184 A description of each shown parameter follows:
188 sequential number of the input frame, starting from 0
191 presentation TimeStamp of the input frame, expressed as a number of
192 time base units. The time base unit depends on the filter input pad, and
193 is usually 1/@var{sample_rate}.
196 presentation TimeStamp of the input frame, expressed as a number of
200 position of the frame in the input stream, -1 if this information in
201 unavailable and/or meanigless (for example in case of synthetic audio)
207 channel layout description
210 number of samples (per each channel) contained in the filtered frame
213 sample rate for the audio frame
216 if the packing format is planar, 0 if packed
219 Adler-32 checksum of all the planes of the input frame
222 Adler-32 checksum for each input frame plane, expressed in the form
223 "[@var{c0} @var{c1} @var{c2} @var{c3} @var{c4} @var{c5} @var{c6} @var{c7}]"
226 @c man end AUDIO FILTERS
228 @chapter Audio Sources
229 @c man begin AUDIO SOURCES
231 Below is a description of the currently available audio sources.
235 Buffer audio frames, and make them available to the filter chain.
237 This source is mainly intended for a programmatic use, in particular
238 through the interface defined in @file{libavfilter/asrc_abuffer.h}.
240 It accepts the following mandatory parameters:
241 @var{sample_rate}:@var{sample_fmt}:@var{channel_layout}:@var{packing}
246 The sample rate of the incoming audio buffers.
249 The sample format of the incoming audio buffers.
250 Either a sample format name or its corresponging integer representation from
251 the enum AVSampleFormat in @file{libavutil/samplefmt.h}
254 The channel layout of the incoming audio buffers.
255 Either a channel layout name from channel_layout_map in
256 @file{libavutil/audioconvert.c} or its corresponding integer representation
257 from the AV_CH_LAYOUT_* macros in @file{libavutil/audioconvert.h}
260 Either "packed" or "planar", or their integer representation: 0 or 1
267 abuffer=44100:s16:stereo:planar
270 will instruct the source to accept planar 16bit signed stereo at 44100Hz.
271 Since the sample format with name "s16" corresponds to the number
272 1 and the "stereo" channel layout corresponds to the value 3, this is
280 Read an audio stream from a movie container.
282 It accepts the syntax: @var{movie_name}[:@var{options}] where
283 @var{movie_name} is the name of the resource to read (not necessarily
284 a file but also a device or a stream accessed through some protocol),
285 and @var{options} is an optional sequence of @var{key}=@var{value}
286 pairs, separated by ":".
288 The description of the accepted options follows.
293 Specify the format assumed for the movie to read, and can be either
294 the name of a container or an input device. If not specified the
295 format is guessed from @var{movie_name} or by probing.
298 Specify the seek point in seconds, the frames will be output
299 starting from this seek point, the parameter is evaluated with
300 @code{av_strtod} so the numerical value may be suffixed by an IS
301 postfix. Default value is "0".
303 @item stream_index, si
304 Specify the index of the audio stream to read. If the value is -1,
305 the best suited audio stream will be automatically selected. Default
312 Null audio source, return unprocessed audio frames. It is mainly useful
313 as a template and to be employed in analysis / debugging tools, or as
314 the source for filters which ignore the input data (for example the sox
317 It accepts an optional sequence of @var{key}=@var{value} pairs,
320 The description of the accepted options follows.
325 Specify the sample rate, and defaults to 44100.
327 @item channel_layout, cl
329 Specify the channel layout, and can be either an integer or a string
330 representing a channel layout. The default value of @var{channel_layout}
333 Check the channel_layout_map definition in
334 @file{libavcodec/audioconvert.c} for the mapping between strings and
335 channel layout values.
338 Set the number of samples per requested frames.
342 Follow some examples:
344 # set the sample rate to 48000 Hz and the channel layout to AV_CH_LAYOUT_MONO.
345 anullsrc=r=48000:cl=4
348 anullsrc=r=48000:cl=mono
351 @c man end AUDIO SOURCES
354 @c man begin AUDIO SINKS
356 Below is a description of the currently available audio sinks.
360 Buffer audio frames, and make them available to the end of filter chain.
362 This sink is mainly intended for programmatic use, in particular
363 through the interface defined in @file{libavfilter/buffersink.h}.
365 It requires a pointer to an AVABufferSinkContext structure, which
366 defines the incoming buffers' formats, to be passed as the opaque
367 parameter to @code{avfilter_init_filter} for initialization.
371 Null audio sink, do absolutely nothing with the input audio. It is
372 mainly useful as a template and to be employed in analysis / debugging
375 @c man end AUDIO SINKS
377 @chapter Video Filters
378 @c man begin VIDEO FILTERS
380 When you configure your FFmpeg build, you can disable any of the
381 existing filters using --disable-filters.
382 The configure output will show the video filters included in your
385 Below is a description of the currently available video filters.
389 Detect frames that are (almost) completely black. Can be useful to
390 detect chapter transitions or commercials. Output lines consist of
391 the frame number of the detected frame, the percentage of blackness,
392 the position in the file if known or -1 and the timestamp in seconds.
394 In order to display the output lines, you need to set the loglevel at
395 least to the AV_LOG_INFO value.
397 The filter accepts the syntax:
399 blackframe[=@var{amount}:[@var{threshold}]]
402 @var{amount} is the percentage of the pixels that have to be below the
403 threshold, and defaults to 98.
405 @var{threshold} is the threshold below which a pixel value is
406 considered black, and defaults to 32.
410 Apply boxblur algorithm to the input video.
412 This filter accepts the parameters:
413 @var{luma_radius}:@var{luma_power}:@var{chroma_radius}:@var{chroma_power}:@var{alpha_radius}:@var{alpha_power}
415 Chroma and alpha parameters are optional, if not specified they default
416 to the corresponding values set for @var{luma_radius} and
419 @var{luma_radius}, @var{chroma_radius}, and @var{alpha_radius} represent
420 the radius in pixels of the box used for blurring the corresponding
421 input plane. They are expressions, and can contain the following
425 the input width and heigth in pixels
428 the input chroma image width and height in pixels
431 horizontal and vertical chroma subsample values. For example for the
432 pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
435 The radius must be a non-negative number, and must be not greater than
436 the value of the expression @code{min(w,h)/2} for the luma and alpha planes,
437 and of @code{min(cw,ch)/2} for the chroma planes.
439 @var{luma_power}, @var{chroma_power}, and @var{alpha_power} represent
440 how many times the boxblur filter is applied to the corresponding
443 Some examples follow:
448 Apply a boxblur filter with luma, chroma, and alpha radius
455 Set luma radius to 2, alpha and chroma radius to 0
461 Set luma and chroma radius to a fraction of the video dimension
463 boxblur=min(h\,w)/10:1:min(cw\,ch)/10:1
470 Copy the input source unchanged to the output. Mainly useful for
475 Crop the input video to @var{out_w}:@var{out_h}:@var{x}:@var{y}.
477 The parameters are expressions containing the following constants:
481 the corresponding mathematical approximated values for e
482 (euler number), pi (greek PI), PHI (golden ratio)
485 the computed values for @var{x} and @var{y}. They are evaluated for
489 the input width and height
492 same as @var{in_w} and @var{in_h}
495 the output (cropped) width and height
498 same as @var{out_w} and @var{out_h}
501 same as @var{iw} / @var{ih}
504 input sample aspect ratio
507 input display aspect ratio, it is the same as (@var{iw} / @var{ih}) * @var{sar}
510 horizontal and vertical chroma subsample values. For example for the
511 pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
514 the number of input frame, starting from 0
517 the position in the file of the input frame, NAN if unknown
520 timestamp expressed in seconds, NAN if the input timestamp is unknown
524 The @var{out_w} and @var{out_h} parameters specify the expressions for
525 the width and height of the output (cropped) video. They are
526 evaluated just at the configuration of the filter.
528 The default value of @var{out_w} is "in_w", and the default value of
529 @var{out_h} is "in_h".
531 The expression for @var{out_w} may depend on the value of @var{out_h},
532 and the expression for @var{out_h} may depend on @var{out_w}, but they
533 cannot depend on @var{x} and @var{y}, as @var{x} and @var{y} are
534 evaluated after @var{out_w} and @var{out_h}.
536 The @var{x} and @var{y} parameters specify the expressions for the
537 position of the top-left corner of the output (non-cropped) area. They
538 are evaluated for each frame. If the evaluated value is not valid, it
539 is approximated to the nearest valid value.
541 The default value of @var{x} is "(in_w-out_w)/2", and the default
542 value for @var{y} is "(in_h-out_h)/2", which set the cropped area at
543 the center of the input image.
545 The expression for @var{x} may depend on @var{y}, and the expression
546 for @var{y} may depend on @var{x}.
548 Follow some examples:
550 # crop the central input area with size 100x100
553 # crop the central input area with size 2/3 of the input video
554 "crop=2/3*in_w:2/3*in_h"
556 # crop the input video central square
559 # delimit the rectangle with the top-left corner placed at position
560 # 100:100 and the right-bottom corner corresponding to the right-bottom
561 # corner of the input image.
562 crop=in_w-100:in_h-100:100:100
564 # crop 10 pixels from the left and right borders, and 20 pixels from
565 # the top and bottom borders
566 "crop=in_w-2*10:in_h-2*20"
568 # keep only the bottom right quarter of the input image
569 "crop=in_w/2:in_h/2:in_w/2:in_h/2"
571 # crop height for getting Greek harmony
572 "crop=in_w:1/PHI*in_w"
575 "crop=in_w/2:in_h/2:(in_w-out_w)/2+((in_w-out_w)/2)*sin(n/10):(in_h-out_h)/2 +((in_h-out_h)/2)*sin(n/7)"
577 # erratic camera effect depending on timestamp
578 "crop=in_w/2:in_h/2:(in_w-out_w)/2+((in_w-out_w)/2)*sin(t*10):(in_h-out_h)/2 +((in_h-out_h)/2)*sin(t*13)"
580 # set x depending on the value of y
581 "crop=in_w/2:in_h/2:y:10+10*sin(n/10)"
586 Auto-detect crop size.
588 Calculate necessary cropping parameters and prints the recommended
589 parameters through the logging system. The detected dimensions
590 correspond to the non-black area of the input video.
592 It accepts the syntax:
594 cropdetect[=@var{limit}[:@var{round}[:@var{reset}]]]
600 Threshold, which can be optionally specified from nothing (0) to
601 everything (255), defaults to 24.
604 Value which the width/height should be divisible by, defaults to
605 16. The offset is automatically adjusted to center the video. Use 2 to
606 get only even dimensions (needed for 4:2:2 video). 16 is best when
607 encoding to most video codecs.
610 Counter that determines after how many frames cropdetect will reset
611 the previously detected largest video area and start over to detect
612 the current optimal crop area. Defaults to 0.
614 This can be useful when channel logos distort the video area. 0
615 indicates never reset and return the largest area encountered during
621 Suppress a TV station logo by a simple interpolation of the surrounding
622 pixels. Just set a rectangle covering the logo and watch it disappear
623 (and sometimes something even uglier appear - your mileage may vary).
625 The filter accepts parameters as a string of the form
626 "@var{x}:@var{y}:@var{w}:@var{h}:@var{band}", or as a list of
627 @var{key}=@var{value} pairs, separated by ":".
629 The description of the accepted parameters follows.
634 Specify the top left corner coordinates of the logo. They must be
638 Specify the width and height of the logo to clear. They must be
642 Specify the thickness of the fuzzy edge of the rectangle (added to
643 @var{w} and @var{h}). The default value is 4.
646 When set to 1, a green rectangle is drawn on the screen to simplify
647 finding the right @var{x}, @var{y}, @var{w}, @var{h} parameters, and
648 @var{band} is set to 4. The default value is 0.
652 Some examples follow.
657 Set a rectangle covering the area with top left corner coordinates 0,0
658 and size 100x77, setting a band of size 10:
664 As the previous example, but use named options:
666 delogo=x=0:y=0:w=100:h=77:band=10
673 Draw a colored box on the input image.
675 It accepts the syntax:
677 drawbox=@var{x}:@var{y}:@var{width}:@var{height}:@var{color}
683 Specify the top left corner coordinates of the box. Default to 0.
686 Specify the width and height of the box, if 0 they are interpreted as
687 the input width and height. Default to 0.
690 Specify the color of the box to write, it can be the name of a color
691 (case insensitive match) or a 0xRRGGBB[AA] sequence.
694 Follow some examples:
696 # draw a black box around the edge of the input image
699 # draw a box with color red and an opacity of 50%
700 drawbox=10:20:200:60:red@@0.5"
705 Draw text string or text from specified file on top of video using the
708 To enable compilation of this filter you need to configure FFmpeg with
709 @code{--enable-libfreetype}.
711 The filter also recognizes strftime() sequences in the provided text
712 and expands them accordingly. Check the documentation of strftime().
714 The filter accepts parameters as a list of @var{key}=@var{value} pairs,
717 The description of the accepted parameters follows.
722 The font file to be used for drawing text. Path must be included.
723 This parameter is mandatory.
726 The text string to be drawn. The text must be a sequence of UTF-8
728 This parameter is mandatory if no file is specified with the parameter
732 A text file containing text to be drawn. The text must be a sequence
733 of UTF-8 encoded characters.
735 This parameter is mandatory if no text string is specified with the
736 parameter @var{text}.
738 If both text and textfile are specified, an error is thrown.
741 The expressions which specify the offsets where text will be drawn
742 within the video frame. They are relative to the top/left border of the
745 The default value of @var{x} and @var{y} is "0".
747 See below for the list of accepted constants.
750 The font size to be used for drawing text.
751 The default value of @var{fontsize} is 16.
754 The color to be used for drawing fonts.
755 Either a string (e.g. "red") or in 0xRRGGBB[AA] format
756 (e.g. "0xff000033"), possibly followed by an alpha specifier.
757 The default value of @var{fontcolor} is "black".
760 The color to be used for drawing box around text.
761 Either a string (e.g. "yellow") or in 0xRRGGBB[AA] format
762 (e.g. "0xff00ff"), possibly followed by an alpha specifier.
763 The default value of @var{boxcolor} is "white".
766 Used to draw a box around text using background color.
767 Value should be either 1 (enable) or 0 (disable).
768 The default value of @var{box} is 0.
770 @item shadowx, shadowy
771 The x and y offsets for the text shadow position with respect to the
772 position of the text. They can be either positive or negative
773 values. Default value for both is "0".
776 The color to be used for drawing a shadow behind the drawn text. It
777 can be a color name (e.g. "yellow") or a string in the 0xRRGGBB[AA]
778 form (e.g. "0xff00ff"), possibly followed by an alpha specifier.
779 The default value of @var{shadowcolor} is "black".
782 Flags to be used for loading the fonts.
784 The flags map the corresponding flags supported by libfreetype, and are
785 a combination of the following values:
792 @item vertical_layout
796 @item ignore_global_advance_width
798 @item ignore_transform
805 Default value is "render".
807 For more information consult the documentation for the FT_LOAD_*
811 The size in number of spaces to use for rendering the tab.
815 The parameters for @var{x} and @var{y} are expressions containing the
820 the corresponding mathematical approximated values for e
821 (euler number), pi (greek PI), PHI (golden ratio)
824 the input width and heigth
827 the width of the rendered text
830 the height of the rendered text
833 the height of each text line
836 input sample aspect ratio
839 input display aspect ratio, it is the same as (@var{w} / @var{h}) * @var{sar}
842 horizontal and vertical chroma subsample values. For example for the
843 pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
846 maximum glyph width, that is the maximum width for all the glyphs
847 contained in the rendered text
850 maximum glyph height, that is the maximum height for all the glyphs
851 contained in the rendered text, it is equivalent to @var{ascent} -
854 @item max_glyph_a, ascent
856 the maximum distance from the baseline to the highest/upper grid
857 coordinate used to place a glyph outline point, for all the rendered
859 It is a positive value, due to the grid's orientation with the Y axis
862 @item max_glyph_d, descent
863 the maximum distance from the baseline to the lowest grid coordinate
864 used to place a glyph outline point, for all the rendered glyphs.
865 This is a negative value, due to the grid's orientation, with the Y axis
869 the number of input frame, starting from 0
872 timestamp expressed in seconds, NAN if the input timestamp is unknown
875 Some examples follow.
880 Draw "Test Text" with font FreeSerif, using the default values for the
884 drawtext="fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf: text='Test Text'"
888 Draw 'Test Text' with font FreeSerif of size 24 at position x=100
889 and y=50 (counting from the top-left corner of the screen), text is
890 yellow with a red box around it. Both the text and the box have an
894 drawtext="fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf: text='Test Text':\
895 x=100: y=50: fontsize=24: fontcolor=yellow@@0.2: box=1: boxcolor=red@@0.2"
898 Note that the double quotes are not necessary if spaces are not used
899 within the parameter list.
902 Show the text at the center of the video frame:
904 drawtext=fontsize=30:fontfile=FreeSerif.ttf:text='hello world':x=(w-text_w)/2:y=(h-text_h-line_h)/2"
908 Show a text line sliding from right to left in the last row of the video
909 frame. The file @file{LONG_LINE} is assumed to contain a single line
912 drawtext=fontsize=15:fontfile=FreeSerif.ttf:text=LONG_LINE:y=h-line_h:x=-50*t
916 Show the content of file @file{CREDITS} off the bottom of the frame and scroll up.
918 drawtext=fontsize=20:fontfile=FreeSerif.ttf:textfile=CREDITS:y=h-20*t"
922 Draw a single green letter "g", at the center of the input video.
923 The glyph baseline is placed at half screen height.
925 drawtext=fontsize=60:fontfile=FreeSerif.ttf:fontcolor=green:text=g:x=(w-max_glyph_w)/2:y=h/2-ascent
930 For more information about libfreetype, check:
931 @url{http://www.freetype.org/}.
935 Apply fade-in/out effect to input video.
937 It accepts the parameters:
938 @var{type}:@var{start_frame}:@var{nb_frames}
940 @var{type} specifies if the effect type, can be either "in" for
941 fade-in, or "out" for a fade-out effect.
943 @var{start_frame} specifies the number of the start frame for starting
944 to apply the fade effect.
946 @var{nb_frames} specifies the number of frames for which the fade
947 effect has to last. At the end of the fade-in effect the output video
948 will have the same intensity as the input video, at the end of the
949 fade-out transition the output video will be completely black.
951 A few usage examples follow, usable too as test scenarios.
953 # fade in first 30 frames of video
956 # fade out last 45 frames of a 200-frame video
959 # fade in first 25 frames and fade out last 25 frames of a 1000-frame video
960 fade=in:0:25, fade=out:975:25
962 # make first 5 frames black, then fade in from frame 5-24
968 Transform the field order of the input video.
970 It accepts one parameter which specifies the required field order that
971 the input interlaced video will be transformed to. The parameter can
972 assume one of the following values:
976 output bottom field first
978 output top field first
981 Default value is "tff".
983 Transformation is achieved by shifting the picture content up or down
984 by one line, and filling the remaining line with appropriate picture content.
985 This method is consistent with most broadcast field order converters.
987 If the input video is not flagged as being interlaced, or it is already
988 flagged as being of the required output field order then this filter does
989 not alter the incoming video.
991 This filter is very useful when converting to or from PAL DV material,
992 which is bottom field first.
996 ./ffmpeg -i in.vob -vf "fieldorder=bff" out.dv
1001 Buffer input images and send them when they are requested.
1003 This filter is mainly useful when auto-inserted by the libavfilter
1006 The filter does not take parameters.
1010 Convert the input video to one of the specified pixel formats.
1011 Libavfilter will try to pick one that is supported for the input to
1014 The filter accepts a list of pixel format names, separated by ":",
1015 for example "yuv420p:monow:rgb24".
1017 Some examples follow:
1019 # convert the input video to the format "yuv420p"
1022 # convert the input video to any of the formats in the list
1023 format=yuv420p:yuv444p:yuv410p
1029 Apply a frei0r effect to the input video.
1031 To enable compilation of this filter you need to install the frei0r
1032 header and configure FFmpeg with --enable-frei0r.
1034 The filter supports the syntax:
1036 @var{filter_name}[@{:|=@}@var{param1}:@var{param2}:...:@var{paramN}]
1039 @var{filter_name} is the name to the frei0r effect to load. If the
1040 environment variable @env{FREI0R_PATH} is defined, the frei0r effect
1041 is searched in each one of the directories specified by the colon
1042 separated list in @env{FREIOR_PATH}, otherwise in the standard frei0r
1043 paths, which are in this order: @file{HOME/.frei0r-1/lib/},
1044 @file{/usr/local/lib/frei0r-1/}, @file{/usr/lib/frei0r-1/}.
1046 @var{param1}, @var{param2}, ... , @var{paramN} specify the parameters
1047 for the frei0r effect.
1049 A frei0r effect parameter can be a boolean (whose values are specified
1050 with "y" and "n"), a double, a color (specified by the syntax
1051 @var{R}/@var{G}/@var{B}, @var{R}, @var{G}, and @var{B} being float
1052 numbers from 0.0 to 1.0) or by an @code{av_parse_color()} color
1053 description), a position (specified by the syntax @var{X}/@var{Y},
1054 @var{X} and @var{Y} being float numbers) and a string.
1056 The number and kind of parameters depend on the loaded effect. If an
1057 effect parameter is not specified the default value is set.
1059 Some examples follow:
1061 # apply the distort0r effect, set the first two double parameters
1062 frei0r=distort0r:0.5:0.01
1064 # apply the colordistance effect, takes a color as first parameter
1065 frei0r=colordistance:0.2/0.3/0.4
1066 frei0r=colordistance:violet
1067 frei0r=colordistance:0x112233
1069 # apply the perspective effect, specify the top left and top right
1071 frei0r=perspective:0.2/0.2:0.8/0.2
1074 For more information see:
1075 @url{http://piksel.org/frei0r}
1079 Fix the banding artifacts that are sometimes introduced into nearly flat
1080 regions by truncation to 8bit colordepth.
1081 Interpolate the gradients that should go where the bands are, and
1084 This filter is designed for playback only. Do not use it prior to
1085 lossy compression, because compression tends to lose the dither and
1086 bring back the bands.
1088 The filter takes two optional parameters, separated by ':':
1089 @var{strength}:@var{radius}
1091 @var{strength} is the maximum amount by which the filter will change
1092 any one pixel. Also the threshold for detecting nearly flat
1093 regions. Acceptable values range from .51 to 255, default value is
1094 1.2, out-of-range values will be clipped to the valid range.
1096 @var{radius} is the neighborhood to fit the gradient to. A larger
1097 radius makes for smoother gradients, but also prevents the filter from
1098 modifying the pixels near detailed regions. Acceptable values are
1099 8-32, default value is 16, out-of-range values will be clipped to the
1103 # default parameters
1112 Flip the input video horizontally.
1114 For example to horizontally flip the video in input with
1117 ffmpeg -i in.avi -vf "hflip" out.avi
1122 High precision/quality 3d denoise filter. This filter aims to reduce
1123 image noise producing smooth images and making still images really
1124 still. It should enhance compressibility.
1126 It accepts the following optional parameters:
1127 @var{luma_spatial}:@var{chroma_spatial}:@var{luma_tmp}:@var{chroma_tmp}
1131 a non-negative float number which specifies spatial luma strength,
1134 @item chroma_spatial
1135 a non-negative float number which specifies spatial chroma strength,
1136 defaults to 3.0*@var{luma_spatial}/4.0
1139 a float number which specifies luma temporal strength, defaults to
1140 6.0*@var{luma_spatial}/4.0
1143 a float number which specifies chroma temporal strength, defaults to
1144 @var{luma_tmp}*@var{chroma_spatial}/@var{luma_spatial}
1147 @section lut, lutrgb, lutyuv
1149 Compute a look-up table for binding each pixel component input value
1150 to an output value, and apply it to input video.
1152 @var{lutyuv} applies a lookup table to a YUV input video, @var{lutrgb}
1153 to an RGB input video.
1155 These filters accept in input a ":"-separated list of options, which
1156 specify the expressions used for computing the lookup table for the
1157 corresponding pixel component values.
1159 The @var{lut} filter requires either YUV or RGB pixel formats in
1160 input, and accepts the options:
1162 @var{c0} (first pixel component)
1163 @var{c1} (second pixel component)
1164 @var{c2} (third pixel component)
1165 @var{c3} (fourth pixel component, corresponds to the alpha component)
1168 The exact component associated to each option depends on the format in
1171 The @var{lutrgb} filter requires RGB pixel formats in input, and
1172 accepts the options:
1174 @var{r} (red component)
1175 @var{g} (green component)
1176 @var{b} (blue component)
1177 @var{a} (alpha component)
1180 The @var{lutyuv} filter requires YUV pixel formats in input, and
1181 accepts the options:
1183 @var{y} (Y/luminance component)
1184 @var{u} (U/Cb component)
1185 @var{v} (V/Cr component)
1186 @var{a} (alpha component)
1189 The expressions can contain the following constants and functions:
1193 the corresponding mathematical approximated values for e
1194 (euler number), pi (greek PI), PHI (golden ratio)
1197 the input width and heigth
1200 input value for the pixel component
1203 the input value clipped in the @var{minval}-@var{maxval} range
1206 maximum value for the pixel component
1209 minimum value for the pixel component
1212 the negated value for the pixel component value clipped in the
1213 @var{minval}-@var{maxval} range , it corresponds to the expression
1214 "maxval-clipval+minval"
1217 the computed value in @var{val} clipped in the
1218 @var{minval}-@var{maxval} range
1220 @item gammaval(gamma)
1221 the computed gamma correction value of the pixel component value
1222 clipped in the @var{minval}-@var{maxval} range, corresponds to the
1224 "pow((clipval-minval)/(maxval-minval)\,@var{gamma})*(maxval-minval)+minval"
1228 All expressions default to "val".
1230 Some examples follow:
1232 # negate input video
1233 lutrgb="r=maxval+minval-val:g=maxval+minval-val:b=maxval+minval-val"
1234 lutyuv="y=maxval+minval-val:u=maxval+minval-val:v=maxval+minval-val"
1236 # the above is the same as
1237 lutrgb="r=negval:g=negval:b=negval"
1238 lutyuv="y=negval:u=negval:v=negval"
1243 # remove chroma components, turns the video into a graytone image
1244 lutyuv="u=128:v=128"
1246 # apply a luma burning effect
1249 # remove green and blue components
1252 # set a constant alpha channel value on input
1253 format=rgba,lutrgb=a="maxval-minval/2"
1255 # correct luminance gamma by a 0.5 factor
1256 lutyuv=y=gammaval(0.5)
1261 Apply an MPlayer filter to the input video.
1263 This filter provides a wrapper around most of the filters of
1266 This wrapper is considered experimental. Some of the wrapped filters
1267 may not work properly and we may drop support for them, as they will
1268 be implemented natively into FFmpeg. Thus you should avoid
1269 depending on them when writing portable scripts.
1271 The filters accepts the parameters:
1272 @var{filter_name}[:=]@var{filter_params}
1274 @var{filter_name} is the name of a supported MPlayer filter,
1275 @var{filter_params} is a string containing the parameters accepted by
1278 The list of the currently supported filters follows:
1332 The parameter syntax and behavior for the listed filters are the same
1333 of the corresponding MPlayer filters. For detailed instructions check
1334 the "VIDEO FILTERS" section in the MPlayer manual.
1336 Some examples follow:
1338 # remove a logo by interpolating the surrounding pixels
1339 mp=delogo=200:200:80:20:1
1341 # adjust gamma, brightness, contrast
1344 # tweak hue and saturation
1348 See also mplayer(1), @url{http://www.mplayerhq.hu/}.
1354 This filter accepts an integer in input, if non-zero it negates the
1355 alpha component (if available). The default value in input is 0.
1359 Force libavfilter not to use any of the specified pixel formats for the
1360 input to the next filter.
1362 The filter accepts a list of pixel format names, separated by ":",
1363 for example "yuv420p:monow:rgb24".
1365 Some examples follow:
1367 # force libavfilter to use a format different from "yuv420p" for the
1368 # input to the vflip filter
1369 noformat=yuv420p,vflip
1371 # convert the input video to any of the formats not contained in the list
1372 noformat=yuv420p:yuv444p:yuv410p
1377 Pass the video source unchanged to the output.
1381 Apply video transform using libopencv.
1383 To enable this filter install libopencv library and headers and
1384 configure FFmpeg with --enable-libopencv.
1386 The filter takes the parameters: @var{filter_name}@{:=@}@var{filter_params}.
1388 @var{filter_name} is the name of the libopencv filter to apply.
1390 @var{filter_params} specifies the parameters to pass to the libopencv
1391 filter. If not specified the default values are assumed.
1393 Refer to the official libopencv documentation for more precise
1395 @url{http://opencv.willowgarage.com/documentation/c/image_filtering.html}
1397 Follows the list of supported libopencv filters.
1402 Dilate an image by using a specific structuring element.
1403 This filter corresponds to the libopencv function @code{cvDilate}.
1405 It accepts the parameters: @var{struct_el}:@var{nb_iterations}.
1407 @var{struct_el} represents a structuring element, and has the syntax:
1408 @var{cols}x@var{rows}+@var{anchor_x}x@var{anchor_y}/@var{shape}
1410 @var{cols} and @var{rows} represent the number of colums and rows of
1411 the structuring element, @var{anchor_x} and @var{anchor_y} the anchor
1412 point, and @var{shape} the shape for the structuring element, and
1413 can be one of the values "rect", "cross", "ellipse", "custom".
1415 If the value for @var{shape} is "custom", it must be followed by a
1416 string of the form "=@var{filename}". The file with name
1417 @var{filename} is assumed to represent a binary image, with each
1418 printable character corresponding to a bright pixel. When a custom
1419 @var{shape} is used, @var{cols} and @var{rows} are ignored, the number
1420 or columns and rows of the read file are assumed instead.
1422 The default value for @var{struct_el} is "3x3+0x0/rect".
1424 @var{nb_iterations} specifies the number of times the transform is
1425 applied to the image, and defaults to 1.
1427 Follow some example:
1429 # use the default values
1432 # dilate using a structuring element with a 5x5 cross, iterate two times
1433 ocv=dilate=5x5+2x2/cross:2
1435 # read the shape from the file diamond.shape, iterate two times
1436 # the file diamond.shape may contain a pattern of characters like this:
1442 # the specified cols and rows are ignored (but not the anchor point coordinates)
1443 ocv=0x0+2x2/custom=diamond.shape:2
1448 Erode an image by using a specific structuring element.
1449 This filter corresponds to the libopencv function @code{cvErode}.
1451 The filter accepts the parameters: @var{struct_el}:@var{nb_iterations},
1452 with the same syntax and semantics as the @ref{dilate} filter.
1456 Smooth the input video.
1458 The filter takes the following parameters:
1459 @var{type}:@var{param1}:@var{param2}:@var{param3}:@var{param4}.
1461 @var{type} is the type of smooth filter to apply, and can be one of
1462 the following values: "blur", "blur_no_scale", "median", "gaussian",
1463 "bilateral". The default value is "gaussian".
1465 @var{param1}, @var{param2}, @var{param3}, and @var{param4} are
1466 parameters whose meanings depend on smooth type. @var{param1} and
1467 @var{param2} accept integer positive values or 0, @var{param3} and
1468 @var{param4} accept float values.
1470 The default value for @var{param1} is 3, the default value for the
1471 other parameters is 0.
1473 These parameters correspond to the parameters assigned to the
1474 libopencv function @code{cvSmooth}.
1478 Overlay one video on top of another.
1480 It takes two inputs and one output, the first input is the "main"
1481 video on which the second input is overlayed.
1483 It accepts the parameters: @var{x}:@var{y}.
1485 @var{x} is the x coordinate of the overlayed video on the main video,
1486 @var{y} is the y coordinate. The parameters are expressions containing
1487 the following parameters:
1490 @item main_w, main_h
1491 main input width and height
1494 same as @var{main_w} and @var{main_h}
1496 @item overlay_w, overlay_h
1497 overlay input width and height
1500 same as @var{overlay_w} and @var{overlay_h}
1503 Be aware that frames are taken from each input video in timestamp
1504 order, hence, if their initial timestamps differ, it is a a good idea
1505 to pass the two inputs through a @var{setpts=PTS-STARTPTS} filter to
1506 have them begin in the same zero timestamp, as it does the example for
1507 the @var{movie} filter.
1509 Follow some examples:
1511 # draw the overlay at 10 pixels from the bottom right
1512 # corner of the main video.
1513 overlay=main_w-overlay_w-10:main_h-overlay_h-10
1515 # insert a transparent PNG logo in the bottom left corner of the input
1516 movie=logo.png [logo];
1517 [in][logo] overlay=10:main_h-overlay_h-10 [out]
1519 # insert 2 different transparent PNG logos (second logo on bottom
1521 movie=logo1.png [logo1];
1522 movie=logo2.png [logo2];
1523 [in][logo1] overlay=10:H-h-10 [in+logo1];
1524 [in+logo1][logo2] overlay=W-w-10:H-h-10 [out]
1526 # add a transparent color layer on top of the main video,
1527 # WxH specifies the size of the main input to the overlay filter
1528 color=red@.3:WxH [over]; [in][over] overlay [out]
1531 You can chain togheter more overlays but the efficiency of such
1532 approach is yet to be tested.
1536 Add paddings to the input image, and places the original input at the
1537 given coordinates @var{x}, @var{y}.
1539 It accepts the following parameters:
1540 @var{width}:@var{height}:@var{x}:@var{y}:@var{color}.
1542 The parameters @var{width}, @var{height}, @var{x}, and @var{y} are
1543 expressions containing the following constants:
1547 the corresponding mathematical approximated values for e
1548 (euler number), pi (greek PI), phi (golden ratio)
1551 the input video width and height
1554 same as @var{in_w} and @var{in_h}
1557 the output width and height, that is the size of the padded area as
1558 specified by the @var{width} and @var{height} expressions
1561 same as @var{out_w} and @var{out_h}
1564 x and y offsets as specified by the @var{x} and @var{y}
1565 expressions, or NAN if not yet specified
1568 same as @var{iw} / @var{ih}
1571 input sample aspect ratio
1574 input display aspect ratio, it is the same as (@var{iw} / @var{ih}) * @var{sar}
1577 horizontal and vertical chroma subsample values. For example for the
1578 pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
1581 Follows the description of the accepted parameters.
1586 Specify the size of the output image with the paddings added. If the
1587 value for @var{width} or @var{height} is 0, the corresponding input size
1588 is used for the output.
1590 The @var{width} expression can reference the value set by the
1591 @var{height} expression, and viceversa.
1593 The default value of @var{width} and @var{height} is 0.
1597 Specify the offsets where to place the input image in the padded area
1598 with respect to the top/left border of the output image.
1600 The @var{x} expression can reference the value set by the @var{y}
1601 expression, and viceversa.
1603 The default value of @var{x} and @var{y} is 0.
1607 Specify the color of the padded area, it can be the name of a color
1608 (case insensitive match) or a 0xRRGGBB[AA] sequence.
1610 The default value of @var{color} is "black".
1614 Some examples follow:
1617 # Add paddings with color "violet" to the input video. Output video
1618 # size is 640x480, the top-left corner of the input video is placed at
1620 pad=640:480:0:40:violet
1622 # pad the input to get an output with dimensions increased bt 3/2,
1623 # and put the input video at the center of the padded area
1624 pad="3/2*iw:3/2*ih:(ow-iw)/2:(oh-ih)/2"
1626 # pad the input to get a squared output with size equal to the maximum
1627 # value between the input width and height, and put the input video at
1628 # the center of the padded area
1629 pad="max(iw\,ih):ow:(ow-iw)/2:(oh-ih)/2"
1631 # pad the input to get a final w/h ratio of 16:9
1632 pad="ih*16/9:ih:(ow-iw)/2:(oh-ih)/2"
1634 # for anamorphic video, in order to set the output display aspect ratio,
1635 # it is necessary to use sar in the expression, according to the relation:
1636 # (ih * X / ih) * sar = output_dar
1637 # X = output_dar / sar
1638 pad="ih*16/9/sar:ih:(ow-iw)/2:(oh-ih)/2"
1640 # double output size and put the input video in the bottom-right
1641 # corner of the output padded area
1642 pad="2*iw:2*ih:ow-iw:oh-ih"
1645 @section pixdesctest
1647 Pixel format descriptor test filter, mainly useful for internal
1648 testing. The output video should be equal to the input video.
1652 format=monow, pixdesctest
1655 can be used to test the monowhite pixel format descriptor definition.
1659 Scale the input video to @var{width}:@var{height} and/or convert the image format.
1661 The parameters @var{width} and @var{height} are expressions containing
1662 the following constants:
1666 the corresponding mathematical approximated values for e
1667 (euler number), pi (greek PI), phi (golden ratio)
1670 the input width and height
1673 same as @var{in_w} and @var{in_h}
1676 the output (cropped) width and height
1679 same as @var{out_w} and @var{out_h}
1682 same as @var{iw} / @var{ih}
1685 input sample aspect ratio
1688 input display aspect ratio, it is the same as (@var{iw} / @var{ih}) * @var{sar}
1691 input sample aspect ratio
1694 horizontal and vertical chroma subsample values. For example for the
1695 pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
1698 If the input image format is different from the format requested by
1699 the next filter, the scale filter will convert the input to the
1702 If the value for @var{width} or @var{height} is 0, the respective input
1703 size is used for the output.
1705 If the value for @var{width} or @var{height} is -1, the scale filter will
1706 use, for the respective output size, a value that maintains the aspect
1707 ratio of the input image.
1709 The default value of @var{width} and @var{height} is 0.
1711 Some examples follow:
1713 # scale the input video to a size of 200x100.
1716 # scale the input to 2x
1718 # the above is the same as
1721 # scale the input to half size
1724 # increase the width, and set the height to the same size
1727 # seek for Greek harmony
1731 # increase the height, and set the width to 3/2 of the height
1734 # increase the size, but make the size a multiple of the chroma
1735 scale="trunc(3/2*iw/hsub)*hsub:trunc(3/2*ih/vsub)*vsub"
1737 # increase the width to a maximum of 500 pixels, keep the same input aspect ratio
1738 scale='min(500\, iw*3/2):-1'
1742 Select frames to pass in output.
1744 It accepts in input an expression, which is evaluated for each input
1745 frame. If the expression is evaluated to a non-zero value, the frame
1746 is selected and passed to the output, otherwise it is discarded.
1748 The expression can contain the following constants:
1761 the sequential number of the filtered frame, starting from 0
1764 the sequential number of the selected frame, starting from 0
1766 @item prev_selected_n
1767 the sequential number of the last selected frame, NAN if undefined
1770 timebase of the input timestamps
1773 the PTS (Presentation TimeStamp) of the filtered video frame,
1774 expressed in @var{TB} units, NAN if undefined
1777 the PTS (Presentation TimeStamp) of the filtered video frame,
1778 expressed in seconds, NAN if undefined
1781 the PTS of the previously filtered video frame, NAN if undefined
1783 @item prev_selected_pts
1784 the PTS of the last previously filtered video frame, NAN if undefined
1786 @item prev_selected_t
1787 the PTS of the last previously selected video frame, NAN if undefined
1790 the PTS of the first video frame in the video, NAN if undefined
1793 the time of the first video frame in the video, NAN if undefined
1796 the type of the filtered frame, can assume one of the following
1808 @item interlace_type
1809 the frame interlace type, can assume one of the following values:
1812 the frame is progressive (not interlaced)
1814 the frame is top-field-first
1816 the frame is bottom-field-first
1820 1 if the filtered frame is a key-frame, 0 otherwise
1823 the position in the file of the filtered frame, -1 if the information
1824 is not available (e.g. for synthetic video)
1827 The default value of the select expression is "1".
1829 Some examples follow:
1832 # select all frames in input
1835 # the above is the same as:
1841 # select only I-frames
1842 select='eq(pict_type\,I)'
1844 # select one frame every 100
1845 select='not(mod(n\,100))'
1847 # select only frames contained in the 10-20 time interval
1848 select='gte(t\,10)*lte(t\,20)'
1850 # select only I frames contained in the 10-20 time interval
1851 select='gte(t\,10)*lte(t\,20)*eq(pict_type\,I)'
1853 # select frames with a minimum distance of 10 seconds
1854 select='isnan(prev_selected_t)+gte(t-prev_selected_t\,10)'
1860 Set the Display Aspect Ratio for the filter output video.
1862 This is done by changing the specified Sample (aka Pixel) Aspect
1863 Ratio, according to the following equation:
1864 @math{DAR = HORIZONTAL_RESOLUTION / VERTICAL_RESOLUTION * SAR}
1866 Keep in mind that this filter does not modify the pixel dimensions of
1867 the video frame. Also the display aspect ratio set by this filter may
1868 be changed by later filters in the filterchain, e.g. in case of
1869 scaling or if another "setdar" or a "setsar" filter is applied.
1871 The filter accepts a parameter string which represents the wanted
1872 display aspect ratio.
1873 The parameter can be a floating point number string, or an expression
1874 of the form @var{num}:@var{den}, where @var{num} and @var{den} are the
1875 numerator and denominator of the aspect ratio.
1876 If the parameter is not specified, it is assumed the value "0:1".
1878 For example to change the display aspect ratio to 16:9, specify:
1881 # the above is equivalent to
1885 See also the @ref{setsar} filter documentation.
1889 Change the PTS (presentation timestamp) of the input video frames.
1891 Accept in input an expression evaluated through the eval API, which
1892 can contain the following constants:
1896 the presentation timestamp in input
1908 the count of the input frame, starting from 0.
1911 the PTS of the first video frame
1914 tell if the current frame is interlaced
1917 original position in the file of the frame, or undefined if undefined
1918 for the current frame
1928 Some examples follow:
1931 # start counting PTS from zero
1943 # fixed rate 25 fps with some jitter
1944 setpts='1/(25*TB) * (N + 0.05 * sin(N*2*PI/25))'
1950 Set the Sample (aka Pixel) Aspect Ratio for the filter output video.
1952 Note that as a consequence of the application of this filter, the
1953 output display aspect ratio will change according to the following
1955 @math{DAR = HORIZONTAL_RESOLUTION / VERTICAL_RESOLUTION * SAR}
1957 Keep in mind that the sample aspect ratio set by this filter may be
1958 changed by later filters in the filterchain, e.g. if another "setsar"
1959 or a "setdar" filter is applied.
1961 The filter accepts a parameter string which represents the wanted
1962 sample aspect ratio.
1963 The parameter can be a floating point number string, or an expression
1964 of the form @var{num}:@var{den}, where @var{num} and @var{den} are the
1965 numerator and denominator of the aspect ratio.
1966 If the parameter is not specified, it is assumed the value "0:1".
1968 For example to change the sample aspect ratio to 10:11, specify:
1975 Set the timebase to use for the output frames timestamps.
1976 It is mainly useful for testing timebase configuration.
1978 It accepts in input an arithmetic expression representing a rational.
1979 The expression can contain the constants "PI", "E", "PHI", "AVTB" (the
1980 default timebase), and "intb" (the input timebase).
1982 The default value for the input is "intb".
1984 Follow some examples.
1987 # set the timebase to 1/25
1990 # set the timebase to 1/10
1993 #set the timebase to 1001/1000
1996 #set the timebase to 2*intb
1999 #set the default timebase value
2005 Show a line containing various information for each input video frame.
2006 The input video is not modified.
2008 The shown line contains a sequence of key/value pairs of the form
2009 @var{key}:@var{value}.
2011 A description of each shown parameter follows:
2015 sequential number of the input frame, starting from 0
2018 Presentation TimeStamp of the input frame, expressed as a number of
2019 time base units. The time base unit depends on the filter input pad.
2022 Presentation TimeStamp of the input frame, expressed as a number of
2026 position of the frame in the input stream, -1 if this information in
2027 unavailable and/or meanigless (for example in case of synthetic video)
2033 sample aspect ratio of the input frame, expressed in the form
2037 size of the input frame, expressed in the form
2038 @var{width}x@var{height}
2041 interlaced mode ("P" for "progressive", "T" for top field first, "B"
2042 for bottom field first)
2045 1 if the frame is a key frame, 0 otherwise
2048 picture type of the input frame ("I" for an I-frame, "P" for a
2049 P-frame, "B" for a B-frame, "?" for unknown type).
2050 Check also the documentation of the @code{AVPictureType} enum and of
2051 the @code{av_get_picture_type_char} function defined in
2052 @file{libavutil/avutil.h}.
2055 Adler-32 checksum of all the planes of the input frame
2057 @item plane_checksum
2058 Adler-32 checksum of each plane of the input frame, expressed in the form
2059 "[@var{c0} @var{c1} @var{c2} @var{c3}]"
2064 Pass the images of input video on to next video filter as multiple
2068 ./ffmpeg -i in.avi -vf "slicify=32" out.avi
2071 The filter accepts the slice height as parameter. If the parameter is
2072 not specified it will use the default value of 16.
2074 Adding this in the beginning of filter chains should make filtering
2075 faster due to better use of the memory cache.
2079 Pass on the input video to two outputs. Both outputs are identical to
2084 [in] split [splitout1][splitout2];
2085 [splitout1] crop=100:100:0:0 [cropout];
2086 [splitout2] pad=200:200:100:100 [padout];
2089 will create two separate outputs from the same input, one cropped and
2094 Transpose rows with columns in the input video and optionally flip it.
2096 It accepts a parameter representing an integer, which can assume the
2101 Rotate by 90 degrees counterclockwise and vertically flip (default), that is:
2109 Rotate by 90 degrees clockwise, that is:
2117 Rotate by 90 degrees counterclockwise, that is:
2125 Rotate by 90 degrees clockwise and vertically flip, that is:
2135 Sharpen or blur the input video.
2137 It accepts the following parameters:
2138 @var{luma_msize_x}:@var{luma_msize_y}:@var{luma_amount}:@var{chroma_msize_x}:@var{chroma_msize_y}:@var{chroma_amount}
2140 Negative values for the amount will blur the input video, while positive
2141 values will sharpen. All parameters are optional and default to the
2142 equivalent of the string '5:5:1.0:5:5:0.0'.
2147 Set the luma matrix horizontal size. It can be an integer between 3
2148 and 13, default value is 5.
2151 Set the luma matrix vertical size. It can be an integer between 3
2152 and 13, default value is 5.
2155 Set the luma effect strength. It can be a float number between -2.0
2156 and 5.0, default value is 1.0.
2158 @item chroma_msize_x
2159 Set the chroma matrix horizontal size. It can be an integer between 3
2160 and 13, default value is 5.
2162 @item chroma_msize_y
2163 Set the chroma matrix vertical size. It can be an integer between 3
2164 and 13, default value is 5.
2167 Set the chroma effect strength. It can be a float number between -2.0
2168 and 5.0, default value is 0.0.
2173 # Strong luma sharpen effect parameters
2176 # Strong blur of both luma and chroma parameters
2177 unsharp=7:7:-2:7:7:-2
2179 # Use the default values with @command{ffmpeg}
2180 ./ffmpeg -i in.avi -vf "unsharp" out.mp4
2185 Flip the input video vertically.
2188 ./ffmpeg -i in.avi -vf "vflip" out.avi
2193 Deinterlace the input video ("yadif" means "yet another deinterlacing
2196 It accepts the optional parameters: @var{mode}:@var{parity}:@var{auto}.
2198 @var{mode} specifies the interlacing mode to adopt, accepts one of the
2203 output 1 frame for each frame
2205 output 1 frame for each field
2207 like 0 but skips spatial interlacing check
2209 like 1 but skips spatial interlacing check
2214 @var{parity} specifies the picture field parity assumed for the input
2215 interlaced video, accepts one of the following values:
2219 assume top field first
2221 assume bottom field first
2223 enable automatic detection
2226 Default value is -1.
2227 If interlacing is unknown or decoder does not export this information,
2228 top field first will be assumed.
2230 @var{auto} specifies if deinterlacer should trust the interlaced flag
2231 and only deinterlace frames marked as interlaced
2235 deinterlace all frames
2237 only deinterlace frames marked as interlaced
2242 @c man end VIDEO FILTERS
2244 @chapter Video Sources
2245 @c man begin VIDEO SOURCES
2247 Below is a description of the currently available video sources.
2251 Buffer video frames, and make them available to the filter chain.
2253 This source is mainly intended for a programmatic use, in particular
2254 through the interface defined in @file{libavfilter/vsrc_buffer.h}.
2256 It accepts the following parameters:
2257 @var{width}:@var{height}:@var{pix_fmt_string}:@var{timebase_num}:@var{timebase_den}:@var{sample_aspect_ratio_num}:@var{sample_aspect_ratio.den}:@var{scale_params}
2259 All the parameters but @var{scale_params} need to be explicitely
2262 Follows the list of the accepted parameters.
2267 Specify the width and height of the buffered video frames.
2269 @item pix_fmt_string
2270 A string representing the pixel format of the buffered video frames.
2271 It may be a number corresponding to a pixel format, or a pixel format
2274 @item timebase_num, timebase_den
2275 Specify numerator and denomitor of the timebase assumed by the
2276 timestamps of the buffered frames.
2278 @item sample_aspect_ratio.num, sample_aspect_ratio.den
2279 Specify numerator and denominator of the sample aspect ratio assumed
2280 by the video frames.
2283 Specify the optional parameters to be used for the scale filter which
2284 is automatically inserted when an input change is detected in the
2285 input size or format.
2290 buffer=320:240:yuv410p:1:24:1:1
2293 will instruct the source to accept video frames with size 320x240 and
2294 with format "yuv410p", assuming 1/24 as the timestamps timebase and
2295 square pixels (1:1 sample aspect ratio).
2296 Since the pixel format with name "yuv410p" corresponds to the number 6
2297 (check the enum PixelFormat definition in @file{libavutil/pixfmt.h}),
2298 this example corresponds to:
2300 buffer=320:240:6:1:24:1:1
2305 Provide an uniformly colored input.
2307 It accepts the following parameters:
2308 @var{color}:@var{frame_size}:@var{frame_rate}
2310 Follows the description of the accepted parameters.
2315 Specify the color of the source. It can be the name of a color (case
2316 insensitive match) or a 0xRRGGBB[AA] sequence, possibly followed by an
2317 alpha specifier. The default value is "black".
2320 Specify the size of the sourced video, it may be a string of the form
2321 @var{width}x@var{height}, or the name of a size abbreviation. The
2322 default value is "320x240".
2325 Specify the frame rate of the sourced video, as the number of frames
2326 generated per second. It has to be a string in the format
2327 @var{frame_rate_num}/@var{frame_rate_den}, an integer number, a float
2328 number or a valid video frame rate abbreviation. The default value is
2333 For example the following graph description will generate a red source
2334 with an opacity of 0.2, with size "qcif" and a frame rate of 10
2335 frames per second, which will be overlayed over the source connected
2336 to the pad with identifier "in".
2339 "color=red@@0.2:qcif:10 [color]; [in][color] overlay [out]"
2344 Read a video stream from a movie container.
2346 It accepts the syntax: @var{movie_name}[:@var{options}] where
2347 @var{movie_name} is the name of the resource to read (not necessarily
2348 a file but also a device or a stream accessed through some protocol),
2349 and @var{options} is an optional sequence of @var{key}=@var{value}
2350 pairs, separated by ":".
2352 The description of the accepted options follows.
2356 @item format_name, f
2357 Specifies the format assumed for the movie to read, and can be either
2358 the name of a container or an input device. If not specified the
2359 format is guessed from @var{movie_name} or by probing.
2361 @item seek_point, sp
2362 Specifies the seek point in seconds, the frames will be output
2363 starting from this seek point, the parameter is evaluated with
2364 @code{av_strtod} so the numerical value may be suffixed by an IS
2365 postfix. Default value is "0".
2367 @item stream_index, si
2368 Specifies the index of the video stream to read. If the value is -1,
2369 the best suited video stream will be automatically selected. Default
2374 This filter allows to overlay a second video on top of main input of
2375 a filtergraph as shown in this graph:
2377 input -----------> deltapts0 --> overlay --> output
2380 movie --> scale--> deltapts1 -------+
2383 Some examples follow:
2385 # skip 3.2 seconds from the start of the avi file in.avi, and overlay it
2386 # on top of the input labelled as "in".
2387 movie=in.avi:seek_point=3.2, scale=180:-1, setpts=PTS-STARTPTS [movie];
2388 [in] setpts=PTS-STARTPTS, [movie] overlay=16:16 [out]
2390 # read from a video4linux2 device, and overlay it on top of the input
2392 movie=/dev/video0:f=video4linux2, scale=180:-1, setpts=PTS-STARTPTS [movie];
2393 [in] setpts=PTS-STARTPTS, [movie] overlay=16:16 [out]
2399 Generate various test patterns, as generated by the MPlayer test filter.
2401 The size of the generated video is fixed, and is 256x256.
2402 This source is useful in particular for testing encoding features.
2404 This source accepts an optional sequence of @var{key}=@var{value} pairs,
2405 separated by ":". The description of the accepted options follows.
2410 Specify the frame rate of the sourced video, as the number of frames
2411 generated per second. It has to be a string in the format
2412 @var{frame_rate_num}/@var{frame_rate_den}, an integer number, a float
2413 number or a valid video frame rate abbreviation. The default value is
2417 Set the video duration of the sourced video. The accepted syntax is:
2419 [-]HH[:MM[:SS[.m...]]]
2422 See also the function @code{av_parse_time()}.
2424 If not specified, or the expressed duration is negative, the video is
2425 supposed to be generated forever.
2429 Set the number or the name of the test to perform. Supported tests are:
2444 Default value is "all", which will cycle through the list of all tests.
2447 For example the following:
2452 will generate a "dc_luma" test pattern.
2456 Null video source, never return images. It is mainly useful as a
2457 template and to be employed in analysis / debugging tools.
2459 It accepts as optional parameter a string of the form
2460 @var{width}:@var{height}:@var{timebase}.
2462 @var{width} and @var{height} specify the size of the configured
2463 source. The default values of @var{width} and @var{height} are
2464 respectively 352 and 288 (corresponding to the CIF size format).
2466 @var{timebase} specifies an arithmetic expression representing a
2467 timebase. The expression can contain the constants "PI", "E", "PHI",
2468 "AVTB" (the default timebase), and defaults to the value "AVTB".
2472 Provide a frei0r source.
2474 To enable compilation of this filter you need to install the frei0r
2475 header and configure FFmpeg with --enable-frei0r.
2477 The source supports the syntax:
2479 @var{size}:@var{rate}:@var{src_name}[@{=|:@}@var{param1}:@var{param2}:...:@var{paramN}]
2482 @var{size} is the size of the video to generate, may be a string of the
2483 form @var{width}x@var{height} or a frame size abbreviation.
2484 @var{rate} is the rate of the video to generate, may be a string of
2485 the form @var{num}/@var{den} or a frame rate abbreviation.
2486 @var{src_name} is the name to the frei0r source to load. For more
2487 information regarding frei0r and how to set the parameters read the
2488 section @ref{frei0r} in the description of the video filters.
2490 Some examples follow:
2492 # generate a frei0r partik0l source with size 200x200 and framerate 10
2493 # which is overlayed on the overlay filter main input
2494 frei0r_src=200x200:10:partik0l=1234 [overlay]; [in][overlay] overlay
2497 @section rgbtestsrc, testsrc
2499 The @code{rgbtestsrc} source generates an RGB test pattern useful for
2500 detecting RGB vs BGR issues. You should see a red, green and blue
2501 stripe from top to bottom.
2503 The @code{testsrc} source generates a test video pattern, showing a
2504 color pattern, a scrolling gradient and a timestamp. This is mainly
2505 intended for testing purposes.
2507 Both sources accept an optional sequence of @var{key}=@var{value} pairs,
2508 separated by ":". The description of the accepted options follows.
2513 Specify the size of the sourced video, it may be a string of the form
2514 @var{width}x@var{heigth}, or the name of a size abbreviation. The
2515 default value is "320x240".
2518 Specify the frame rate of the sourced video, as the number of frames
2519 generated per second. It has to be a string in the format
2520 @var{frame_rate_num}/@var{frame_rate_den}, an integer number, a float
2521 number or a valid video frame rate abbreviation. The default value is
2525 Set the sample aspect ratio of the sourced video.
2528 Set the video duration of the sourced video. The accepted syntax is:
2530 [-]HH[:MM[:SS[.m...]]]
2533 See also the function @code{av_parse_time()}.
2535 If not specified, or the expressed duration is negative, the video is
2536 supposed to be generated forever.
2539 For example the following:
2541 testsrc=duration=5.3:size=qcif:rate=10
2544 will generate a video with a duration of 5.3 seconds, with size
2545 176x144 and a framerate of 10 frames per second.
2547 @c man end VIDEO SOURCES
2549 @chapter Video Sinks
2550 @c man begin VIDEO SINKS
2552 Below is a description of the currently available video sinks.
2556 Buffer video frames, and make them available to the end of the filter
2559 This sink is mainly intended for a programmatic use, in particular
2560 through the interface defined in @file{libavfilter/buffersink.h}.
2562 It does not require a string parameter in input, but you need to
2563 specify a pointer to a list of supported pixel formats terminated by
2564 -1 in the opaque parameter provided to @code{avfilter_init_filter}
2565 when initializing this sink.
2569 Null video sink, do absolutely nothing with the input video. It is
2570 mainly useful as a template and to be employed in analysis / debugging
2573 @c man end VIDEO SINKS