1 @chapter Filtergraph description
2 @c man begin FILTERGRAPH DESCRIPTION
4 A filtergraph is a directed graph of connected filters. It can contain
5 cycles, and there can be multiple links between a pair of
6 filters. Each link has one input pad on one side connecting it to one
7 filter from which it takes its input, and one output pad on the other
8 side connecting it to one filter accepting its output.
10 Each filter in a filtergraph is an instance of a filter class
11 registered in the application, which defines the features and the
12 number of input and output pads of the filter.
14 A filter with no input pads is called a "source", and a filter with no
15 output pads is called a "sink".
17 @anchor{Filtergraph syntax}
18 @section Filtergraph syntax
20 A filtergraph has a textual representation, which is
21 recognized by the @option{-filter}/@option{-vf} and @option{-filter_complex}
22 options in @command{avconv} and @option{-vf} in @command{avplay}, and by the
23 @code{avfilter_graph_parse()}/@code{avfilter_graph_parse2()} functions defined in
24 @file{libavfilter/avfilter.h}.
26 A filterchain consists of a sequence of connected filters, each one
27 connected to the previous one in the sequence. A filterchain is
28 represented by a list of ","-separated filter descriptions.
30 A filtergraph consists of a sequence of filterchains. A sequence of
31 filterchains is represented by a list of ";"-separated filterchain
34 A filter is represented by a string of the form:
35 [@var{in_link_1}]...[@var{in_link_N}]@var{filter_name}=@var{arguments}[@var{out_link_1}]...[@var{out_link_M}]
37 @var{filter_name} is the name of the filter class of which the
38 described filter is an instance of, and has to be the name of one of
39 the filter classes registered in the program.
40 The name of the filter class is optionally followed by a string
43 @var{arguments} is a string which contains the parameters used to
44 initialize the filter instance. It may have one of two forms:
48 A ':'-separated list of @var{key=value} pairs.
51 A ':'-separated list of @var{value}. In this case, the keys are assumed to be
52 the option names in the order they are declared. E.g. the @code{fade} filter
53 declares three options in this order -- @option{type}, @option{start_frame} and
54 @option{nb_frames}. Then the parameter list @var{in:0:30} means that the value
55 @var{in} is assigned to the option @option{type}, @var{0} to
56 @option{start_frame} and @var{30} to @option{nb_frames}.
60 If the option value itself is a list of items (e.g. the @code{format} filter
61 takes a list of pixel formats), the items in the list are usually separated by
64 The list of arguments can be quoted using the character "'" as initial
65 and ending mark, and the character '\' for escaping the characters
66 within the quoted text; otherwise the argument string is considered
67 terminated when the next special character (belonging to the set
68 "[]=;,") is encountered.
70 The name and arguments of the filter are optionally preceded and
71 followed by a list of link labels.
72 A link label allows to name a link and associate it to a filter output
73 or input pad. The preceding labels @var{in_link_1}
74 ... @var{in_link_N}, are associated to the filter input pads,
75 the following labels @var{out_link_1} ... @var{out_link_M}, are
76 associated to the output pads.
78 When two link labels with the same name are found in the
79 filtergraph, a link between the corresponding input and output pad is
82 If an output pad is not labelled, it is linked by default to the first
83 unlabelled input pad of the next filter in the filterchain.
84 For example in the filterchain
86 nullsrc, split[L1], [L2]overlay, nullsink
88 the split filter instance has two output pads, and the overlay filter
89 instance two input pads. The first output pad of split is labelled
90 "L1", the first input pad of overlay is labelled "L2", and the second
91 output pad of split is linked to the second input pad of overlay,
92 which are both unlabelled.
94 In a complete filterchain all the unlabelled filter input and output
95 pads must be connected. A filtergraph is considered valid if all the
96 filter input and output pads of all the filterchains are connected.
98 Libavfilter will automatically insert @ref{scale} filters where format
99 conversion is required. It is possible to specify swscale flags
100 for those automatically inserted scalers by prepending
101 @code{sws_flags=@var{flags};}
102 to the filtergraph description.
104 Here is a BNF description of the filtergraph syntax:
106 @var{NAME} ::= sequence of alphanumeric characters and '_'
107 @var{LINKLABEL} ::= "[" @var{NAME} "]"
108 @var{LINKLABELS} ::= @var{LINKLABEL} [@var{LINKLABELS}]
109 @var{FILTER_ARGUMENTS} ::= sequence of chars (possibly quoted)
110 @var{FILTER} ::= [@var{LINKLABELS}] @var{NAME} ["=" @var{FILTER_ARGUMENTS}] [@var{LINKLABELS}]
111 @var{FILTERCHAIN} ::= @var{FILTER} [,@var{FILTERCHAIN}]
112 @var{FILTERGRAPH} ::= [sws_flags=@var{flags};] @var{FILTERCHAIN} [;@var{FILTERGRAPH}]
115 @c man end FILTERGRAPH DESCRIPTION
117 @chapter Audio Filters
118 @c man begin AUDIO FILTERS
120 When you configure your Libav build, you can disable any of the
121 existing filters using --disable-filters.
122 The configure output will show the audio filters included in your
125 Below is a description of the currently available audio filters.
129 Convert the input audio to one of the specified formats. The framework will
130 negotiate the most appropriate format to minimize conversions.
132 It accepts the following parameters:
136 A '|'-separated list of requested sample formats.
139 A '|'-separated list of requested sample rates.
141 @item channel_layouts
142 A '|'-separated list of requested channel layouts.
146 If a parameter is omitted, all values are allowed.
148 Force the output to either unsigned 8-bit or signed 16-bit stereo
150 aformat=sample_fmts=u8|s16:channel_layouts=stereo
155 Mixes multiple audio inputs into a single output.
159 avconv -i INPUT1 -i INPUT2 -i INPUT3 -filter_complex amix=inputs=3:duration=first:dropout_transition=3 OUTPUT
161 will mix 3 input audio streams to a single output with the same duration as the
162 first input and a dropout transition time of 3 seconds.
164 It accepts the following parameters:
168 The number of inputs. If unspecified, it defaults to 2.
171 How to determine the end-of-stream.
175 The duration of the longest input. (default)
178 The duration of the shortest input.
181 The duration of the first input.
185 @item dropout_transition
186 The transition time, in seconds, for volume renormalization when an input
187 stream ends. The default value is 2 seconds.
193 Pass the audio source unchanged to the output.
197 Change the PTS (presentation timestamp) of the input audio frames.
199 It accepts the following parameters:
204 The expression which is evaluated for each frame to construct its timestamp.
208 The expression is evaluated through the eval API and can contain the following
213 frame rate, only defined for constant frame-rate video
216 the presentation timestamp in input
219 These are approximated values for the mathematical constants e
220 (Euler's number), pi (Greek pi), and phi (the golden ratio).
223 The number of audio samples passed through the filter so far, starting at 0.
226 The number of audio samples in the current frame.
229 The audio sample rate.
232 The PTS of the first frame.
235 The previous input PTS.
238 The previous output PTS.
241 The wallclock (RTC) time in microseconds.
244 The wallclock (RTC) time at the start of the movie in microseconds.
251 # Start counting PTS from zero
252 asetpts=expr=PTS-STARTPTS
254 # Generate timestamps by counting samples
257 # Generate timestamps from a "live source" and rebase onto the current timebase
258 asetpts='(RTCTIME - RTCSTART) / (TB * 1000000)"
263 Set the timebase to use for the output frames timestamps.
264 It is mainly useful for testing timebase configuration.
266 This filter accepts the following parameters:
271 The expression which is evaluated into the output timebase.
275 The expression can contain the constants @var{PI}, @var{E}, @var{PHI}, @var{AVTB} (the
276 default timebase), @var{intb} (the input timebase), and @var{sr} (the sample rate,
279 The default value for the input is @var{intb}.
284 # Set the timebase to 1/25:
287 # Set the timebase to 1/10:
290 # Set the timebase to 1001/1000:
293 # Set the timebase to 2*intb:
296 # Set the default timebase value:
299 # Set the timebase to twice the sample rate:
305 Show a line containing various information for each input audio frame.
306 The input audio is not modified.
308 The shown line contains a sequence of key/value pairs of the form
309 @var{key}:@var{value}.
311 It accepts the following parameters:
315 The (sequential) number of the input frame, starting from 0.
318 The presentation timestamp of the input frame, in time base units; the time base
319 depends on the filter input pad, and is usually 1/@var{sample_rate}.
322 The presentation timestamp of the input frame in seconds.
331 The sample rate for the audio frame.
334 The number of samples (per channel) in the frame.
337 The Adler-32 checksum (printed in hexadecimal) of the audio data. For planar
338 audio, the data is treated as if all the planes were concatenated.
340 @item plane_checksums
341 A list of Adler-32 checksums for each data plane.
346 Split input audio into several identical outputs.
348 It accepts a single parameter, which specifies the number of outputs. If
349 unspecified, it defaults to 2.
353 avconv -i INPUT -filter_complex asplit=5 OUTPUT
355 will create 5 copies of the input audio.
358 Synchronize audio data with timestamps by squeezing/stretching it and/or
359 dropping samples/adding silence when needed.
361 It accepts the following parameters:
365 Enable stretching/squeezing the data to make it match the timestamps. Disabled
366 by default. When disabled, time gaps are covered with silence.
369 The minimum difference between timestamps and audio data (in seconds) to trigger
370 adding/dropping samples. The default value is 0.1. If you get an imperfect
371 sync with this filter, try setting this parameter to 0.
374 The maximum compensation in samples per second. Only relevant with compensate=1.
375 The default value is 500.
378 Assume that the first PTS should be this value. The time base is 1 / sample
379 rate. This allows for padding/trimming at the start of the stream. By default,
380 no assumption is made about the first frame's expected PTS, so no padding or
381 trimming is done. For example, this could be set to 0 to pad the beginning with
382 silence if an audio stream starts after the video stream or to trim any samples
383 with a negative PTS due to encoder delay.
388 Trim the input so that the output contains one continuous subpart of the input.
390 It accepts the following parameters:
393 Timestamp (in seconds) of the start of the section to keep. I.e. the audio
394 sample with the timestamp @var{start} will be the first sample in the output.
397 Timestamp (in seconds) of the first audio sample that will be dropped. I.e. the
398 audio sample immediately preceding the one with the timestamp @var{end} will be
399 the last sample in the output.
402 Same as @var{start}, except this option sets the start timestamp in samples
406 Same as @var{end}, except this option sets the end timestamp in samples instead
410 The maximum duration of the output in seconds.
413 The number of the first sample that should be output.
416 The number of the first sample that should be dropped.
419 Note that the first two sets of the start/end options and the @option{duration}
420 option look at the frame timestamp, while the _sample options simply count the
421 samples that pass through the filter. So start/end_pts and start/end_sample will
422 give different results when the timestamps are wrong, inexact or do not start at
423 zero. Also note that this filter does not modify the timestamps. If you wish
424 to have the output timestamps start at zero, insert the asetpts filter after the
427 If multiple start or end options are set, this filter tries to be greedy and
428 keep all samples that match at least one of the specified constraints. To keep
429 only the part that matches all the constraints at once, chain multiple atrim
432 The defaults are such that all the input is kept. So it is possible to set e.g.
433 just the end values to keep everything before the specified time.
438 Drop everything except the second minute of input:
440 avconv -i INPUT -af atrim=60:120
444 Keep only the first 1000 samples:
446 avconv -i INPUT -af atrim=end_sample=1000
452 Bauer stereo to binaural transformation, which improves headphone listening of
453 stereo audio records.
455 It accepts the following parameters:
459 Pre-defined crossfeed level.
463 Default level (fcut=700, feed=50).
466 Chu Moy circuit (fcut=700, feed=60).
469 Jan Meier circuit (fcut=650, feed=95).
474 Cut frequency (in Hz).
481 @section channelsplit
482 Split each channel from an input audio stream into a separate output stream.
484 It accepts the following parameters:
487 The channel layout of the input stream. The default is "stereo".
490 For example, assuming a stereo input MP3 file,
492 avconv -i in.mp3 -filter_complex channelsplit out.mkv
494 will create an output Matroska file with two audio streams, one containing only
495 the left channel and the other the right channel.
497 Split a 5.1 WAV file into per-channel files:
499 avconv -i in.wav -filter_complex
500 'channelsplit=channel_layout=5.1[FL][FR][FC][LFE][SL][SR]'
501 -map '[FL]' front_left.wav -map '[FR]' front_right.wav
502 -map '[FC]' front_center.wav -map '[LFE]' low_frequency_effects.wav
503 -map '[SL]' side_left.wav -map '[SR]' side_right.wav
507 Remap input channels to new locations.
509 It accepts the following parameters:
512 The channel layout of the output stream.
515 Map channels from input to output. The argument is a '|'-separated list of
516 mappings, each in the @code{@var{in_channel}-@var{out_channel}} or
517 @var{in_channel} form. @var{in_channel} can be either the name of the input
518 channel (e.g. FL for front left) or its index in the input channel layout.
519 @var{out_channel} is the name of the output channel or its index in the output
520 channel layout. If @var{out_channel} is not given then it is implicitly an
521 index, starting with zero and increasing by one for each mapping.
524 If no mapping is present, the filter will implicitly map input channels to
525 output channels, preserving indices.
527 For example, assuming a 5.1+downmix input MOV file,
529 avconv -i in.mov -filter 'channelmap=map=DL-FL|DR-FR' out.wav
531 will create an output WAV file tagged as stereo from the downmix channels of
534 To fix a 5.1 WAV improperly encoded in AAC's native channel order
536 avconv -i in.wav -filter 'channelmap=1|2|0|5|3|4:5.1' out.wav
540 Compress or expand the audio's dynamic range.
542 It accepts the following parameters:
548 A list of times in seconds for each channel over which the instantaneous level
549 of the input signal is averaged to determine its volume. @var{attacks} refers to
550 increase of volume and @var{decays} refers to decrease of volume. For most
551 situations, the attack time (response to the audio getting louder) should be
552 shorter than the decay time, because the human ear is more sensitive to sudden
553 loud audio than sudden soft audio. A typical value for attack is 0.3 seconds and
554 a typical value for decay is 0.8 seconds.
557 A list of points for the transfer function, specified in dB relative to the
558 maximum possible signal amplitude. Each key points list must be defined using
559 the following syntax: @code{x0/y0|x1/y1|x2/y2|....}
561 The input values must be in strictly increasing order but the transfer function
562 does not have to be monotonically rising. The point @code{0/0} is assumed but
563 may be overridden (by @code{0/out-dBn}). Typical values for the transfer
564 function are @code{-70/-70|-60/-20}.
567 Set the curve radius in dB for all joints. It defaults to 0.01.
570 Set the additional gain in dB to be applied at all points on the transfer
571 function. This allows for easy adjustment of the overall gain.
575 Set an initial volume, in dB, to be assumed for each channel when filtering
576 starts. This permits the user to supply a nominal level initially, so that, for
577 example, a very large gain is not applied to initial signal levels before the
578 companding has begun to operate. A typical value for audio which is initially
579 quiet is -90 dB. It defaults to 0.
582 Set a delay, in seconds. The input audio is analyzed immediately, but audio is
583 delayed before being fed to the volume adjuster. Specifying a delay
584 approximately equal to the attack/decay times allows the filter to effectively
585 operate in predictive rather than reactive mode. It defaults to 0.
593 Make music with both quiet and loud passages suitable for listening to in a
596 compand=.3|.3:1|1:-90/-60|-60/-40|-40/-30|-20/-20:6:0:-90:0.2
600 A noise gate for when the noise is at a lower level than the signal:
602 compand=.1|.1:.2|.2:-900/-900|-50.1/-900|-50/-50:.01:0:-90:.1
606 Here is another noise gate, this time for when the noise is at a higher level
607 than the signal (making it, in some ways, similar to squelch):
609 compand=.1|.1:.1|.1:-45.1/-45.1|-45/-900|0/-900:.01:45:-90:.1
614 Join multiple input streams into one multi-channel stream.
616 It accepts the following parameters:
620 The number of input streams. It defaults to 2.
623 The desired output channel layout. It defaults to stereo.
626 Map channels from inputs to output. The argument is a '|'-separated list of
627 mappings, each in the @code{@var{input_idx}.@var{in_channel}-@var{out_channel}}
628 form. @var{input_idx} is the 0-based index of the input stream. @var{in_channel}
629 can be either the name of the input channel (e.g. FL for front left) or its
630 index in the specified input stream. @var{out_channel} is the name of the output
634 The filter will attempt to guess the mappings when they are not specified
635 explicitly. It does so by first trying to find an unused matching input channel
636 and if that fails it picks the first unused input channel.
638 Join 3 inputs (with properly set channel layouts):
640 avconv -i INPUT1 -i INPUT2 -i INPUT3 -filter_complex join=inputs=3 OUTPUT
643 Build a 5.1 output from 6 single-channel streams:
645 avconv -i fl -i fr -i fc -i sl -i sr -i lfe -filter_complex
646 'join=inputs=6:channel_layout=5.1:map=0.0-FL|1.0-FR|2.0-FC|3.0-SL|4.0-SR|5.0-LFE'
652 Decodes High Definition Compatible Digital (HDCD) data. A 16-bit PCM stream with
653 embedded HDCD codes is expanded into a 20-bit PCM stream.
655 The filter supports the Peak Extend and Low-level Gain Adjustment features
656 of HDCD, and detects the Transient Filter flag.
659 avconv -i HDCD16.flac -af hdcd OUT24.flac
662 When using the filter with WAV, note that the default encoding for WAV is 16-bit,
663 so the resulting 20-bit stream will be truncated back to 16-bit. Use something
664 like @command{-c:a pcm_s24le} after the filter to get 24-bit PCM output.
666 avconv -i HDCD16.wav -af hdcd OUT16.wav
667 avconv -i HDCD16.wav -af hdcd -c:a pcm_s24le OUT24.wav
670 The filter accepts the following options:
674 Replace audio with a solid tone and adjust the amplitude to signal some
675 specific aspect of the decoding process. The output file can be loaded in
676 an audio editor alongside the original to aid analysis.
683 Gain adjustment level at each sample
685 Samples where peak extend occurs
687 Samples where the code detect timer is active
689 Samples where the target gain does not match between channels
691 Any samples above peak extend level
693 Gain adjustment level at each sample, in each channel
698 Convert the audio sample format, sample rate and channel layout. It is
699 not meant to be used directly; it is inserted automatically by libavfilter
700 whenever conversion is needed. Use the @var{aformat} filter to force a specific
705 Adjust the input audio volume.
707 It accepts the following parameters:
711 This expresses how the audio volume will be increased or decreased.
713 Output values are clipped to the maximum value.
715 The output audio volume is given by the relation:
717 @var{output_volume} = @var{volume} * @var{input_volume}
720 The default value for @var{volume} is 1.0.
723 This parameter represents the mathematical precision.
725 It determines which input sample formats will be allowed, which affects the
726 precision of the volume scaling.
730 8-bit fixed-point; this limits input sample format to U8, S16, and S32.
732 32-bit floating-point; this limits input sample format to FLT. (default)
734 64-bit floating-point; this limits input sample format to DBL.
738 Choose the behaviour on encountering ReplayGain side data in input frames.
742 Remove ReplayGain side data, ignoring its contents (the default).
745 Ignore ReplayGain side data, but leave it in the frame.
748 Prefer the track gain, if present.
751 Prefer the album gain, if present.
754 @item replaygain_preamp
755 Pre-amplification gain in dB to apply to the selected replaygain gain.
757 Default value for @var{replaygain_preamp} is 0.0.
759 @item replaygain_noclip
760 Prevent clipping by limiting the gain applied.
762 Default value for @var{replaygain_noclip} is 1.
770 Halve the input audio volume:
774 volume=volume=-6.0206dB
778 Increase input audio power by 6 decibels using fixed-point precision:
780 volume=volume=6dB:precision=fixed
784 @c man end AUDIO FILTERS
786 @chapter Audio Sources
787 @c man begin AUDIO SOURCES
789 Below is a description of the currently available audio sources.
793 The null audio source; it never returns audio frames. It is mainly useful as a
794 template and for use in analysis / debugging tools.
796 It accepts, as an optional parameter, a string of the form
797 @var{sample_rate}:@var{channel_layout}.
799 @var{sample_rate} specifies the sample rate, and defaults to 44100.
801 @var{channel_layout} specifies the channel layout, and can be either an
802 integer or a string representing a channel layout. The default value
803 of @var{channel_layout} is 3, which corresponds to CH_LAYOUT_STEREO.
805 Check the channel_layout_map definition in
806 @file{libavutil/channel_layout.c} for the mapping between strings and
807 channel layout values.
811 # Set the sample rate to 48000 Hz and the channel layout to CH_LAYOUT_MONO
819 Buffer audio frames, and make them available to the filter chain.
821 This source is not intended to be part of user-supplied graph descriptions; it
822 is for insertion by calling programs, through the interface defined in
823 @file{libavfilter/buffersrc.h}.
825 It accepts the following parameters:
829 The timebase which will be used for timestamps of submitted frames. It must be
830 either a floating-point number or in @var{numerator}/@var{denominator} form.
833 The audio sample rate.
836 The name of the sample format, as returned by @code{av_get_sample_fmt_name()}.
839 The channel layout of the audio data, in the form that can be accepted by
840 @code{av_get_channel_layout()}.
843 All the parameters need to be explicitly defined.
845 @c man end AUDIO SOURCES
848 @c man begin AUDIO SINKS
850 Below is a description of the currently available audio sinks.
854 Null audio sink; do absolutely nothing with the input audio. It is
855 mainly useful as a template and for use in analysis / debugging
859 This sink is intended for programmatic use. Frames that arrive on this sink can
860 be retrieved by the calling program, using the interface defined in
861 @file{libavfilter/buffersink.h}.
863 It does not accept any parameters.
865 @c man end AUDIO SINKS
867 @chapter Video Filters
868 @c man begin VIDEO FILTERS
870 When you configure your Libav build, you can disable any of the
871 existing filters using --disable-filters.
872 The configure output will show the video filters included in your
875 Below is a description of the currently available video filters.
879 Detect frames that are (almost) completely black. Can be useful to
880 detect chapter transitions or commercials. Output lines consist of
881 the frame number of the detected frame, the percentage of blackness,
882 the position in the file if known or -1 and the timestamp in seconds.
884 In order to display the output lines, you need to set the loglevel at
885 least to the AV_LOG_INFO value.
887 It accepts the following parameters:
892 The percentage of the pixels that have to be below the threshold; it defaults to
896 The threshold below which a pixel value is considered black; it defaults to 32.
902 Apply a boxblur algorithm to the input video.
904 It accepts the following parameters:
917 The chroma and alpha parameters are optional. If not specified, they default
918 to the corresponding values set for @var{luma_radius} and
921 @var{luma_radius}, @var{chroma_radius}, and @var{alpha_radius} represent
922 the radius in pixels of the box used for blurring the corresponding
923 input plane. They are expressions, and can contain the following
927 The input width and height in pixels.
930 The input chroma image width and height in pixels.
933 The horizontal and vertical chroma subsample values. For example, for the
934 pixel format "yuv422p", @var{hsub} is 2 and @var{vsub} is 1.
937 The radius must be a non-negative number, and must not be greater than
938 the value of the expression @code{min(w,h)/2} for the luma and alpha planes,
939 and of @code{min(cw,ch)/2} for the chroma planes.
941 @var{luma_power}, @var{chroma_power}, and @var{alpha_power} represent
942 how many times the boxblur filter is applied to the corresponding
950 Apply a boxblur filter with the luma, chroma, and alpha radii
953 boxblur=luma_radius=2:luma_power=1
957 Set the luma radius to 2, and alpha and chroma radius to 0:
963 Set the luma and chroma radii to a fraction of the video dimension:
965 boxblur=luma_radius=min(h\,w)/10:luma_power=1:chroma_radius=min(cw\,ch)/10:chroma_power=1
972 Copy the input source unchanged to the output. This is mainly useful for
977 Crop the input video to given dimensions.
979 It accepts the following parameters:
984 The width of the output video.
987 The height of the output video.
990 The horizontal position, in the input video, of the left edge of the output
994 The vertical position, in the input video, of the top edge of the output video.
998 The parameters are expressions containing the following constants:
1002 These are approximated values for the mathematical constants e
1003 (Euler's number), pi (Greek pi), and phi (the golden ratio).
1006 The computed values for @var{x} and @var{y}. They are evaluated for
1010 The input width and height.
1013 These are the same as @var{in_w} and @var{in_h}.
1016 The output (cropped) width and height.
1019 These are the same as @var{out_w} and @var{out_h}.
1022 The number of the input frame, starting from 0.
1025 The timestamp expressed in seconds. It's NAN if the input timestamp is unknown.
1029 The @var{out_w} and @var{out_h} parameters specify the expressions for
1030 the width and height of the output (cropped) video. They are only
1031 evaluated during the configuration of the filter.
1033 The default value of @var{out_w} is "in_w", and the default value of
1034 @var{out_h} is "in_h".
1036 The expression for @var{out_w} may depend on the value of @var{out_h},
1037 and the expression for @var{out_h} may depend on @var{out_w}, but they
1038 cannot depend on @var{x} and @var{y}, as @var{x} and @var{y} are
1039 evaluated after @var{out_w} and @var{out_h}.
1041 The @var{x} and @var{y} parameters specify the expressions for the
1042 position of the top-left corner of the output (non-cropped) area. They
1043 are evaluated for each frame. If the evaluated value is not valid, it
1044 is approximated to the nearest valid value.
1046 The default value of @var{x} is "(in_w-out_w)/2", and the default
1047 value for @var{y} is "(in_h-out_h)/2", which set the cropped area at
1048 the center of the input image.
1050 The expression for @var{x} may depend on @var{y}, and the expression
1051 for @var{y} may depend on @var{x}.
1055 # Crop the central input area with size 100x100
1056 crop=out_w=100:out_h=100
1058 # Crop the central input area with size 2/3 of the input video
1059 "crop=out_w=2/3*in_w:out_h=2/3*in_h"
1061 # Crop the input video central square
1064 # Delimit the rectangle with the top-left corner placed at position
1065 # 100:100 and the right-bottom corner corresponding to the right-bottom
1066 # corner of the input image
1067 crop=out_w=in_w-100:out_h=in_h-100:x=100:y=100
1069 # Crop 10 pixels from the left and right borders, and 20 pixels from
1070 # the top and bottom borders
1071 "crop=out_w=in_w-2*10:out_h=in_h-2*20"
1073 # Keep only the bottom right quarter of the input image
1074 "crop=out_w=in_w/2:out_h=in_h/2:x=in_w/2:y=in_h/2"
1076 # Crop height for getting Greek harmony
1077 "crop=out_w=in_w:out_h=1/PHI*in_w"
1080 "crop=in_w/2:in_h/2:(in_w-out_w)/2+((in_w-out_w)/2)*sin(n/10):(in_h-out_h)/2 +((in_h-out_h)/2)*sin(n/7)"
1082 # Erratic camera effect depending on timestamp
1083 "crop=out_w=in_w/2:out_h=in_h/2:x=(in_w-out_w)/2+((in_w-out_w)/2)*sin(t*10):y=(in_h-out_h)/2 +((in_h-out_h)/2)*sin(t*13)"
1085 # Set x depending on the value of y
1086 "crop=in_w/2:in_h/2:y:10+10*sin(n/10)"
1091 Auto-detect the crop size.
1093 It calculates the necessary cropping parameters and prints the
1094 recommended parameters via the logging system. The detected dimensions
1095 correspond to the non-black area of the input video.
1097 It accepts the following parameters:
1102 The threshold, an optional parameter between nothing (0) and
1103 everything (255). It defaults to 24.
1106 The value which the width/height should be divisible by. It defaults to
1107 16. The offset is automatically adjusted to center the video. Use 2 to
1108 get only even dimensions (needed for 4:2:2 video). 16 is best when
1109 encoding to most video codecs.
1112 A counter that determines how many frames cropdetect will reset
1113 the previously detected largest video area after. It will then start over
1114 and detect the current optimal crop area. It defaults to 0.
1116 This can be useful when channel logos distort the video area. 0
1117 indicates 'never reset', and returns the largest area encountered during
1123 Suppress a TV station logo by a simple interpolation of the surrounding
1124 pixels. Just set a rectangle covering the logo and watch it disappear
1125 (and sometimes something even uglier appear - your mileage may vary).
1127 It accepts the following parameters:
1131 Specify the top left corner coordinates of the logo. They must be
1135 Specify the width and height of the logo to clear. They must be
1139 Specify the thickness of the fuzzy edge of the rectangle (added to
1140 @var{w} and @var{h}). The default value is 4.
1143 When set to 1, a green rectangle is drawn on the screen to simplify
1144 finding the right @var{x}, @var{y}, @var{w}, @var{h} parameters, and
1145 @var{band} is set to 4. The default value is 0.
1154 Set a rectangle covering the area with top left corner coordinates 0,0
1155 and size 100x77, and a band of size 10:
1157 delogo=x=0:y=0:w=100:h=77:band=10
1164 Draw a colored box on the input image.
1166 It accepts the following parameters:
1171 Specify the top left corner coordinates of the box. It defaults to 0.
1174 Specify the width and height of the box; if 0 they are interpreted as
1175 the input width and height. It defaults to 0.
1178 Specify the color of the box to write. It can be the name of a color
1179 (case insensitive match) or a 0xRRGGBB[AA] sequence.
1184 # Draw a black box around the edge of the input image
1187 # Draw a box with color red and an opacity of 50%
1188 drawbox=x=10:y=20:width=200:height=60:color=red@@0.5"
1193 Draw a text string or text from a specified file on top of a video, using the
1194 libfreetype library.
1196 To enable compilation of this filter, you need to configure Libav with
1197 @code{--enable-libfreetype}.
1198 To enable default font fallback and the @var{font} option you need to
1199 configure Libav with @code{--enable-libfontconfig}.
1201 The filter also recognizes strftime() sequences in the provided text
1202 and expands them accordingly. Check the documentation of strftime().
1204 It accepts the following parameters:
1209 The font family to be used for drawing text. By default Sans.
1212 The font file to be used for drawing text. The path must be included.
1213 This parameter is mandatory if the fontconfig support is disabled.
1216 The text string to be drawn. The text must be a sequence of UTF-8
1218 This parameter is mandatory if no file is specified with the parameter
1222 A text file containing text to be drawn. The text must be a sequence
1223 of UTF-8 encoded characters.
1225 This parameter is mandatory if no text string is specified with the
1226 parameter @var{text}.
1228 If both text and textfile are specified, an error is thrown.
1231 The offsets where text will be drawn within the video frame.
1232 It is relative to the top/left border of the output image.
1233 They accept expressions similar to the @ref{overlay} filter:
1237 The computed values for @var{x} and @var{y}. They are evaluated for
1240 @item main_w, main_h
1241 The main input width and height.
1244 These are the same as @var{main_w} and @var{main_h}.
1246 @item text_w, text_h
1247 The rendered text's width and height.
1250 These are the same as @var{text_w} and @var{text_h}.
1253 The number of frames processed, starting from 0.
1256 The timestamp, expressed in seconds. It's NAN if the input timestamp is unknown.
1260 The default value of @var{x} and @var{y} is 0.
1263 Draw the text only if the expression evaluates as non-zero.
1264 The expression accepts the same variables @var{x, y} do.
1265 The default value is 1.
1268 Draw the text applying alpha blending. The value can
1269 be either a number between 0.0 and 1.0
1270 The expression accepts the same variables @var{x, y} do.
1271 The default value is 1.
1274 The font size to be used for drawing text.
1275 The default value of @var{fontsize} is 16.
1278 The color to be used for drawing fonts.
1279 It is either a string (e.g. "red"), or in 0xRRGGBB[AA] format
1280 (e.g. "0xff000033"), possibly followed by an alpha specifier.
1281 The default value of @var{fontcolor} is "black".
1284 The color to be used for drawing box around text.
1285 It is either a string (e.g. "yellow") or in 0xRRGGBB[AA] format
1286 (e.g. "0xff00ff"), possibly followed by an alpha specifier.
1287 The default value of @var{boxcolor} is "white".
1290 Used to draw a box around text using the background color.
1291 The value must be either 1 (enable) or 0 (disable).
1292 The default value of @var{box} is 0.
1294 @item shadowx, shadowy
1295 The x and y offsets for the text shadow position with respect to the
1296 position of the text. They can be either positive or negative
1297 values. The default value for both is "0".
1300 The color to be used for drawing a shadow behind the drawn text. It
1301 can be a color name (e.g. "yellow") or a string in the 0xRRGGBB[AA]
1302 form (e.g. "0xff00ff"), possibly followed by an alpha specifier.
1303 The default value of @var{shadowcolor} is "black".
1306 The flags to be used for loading the fonts.
1308 The flags map the corresponding flags supported by libfreetype, and are
1309 a combination of the following values:
1316 @item vertical_layout
1317 @item force_autohint
1320 @item ignore_global_advance_width
1322 @item ignore_transform
1329 Default value is "render".
1331 For more information consult the documentation for the FT_LOAD_*
1335 The size in number of spaces to use for rendering the tab.
1339 If true, check and fix text coords to avoid clipping.
1342 For example the command:
1344 drawtext="fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf: text='Test Text'"
1347 will draw "Test Text" with font FreeSerif, using the default values
1348 for the optional parameters.
1352 drawtext="fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf: text='Test Text':\
1353 x=100: y=50: fontsize=24: fontcolor=yellow@@0.2: box=1: boxcolor=red@@0.2"
1356 will draw 'Test Text' with font FreeSerif of size 24 at position x=100
1357 and y=50 (counting from the top-left corner of the screen), text is
1358 yellow with a red box around it. Both the text and the box have an
1361 Note that the double quotes are not necessary if spaces are not used
1362 within the parameter list.
1364 For more information about libfreetype, check:
1365 @url{http://www.freetype.org/}.
1369 Apply a fade-in/out effect to the input video.
1371 It accepts the following parameters:
1376 The effect type can be either "in" for a fade-in, or "out" for a fade-out
1380 The number of the frame to start applying the fade effect at.
1383 The number of frames that the fade effect lasts. At the end of the
1384 fade-in effect, the output video will have the same intensity as the input video.
1385 At the end of the fade-out transition, the output video will be completely black.
1391 # Fade in the first 30 frames of video
1392 fade=type=in:nb_frames=30
1394 # Fade out the last 45 frames of a 200-frame video
1395 fade=type=out:start_frame=155:nb_frames=45
1397 # Fade in the first 25 frames and fade out the last 25 frames of a 1000-frame video
1398 fade=type=in:start_frame=0:nb_frames=25, fade=type=out:start_frame=975:nb_frames=25
1400 # Make the first 5 frames black, then fade in from frame 5-24
1401 fade=type=in:start_frame=5:nb_frames=20
1406 Transform the field order of the input video.
1408 It accepts the following parameters:
1413 The output field order. Valid values are @var{tff} for top field first or @var{bff}
1414 for bottom field first.
1417 The default value is "tff".
1419 The transformation is done by shifting the picture content up or down
1420 by one line, and filling the remaining line with appropriate picture content.
1421 This method is consistent with most broadcast field order converters.
1423 If the input video is not flagged as being interlaced, or it is already
1424 flagged as being of the required output field order, then this filter does
1425 not alter the incoming video.
1427 It is very useful when converting to or from PAL DV material,
1428 which is bottom field first.
1432 ./avconv -i in.vob -vf "fieldorder=order=bff" out.dv
1437 Buffer input images and send them when they are requested.
1439 It is mainly useful when auto-inserted by the libavfilter
1442 It does not take parameters.
1446 Convert the input video to one of the specified pixel formats.
1447 Libavfilter will try to pick one that is suitable as input to
1450 It accepts the following parameters:
1454 A '|'-separated list of pixel format names, such as
1455 "pix_fmts=yuv420p|monow|rgb24".
1461 # Convert the input video to the "yuv420p" format
1462 format=pix_fmts=yuv420p
1464 # Convert the input video to any of the formats in the list
1465 format=pix_fmts=yuv420p|yuv444p|yuv410p
1471 Convert the video to specified constant framerate by duplicating or dropping
1472 frames as necessary.
1474 It accepts the following parameters:
1478 The desired output framerate.
1481 Assume the first PTS should be the given value, in seconds. This allows for
1482 padding/trimming at the start of stream. By default, no assumption is made
1483 about the first frame's expected PTS, so no padding or trimming is done.
1484 For example, this could be set to 0 to pad the beginning with duplicates of
1485 the first frame if a video stream starts after the audio stream or to trim any
1486 frames with a negative PTS.
1492 Pack two different video streams into a stereoscopic video, setting proper
1493 metadata on supported codecs. The two views should have the same size and
1494 framerate and processing will stop when the shorter video ends. Please note
1495 that you may conveniently adjust view properties with the @ref{scale} and
1498 It accepts the following parameters:
1502 The desired packing format. Supported values are:
1507 The views are next to each other (default).
1510 The views are on top of each other.
1513 The views are packed by line.
1516 The views are packed by column.
1519 The views are temporally interleaved.
1528 # Convert left and right views into a frame-sequential video
1529 avconv -i LEFT -i RIGHT -filter_complex framepack=frameseq OUTPUT
1531 # Convert views into a side-by-side video with the same output resolution as the input
1532 avconv -i LEFT -i RIGHT -filter_complex [0:v]scale=w=iw/2[left],[1:v]scale=w=iw/2[right],[left][right]framepack=sbs OUTPUT
1538 Apply a frei0r effect to the input video.
1540 To enable the compilation of this filter, you need to install the frei0r
1541 header and configure Libav with --enable-frei0r.
1543 It accepts the following parameters:
1548 The name of the frei0r effect to load. If the environment variable
1549 @env{FREI0R_PATH} is defined, the frei0r effect is searched for in each of the
1550 directories specified by the colon-separated list in @env{FREIOR_PATH}.
1551 Otherwise, the standard frei0r paths are searched, in this order:
1552 @file{HOME/.frei0r-1/lib/}, @file{/usr/local/lib/frei0r-1/},
1553 @file{/usr/lib/frei0r-1/}.
1556 A '|'-separated list of parameters to pass to the frei0r effect.
1560 A frei0r effect parameter can be a boolean (its value is either
1561 "y" or "n"), a double, a color (specified as
1562 @var{R}/@var{G}/@var{B}, where @var{R}, @var{G}, and @var{B} are floating point
1563 numbers between 0.0 and 1.0, inclusive) or by an @code{av_parse_color()} color
1564 description), a position (specified as @var{X}/@var{Y}, where
1565 @var{X} and @var{Y} are floating point numbers) and/or a string.
1567 The number and types of parameters depend on the loaded effect. If an
1568 effect parameter is not specified, the default value is set.
1572 # Apply the distort0r effect, setting the first two double parameters
1573 frei0r=filter_name=distort0r:filter_params=0.5|0.01
1575 # Apply the colordistance effect, taking a color as the first parameter
1576 frei0r=colordistance:0.2/0.3/0.4
1577 frei0r=colordistance:violet
1578 frei0r=colordistance:0x112233
1580 # Apply the perspective effect, specifying the top left and top right
1582 frei0r=perspective:0.2/0.2|0.8/0.2
1585 For more information, see
1586 @url{http://piksel.org/frei0r}
1590 Fix the banding artifacts that are sometimes introduced into nearly flat
1591 regions by truncation to 8-bit colordepth.
1592 Interpolate the gradients that should go where the bands are, and
1595 It is designed for playback only. Do not use it prior to
1596 lossy compression, because compression tends to lose the dither and
1597 bring back the bands.
1599 It accepts the following parameters:
1604 The maximum amount by which the filter will change any one pixel. This is also
1605 the threshold for detecting nearly flat regions. Acceptable values range from
1606 .51 to 64; the default value is 1.2. Out-of-range values will be clipped to the
1610 The neighborhood to fit the gradient to. A larger radius makes for smoother
1611 gradients, but also prevents the filter from modifying the pixels near detailed
1612 regions. Acceptable values are 8-32; the default value is 16. Out-of-range
1613 values will be clipped to the valid range.
1618 # Default parameters
1619 gradfun=strength=1.2:radius=16
1621 # Omitting the radius
1627 Flip the input video horizontally.
1629 For example, to horizontally flip the input video with @command{avconv}:
1631 avconv -i in.avi -vf "hflip" out.avi
1636 This is a high precision/quality 3d denoise filter. It aims to reduce
1637 image noise, producing smooth images and making still images really
1638 still. It should enhance compressibility.
1640 It accepts the following optional parameters:
1644 A non-negative floating point number which specifies spatial luma strength.
1647 @item chroma_spatial
1648 A non-negative floating point number which specifies spatial chroma strength.
1649 It defaults to 3.0*@var{luma_spatial}/4.0.
1652 A floating point number which specifies luma temporal strength. It defaults to
1653 6.0*@var{luma_spatial}/4.0.
1656 A floating point number which specifies chroma temporal strength. It defaults to
1657 @var{luma_tmp}*@var{chroma_spatial}/@var{luma_spatial}.
1662 Download hardware frames to system memory.
1664 The input must be in hardware frames, and the output a non-hardware format.
1665 Not all formats will be supported on the output - it may be necessary to insert
1666 an additional @option{format} filter immediately following in the graph to get
1667 the output in a supported format.
1671 Map hardware frames to system memory or to another device.
1673 This filter has several different modes of operation; which one is used depends
1674 on the input and output formats:
1677 Hardware frame input, normal frame output
1679 Map the input frames to system memory and pass them to the output. If the
1680 original hardware frame is later required (for example, after overlaying
1681 something else on part of it), the @option{hwmap} filter can be used again
1682 in the next mode to retrieve it.
1684 Normal frame input, hardware frame output
1686 If the input is actually a software-mapped hardware frame, then unmap it -
1687 that is, return the original hardware frame.
1689 Otherwise, a device must be provided. Create new hardware surfaces on that
1690 device for the output, then map them back to the software format at the input
1691 and give those frames to the preceding filter. This will then act like the
1692 @option{hwupload} filter, but may be able to avoid an additional copy when
1693 the input is already in a compatible format.
1695 Hardware frame input and output
1697 A device must be supplied for the output, either directly or with the
1698 @option{derive_device} option. The input and output devices must be of
1699 different types and compatible - the exact meaning of this is
1700 system-dependent, but typically it means that they must refer to the same
1701 underlying hardware context (for example, refer to the same graphics card).
1703 If the input frames were originally created on the output device, then unmap
1704 to retrieve the original frames.
1706 Otherwise, map the frames to the output device - create new hardware frames
1707 on the output corresponding to the frames on the input.
1710 The following additional parameters are accepted:
1714 Set the frame mapping mode. Some combination of:
1717 The mapped frame should be readable.
1719 The mapped frame should be writeable.
1721 The mapping will always overwrite the entire frame.
1723 This may improve performance in some cases, as the original contents of the
1724 frame need not be loaded.
1726 The mapping must not involve any copying.
1728 Indirect mappings to copies of frames are created in some cases where either
1729 direct mapping is not possible or it would have unexpected properties.
1730 Setting this flag ensures that the mapping is direct and will fail if that is
1733 Defaults to @var{read+write} if not specified.
1735 @item derive_device @var{type}
1736 Rather than using the device supplied at initialisation, instead derive a new
1737 device of type @var{type} from the device the input frames exist on.
1740 In a hardware to hardware mapping, map in reverse - create frames in the sink
1741 and map them back to the source. This may be necessary in some cases where
1742 a mapping in one direction is required but only the opposite direction is
1743 supported by the devices being used.
1745 This option is dangerous - it may break the preceding filter in undefined
1746 ways if there are any additional constraints on that filter's output.
1747 Do not use it without fully understanding the implications of its use.
1752 Upload system memory frames to hardware surfaces.
1754 The device to upload to must be supplied when the filter is initialised. If
1755 using avconv, select the appropriate device with the @option{-filter_hw_device}
1758 @section hwupload_cuda
1760 Upload system memory frames to a CUDA device.
1762 It accepts the following optional parameters:
1766 The number of the CUDA device to use
1771 Simple interlacing filter from progressive contents. This interleaves upper (or
1772 lower) lines from odd frames with lower (or upper) lines from even frames,
1773 halving the frame rate and preserving image height.
1776 Original Original New Frame
1777 Frame 'j' Frame 'j+1' (tff)
1778 ========== =========== ==================
1779 Line 0 --------------------> Frame 'j' Line 0
1780 Line 1 Line 1 ----> Frame 'j+1' Line 1
1781 Line 2 ---------------------> Frame 'j' Line 2
1782 Line 3 Line 3 ----> Frame 'j+1' Line 3
1784 New Frame + 1 will be generated by Frame 'j+2' and Frame 'j+3' and so on
1787 It accepts the following optional parameters:
1791 This determines whether the interlaced frame is taken from the even
1792 (tff - default) or odd (bff) lines of the progressive frame.
1795 Enable (default) or disable the vertical lowpass filter to avoid twitter
1796 interlacing and reduce moire patterns.
1799 @section lut, lutrgb, lutyuv
1801 Compute a look-up table for binding each pixel component input value
1802 to an output value, and apply it to the input video.
1804 @var{lutyuv} applies a lookup table to a YUV input video, @var{lutrgb}
1805 to an RGB input video.
1807 These filters accept the following parameters:
1809 @item @var{c0} (first pixel component)
1810 @item @var{c1} (second pixel component)
1811 @item @var{c2} (third pixel component)
1812 @item @var{c3} (fourth pixel component, corresponds to the alpha component)
1814 @item @var{r} (red component)
1815 @item @var{g} (green component)
1816 @item @var{b} (blue component)
1817 @item @var{a} (alpha component)
1819 @item @var{y} (Y/luminance component)
1820 @item @var{u} (U/Cb component)
1821 @item @var{v} (V/Cr component)
1824 Each of them specifies the expression to use for computing the lookup table for
1825 the corresponding pixel component values.
1827 The exact component associated to each of the @var{c*} options depends on the
1830 The @var{lut} filter requires either YUV or RGB pixel formats in input,
1831 @var{lutrgb} requires RGB pixel formats in input, and @var{lutyuv} requires YUV.
1833 The expressions can contain the following constants and functions:
1837 These are approximated values for the mathematical constants e
1838 (Euler's number), pi (Greek pi), and phi (the golden ratio).
1841 The input width and height.
1844 The input value for the pixel component.
1847 The input value, clipped to the @var{minval}-@var{maxval} range.
1850 The maximum value for the pixel component.
1853 The minimum value for the pixel component.
1856 The negated value for the pixel component value, clipped to the
1857 @var{minval}-@var{maxval} range; it corresponds to the expression
1858 "maxval-clipval+minval".
1861 The computed value in @var{val}, clipped to the
1862 @var{minval}-@var{maxval} range.
1864 @item gammaval(gamma)
1865 The computed gamma correction value of the pixel component value,
1866 clipped to the @var{minval}-@var{maxval} range. It corresponds to the
1868 "pow((clipval-minval)/(maxval-minval)\,@var{gamma})*(maxval-minval)+minval"
1872 All expressions default to "val".
1876 # Negate input video
1877 lutrgb="r=maxval+minval-val:g=maxval+minval-val:b=maxval+minval-val"
1878 lutyuv="y=maxval+minval-val:u=maxval+minval-val:v=maxval+minval-val"
1880 # The above is the same as
1881 lutrgb="r=negval:g=negval:b=negval"
1882 lutyuv="y=negval:u=negval:v=negval"
1887 # Remove chroma components, turning the video into a graytone image
1888 lutyuv="u=128:v=128"
1890 # Apply a luma burning effect
1893 # Remove green and blue components
1896 # Set a constant alpha channel value on input
1897 format=rgba,lutrgb=a="maxval-minval/2"
1899 # Correct luminance gamma by a factor of 0.5
1900 lutyuv=y=gammaval(0.5)
1907 It accepts an integer in input; if non-zero it negates the
1908 alpha component (if available). The default value in input is 0.
1912 Force libavfilter not to use any of the specified pixel formats for the
1913 input to the next filter.
1915 It accepts the following parameters:
1919 A '|'-separated list of pixel format names, such as
1920 apix_fmts=yuv420p|monow|rgb24".
1926 # Force libavfilter to use a format different from "yuv420p" for the
1927 # input to the vflip filter
1928 noformat=pix_fmts=yuv420p,vflip
1930 # Convert the input video to any of the formats not contained in the list
1931 noformat=yuv420p|yuv444p|yuv410p
1936 Pass the video source unchanged to the output.
1940 Apply a video transform using libopencv.
1942 To enable this filter, install the libopencv library and headers and
1943 configure Libav with --enable-libopencv.
1945 It accepts the following parameters:
1950 The name of the libopencv filter to apply.
1953 The parameters to pass to the libopencv filter. If not specified, the default
1958 Refer to the official libopencv documentation for more precise
1960 @url{http://opencv.willowgarage.com/documentation/c/image_filtering.html}
1962 Several libopencv filters are supported; see the following subsections.
1967 Dilate an image by using a specific structuring element.
1968 It corresponds to the libopencv function @code{cvDilate}.
1970 It accepts the parameters: @var{struct_el}|@var{nb_iterations}.
1972 @var{struct_el} represents a structuring element, and has the syntax:
1973 @var{cols}x@var{rows}+@var{anchor_x}x@var{anchor_y}/@var{shape}
1975 @var{cols} and @var{rows} represent the number of columns and rows of
1976 the structuring element, @var{anchor_x} and @var{anchor_y} the anchor
1977 point, and @var{shape} the shape for the structuring element. @var{shape}
1978 must be "rect", "cross", "ellipse", or "custom".
1980 If the value for @var{shape} is "custom", it must be followed by a
1981 string of the form "=@var{filename}". The file with name
1982 @var{filename} is assumed to represent a binary image, with each
1983 printable character corresponding to a bright pixel. When a custom
1984 @var{shape} is used, @var{cols} and @var{rows} are ignored, the number
1985 or columns and rows of the read file are assumed instead.
1987 The default value for @var{struct_el} is "3x3+0x0/rect".
1989 @var{nb_iterations} specifies the number of times the transform is
1990 applied to the image, and defaults to 1.
1994 # Use the default values
1997 # Dilate using a structuring element with a 5x5 cross, iterating two times
1998 ocv=filter_name=dilate:filter_params=5x5+2x2/cross|2
2000 # Read the shape from the file diamond.shape, iterating two times.
2001 # The file diamond.shape may contain a pattern of characters like this
2007 # The specified columns and rows are ignored
2008 # but the anchor point coordinates are not
2009 ocv=dilate:0x0+2x2/custom=diamond.shape|2
2014 Erode an image by using a specific structuring element.
2015 It corresponds to the libopencv function @code{cvErode}.
2017 It accepts the parameters: @var{struct_el}:@var{nb_iterations},
2018 with the same syntax and semantics as the @ref{dilate} filter.
2022 Smooth the input video.
2024 The filter takes the following parameters:
2025 @var{type}|@var{param1}|@var{param2}|@var{param3}|@var{param4}.
2027 @var{type} is the type of smooth filter to apply, and must be one of
2028 the following values: "blur", "blur_no_scale", "median", "gaussian",
2029 or "bilateral". The default value is "gaussian".
2031 The meaning of @var{param1}, @var{param2}, @var{param3}, and @var{param4}
2032 depend on the smooth type. @var{param1} and
2033 @var{param2} accept integer positive values or 0. @var{param3} and
2034 @var{param4} accept floating point values.
2036 The default value for @var{param1} is 3. The default value for the
2037 other parameters is 0.
2039 These parameters correspond to the parameters assigned to the
2040 libopencv function @code{cvSmooth}.
2045 Overlay one video on top of another.
2047 It takes two inputs and has one output. The first input is the "main"
2048 video on which the second input is overlaid.
2050 It accepts the following parameters:
2055 The horizontal position of the left edge of the overlaid video on the main video.
2058 The vertical position of the top edge of the overlaid video on the main video.
2062 The parameters are expressions containing the following parameters:
2065 @item main_w, main_h
2066 The main input width and height.
2069 These are the same as @var{main_w} and @var{main_h}.
2071 @item overlay_w, overlay_h
2072 The overlay input width and height.
2075 These are the same as @var{overlay_w} and @var{overlay_h}.
2078 The action to take when EOF is encountered on the secondary input; it accepts
2079 one of the following values:
2083 Repeat the last frame (the default).
2087 Pass the main input through.
2092 Be aware that frames are taken from each input video in timestamp
2093 order, hence, if their initial timestamps differ, it is a a good idea
2094 to pass the two inputs through a @var{setpts=PTS-STARTPTS} filter to
2095 have them begin in the same zero timestamp, as the example for
2096 the @var{movie} filter does.
2100 # Draw the overlay at 10 pixels from the bottom right
2101 # corner of the main video
2102 overlay=x=main_w-overlay_w-10:y=main_h-overlay_h-10
2104 # Insert a transparent PNG logo in the bottom left corner of the input
2105 avconv -i input -i logo -filter_complex 'overlay=x=10:y=main_h-overlay_h-10' output
2107 # Insert 2 different transparent PNG logos (second logo on bottom
2109 avconv -i input -i logo1 -i logo2 -filter_complex
2110 'overlay=x=10:y=H-h-10,overlay=x=W-w-10:y=H-h-10' output
2112 # Add a transparent color layer on top of the main video;
2113 # WxH specifies the size of the main input to the overlay filter
2114 color=red@.3:WxH [over]; [in][over] overlay [out]
2116 # Mask 10-20 seconds of a video by applying the delogo filter to a section
2117 avconv -i test.avi -codec:v:0 wmv2 -ar 11025 -b:v 9000k
2118 -vf '[in]split[split_main][split_delogo];[split_delogo]trim=start=360:end=371,delogo=0:0:640:480[delogoed];[split_main][delogoed]overlay=eof_action=pass[out]'
2122 You can chain together more overlays but the efficiency of such
2123 approach is yet to be tested.
2127 Add paddings to the input image, and place the original input at the
2128 provided @var{x}, @var{y} coordinates.
2130 It accepts the following parameters:
2135 Specify the size of the output image with the paddings added. If the
2136 value for @var{width} or @var{height} is 0, the corresponding input size
2137 is used for the output.
2139 The @var{width} expression can reference the value set by the
2140 @var{height} expression, and vice versa.
2142 The default value of @var{width} and @var{height} is 0.
2146 Specify the offsets to place the input image at within the padded area,
2147 with respect to the top/left border of the output image.
2149 The @var{x} expression can reference the value set by the @var{y}
2150 expression, and vice versa.
2152 The default value of @var{x} and @var{y} is 0.
2156 Specify the color of the padded area. It can be the name of a color
2157 (case insensitive match) or an 0xRRGGBB[AA] sequence.
2159 The default value of @var{color} is "black".
2163 The parameters @var{width}, @var{height}, @var{x}, and @var{y} are
2164 expressions containing the following constants:
2168 These are approximated values for the mathematical constants e
2169 (Euler's number), pi (Greek pi), and phi (the golden ratio).
2172 The input video width and height.
2175 These are the same as @var{in_w} and @var{in_h}.
2178 The output width and height (the size of the padded area), as
2179 specified by the @var{width} and @var{height} expressions.
2182 These are the same as @var{out_w} and @var{out_h}.
2185 The x and y offsets as specified by the @var{x} and @var{y}
2186 expressions, or NAN if not yet specified.
2189 The input display aspect ratio, same as @var{iw} / @var{ih}.
2192 The horizontal and vertical chroma subsample values. For example for the
2193 pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
2199 # Add paddings with the color "violet" to the input video. The output video
2200 # size is 640x480, and the top-left corner of the input video is placed at
2202 pad=width=640:height=480:x=0:y=40:color=violet
2204 # Pad the input to get an output with dimensions increased by 3/2,
2205 # and put the input video at the center of the padded area
2206 pad="3/2*iw:3/2*ih:(ow-iw)/2:(oh-ih)/2"
2208 # Pad the input to get a squared output with size equal to the maximum
2209 # value between the input width and height, and put the input video at
2210 # the center of the padded area
2211 pad="max(iw\,ih):ow:(ow-iw)/2:(oh-ih)/2"
2213 # Pad the input to get a final w/h ratio of 16:9
2214 pad="ih*16/9:ih:(ow-iw)/2:(oh-ih)/2"
2216 # Double the output size and put the input video in the bottom-right
2217 # corner of the output padded area
2218 pad="2*iw:2*ih:ow-iw:oh-ih"
2221 @section pixdesctest
2223 Pixel format descriptor test filter, mainly useful for internal
2224 testing. The output video should be equal to the input video.
2228 format=monow, pixdesctest
2231 can be used to test the monowhite pixel format descriptor definition.
2236 Scale the input video and/or convert the image format.
2238 It accepts the following parameters:
2243 The output video width.
2246 The output video height.
2250 The parameters @var{w} and @var{h} are expressions containing
2251 the following constants:
2255 These are approximated values for the mathematical constants e
2256 (Euler's number), pi (Greek pi), and phi (the golden ratio).
2259 The input width and height.
2262 These are the same as @var{in_w} and @var{in_h}.
2265 The output (cropped) width and height.
2268 These are the same as @var{out_w} and @var{out_h}.
2271 This is the same as @var{iw} / @var{ih}.
2274 input sample aspect ratio
2277 The input display aspect ratio; it is the same as
2278 (@var{iw} / @var{ih}) * @var{sar}.
2281 The horizontal and vertical chroma subsample values. For example, for the
2282 pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
2285 If the input image format is different from the format requested by
2286 the next filter, the scale filter will convert the input to the
2289 If the value for @var{w} or @var{h} is 0, the respective input
2290 size is used for the output.
2292 If the value for @var{w} or @var{h} is -1, the scale filter will use, for the
2293 respective output size, a value that maintains the aspect ratio of the input
2296 The default value of @var{w} and @var{h} is 0.
2300 # Scale the input video to a size of 200x100
2303 # Scale the input to 2x
2305 # The above is the same as
2308 # Scale the input to half the original size
2311 # Increase the width, and set the height to the same size
2314 # Seek Greek harmony
2318 # Increase the height, and set the width to 3/2 of the height
2319 scale=w=3/2*oh:h=3/5*ih
2321 # Increase the size, making the size a multiple of the chroma
2322 scale="trunc(3/2*iw/hsub)*hsub:trunc(3/2*ih/vsub)*vsub"
2324 # Increase the width to a maximum of 500 pixels,
2325 # keeping the same aspect ratio as the input
2326 scale=w='min(500\, iw*3/2):h=-1'
2331 Use the NVIDIA Performance Primitives (libnpp) to perform scaling and/or pixel
2332 format conversion on CUDA video frames. Setting the output width and height
2333 works in the same way as for the @var{scale} filter.
2335 The following additional options are accepted:
2338 The pixel format of the output CUDA frames. If set to the string "same" (the
2339 default), the input format will be kept. Note that automatic format negotiation
2340 and conversion is not yet supported for hardware frames
2343 The interpolation algorithm used for resizing. One of the following:
2350 @item cubic2p_bspline
2351 2-parameter cubic (B=1, C=0)
2353 @item cubic2p_catmullrom
2354 2-parameter cubic (B=0, C=1/2)
2356 @item cubic2p_b05c03
2357 2-parameter cubic (B=1/2, C=3/10)
2368 Select frames to pass in output.
2370 It accepts the following parameters:
2375 An expression, which is evaluated for each input frame. If the expression is
2376 evaluated to a non-zero value, the frame is selected and passed to the output,
2377 otherwise it is discarded.
2381 The expression can contain the following constants:
2385 These are approximated values for the mathematical constants e
2386 (Euler's number), pi (Greek pi), and phi (the golden ratio).
2389 The (sequential) number of the filtered frame, starting from 0.
2392 The (sequential) number of the selected frame, starting from 0.
2394 @item prev_selected_n
2395 The sequential number of the last selected frame. It's NAN if undefined.
2398 The timebase of the input timestamps.
2401 The PTS (Presentation TimeStamp) of the filtered video frame,
2402 expressed in @var{TB} units. It's NAN if undefined.
2405 The PTS of the filtered video frame,
2406 expressed in seconds. It's NAN if undefined.
2409 The PTS of the previously filtered video frame. It's NAN if undefined.
2411 @item prev_selected_pts
2412 The PTS of the last previously filtered video frame. It's NAN if undefined.
2414 @item prev_selected_t
2415 The PTS of the last previously selected video frame. It's NAN if undefined.
2418 The PTS of the first video frame in the video. It's NAN if undefined.
2421 The time of the first video frame in the video. It's NAN if undefined.
2424 The type of the filtered frame. It can assume one of the following
2436 @item interlace_type
2437 The frame interlace type. It can assume one of the following values:
2440 The frame is progressive (not interlaced).
2442 The frame is top-field-first.
2444 The frame is bottom-field-first.
2448 This is 1 if the filtered frame is a key-frame, 0 otherwise.
2452 The default value of the select expression is "1".
2457 # Select all the frames in input
2460 # The above is the same as
2466 # Select only I-frames
2467 select='expr=eq(pict_type\,I)'
2469 # Select one frame per 100
2470 select='not(mod(n\,100))'
2472 # Select only frames contained in the 10-20 time interval
2473 select='gte(t\,10)*lte(t\,20)'
2475 # Select only I-frames contained in the 10-20 time interval
2476 select='gte(t\,10)*lte(t\,20)*eq(pict_type\,I)'
2478 # Select frames with a minimum distance of 10 seconds
2479 select='isnan(prev_selected_t)+gte(t-prev_selected_t\,10)'
2485 Set the Display Aspect Ratio for the filter output video.
2487 This is done by changing the specified Sample (aka Pixel) Aspect
2488 Ratio, according to the following equation:
2489 @math{DAR = HORIZONTAL_RESOLUTION / VERTICAL_RESOLUTION * SAR}
2491 Keep in mind that this filter does not modify the pixel dimensions of
2492 the video frame. Also, the display aspect ratio set by this filter may
2493 be changed by later filters in the filterchain, e.g. in case of
2494 scaling or if another "setdar" or a "setsar" filter is applied.
2496 It accepts the following parameters:
2501 The output display aspect ratio.
2505 The parameter @var{dar} is an expression containing
2506 the following constants:
2510 These are approximated values for the mathematical constants e
2511 (Euler's number), pi (Greek pi), and phi (the golden ratio).
2514 The input width and height.
2517 This is the same as @var{w} / @var{h}.
2520 The input sample aspect ratio.
2523 The input display aspect ratio. It is the same as
2524 (@var{w} / @var{h}) * @var{sar}.
2527 The horizontal and vertical chroma subsample values. For example, for the
2528 pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
2531 To change the display aspect ratio to 16:9, specify:
2534 # The above is equivalent to
2538 Also see the the @ref{setsar} filter documentation.
2542 Change the PTS (presentation timestamp) of the input video frames.
2544 It accepts the following parameters:
2549 The expression which is evaluated for each frame to construct its timestamp.
2553 The expression is evaluated through the eval API and can contain the following
2558 The presentation timestamp in input.
2561 These are approximated values for the mathematical constants e
2562 (Euler's number), pi (Greek pi), and phi (the golden ratio).
2565 The count of the input frame, starting from 0.
2568 The PTS of the first video frame.
2571 State whether the current frame is interlaced.
2574 The previous input PTS.
2577 The previous output PTS.
2580 The wallclock (RTC) time in microseconds.
2583 The wallclock (RTC) time at the start of the movie in microseconds.
2586 The timebase of the input timestamps.
2593 # Start counting the PTS from zero
2594 setpts=expr=PTS-STARTPTS
2605 # Fixed rate 25 fps with some jitter
2606 setpts='1/(25*TB) * (N + 0.05 * sin(N*2*PI/25))'
2608 # Generate timestamps from a "live source" and rebase onto the current timebase
2609 setpts='(RTCTIME - RTCSTART) / (TB * 1000000)"
2615 Set the Sample (aka Pixel) Aspect Ratio for the filter output video.
2617 Note that as a consequence of the application of this filter, the
2618 output display aspect ratio will change according to the following
2620 @math{DAR = HORIZONTAL_RESOLUTION / VERTICAL_RESOLUTION * SAR}
2622 Keep in mind that the sample aspect ratio set by this filter may be
2623 changed by later filters in the filterchain, e.g. if another "setsar"
2624 or a "setdar" filter is applied.
2626 It accepts the following parameters:
2631 The output sample aspect ratio.
2635 The parameter @var{sar} is an expression containing
2636 the following constants:
2640 These are approximated values for the mathematical constants e
2641 (Euler's number), pi (Greek pi), and phi (the golden ratio).
2644 The input width and height.
2647 These are the same as @var{w} / @var{h}.
2650 The input sample aspect ratio.
2653 The input display aspect ratio. It is the same as
2654 (@var{w} / @var{h}) * @var{sar}.
2657 Horizontal and vertical chroma subsample values. For example, for the
2658 pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
2661 To change the sample aspect ratio to 10:11, specify:
2668 Set the timebase to use for the output frames timestamps.
2669 It is mainly useful for testing timebase configuration.
2671 It accepts the following parameters:
2676 The expression which is evaluated into the output timebase.
2680 The expression can contain the constants "PI", "E", "PHI", "AVTB" (the
2681 default timebase), and "intb" (the input timebase).
2683 The default value for the input is "intb".
2688 # Set the timebase to 1/25
2691 # Set the timebase to 1/10
2694 # Set the timebase to 1001/1000
2697 #Set the timebase to 2*intb
2700 #Set the default timebase value
2706 Show a line containing various information for each input video frame.
2707 The input video is not modified.
2709 The shown line contains a sequence of key/value pairs of the form
2710 @var{key}:@var{value}.
2712 It accepts the following parameters:
2716 The (sequential) number of the input frame, starting from 0.
2719 The Presentation TimeStamp of the input frame, expressed as a number of
2720 time base units. The time base unit depends on the filter input pad.
2723 The Presentation TimeStamp of the input frame, expressed as a number of
2727 The position of the frame in the input stream, or -1 if this information is
2728 unavailable and/or meaningless (for example in case of synthetic video).
2731 The pixel format name.
2734 The sample aspect ratio of the input frame, expressed in the form
2735 @var{num}/@var{den}.
2738 The size of the input frame, expressed in the form
2739 @var{width}x@var{height}.
2742 The type of interlaced mode ("P" for "progressive", "T" for top field first, "B"
2743 for bottom field first).
2746 This is 1 if the frame is a key frame, 0 otherwise.
2749 The picture type of the input frame ("I" for an I-frame, "P" for a
2750 P-frame, "B" for a B-frame, or "?" for an unknown type).
2751 Also refer to the documentation of the @code{AVPictureType} enum and of
2752 the @code{av_get_picture_type_char} function defined in
2753 @file{libavutil/avutil.h}.
2756 The Adler-32 checksum of all the planes of the input frame.
2758 @item plane_checksum
2759 The Adler-32 checksum of each plane of the input frame, expressed in the form
2760 "[@var{c0} @var{c1} @var{c2} @var{c3}]".
2763 @section shuffleplanes
2765 Reorder and/or duplicate video planes.
2767 It accepts the following parameters:
2772 The index of the input plane to be used as the first output plane.
2775 The index of the input plane to be used as the second output plane.
2778 The index of the input plane to be used as the third output plane.
2781 The index of the input plane to be used as the fourth output plane.
2785 The first plane has the index 0. The default is to keep the input unchanged.
2787 Swap the second and third planes of the input:
2789 avconv -i INPUT -vf shuffleplanes=0:2:1:3 OUTPUT
2794 Split input video into several identical outputs.
2796 It accepts a single parameter, which specifies the number of outputs. If
2797 unspecified, it defaults to 2.
2799 Create 5 copies of the input video:
2801 avconv -i INPUT -filter_complex split=5 OUTPUT
2806 Transpose rows with columns in the input video and optionally flip it.
2808 It accepts the following parameters:
2813 The direction of the transpose.
2817 The direction can assume the following values:
2821 Rotate by 90 degrees counterclockwise and vertically flip (default), that is:
2829 Rotate by 90 degrees clockwise, that is:
2837 Rotate by 90 degrees counterclockwise, that is:
2845 Rotate by 90 degrees clockwise and vertically flip, that is:
2854 Trim the input so that the output contains one continuous subpart of the input.
2856 It accepts the following parameters:
2859 The timestamp (in seconds) of the start of the kept section. The frame with the
2860 timestamp @var{start} will be the first frame in the output.
2863 The timestamp (in seconds) of the first frame that will be dropped. The frame
2864 immediately preceding the one with the timestamp @var{end} will be the last
2865 frame in the output.
2868 This is the same as @var{start}, except this option sets the start timestamp
2869 in timebase units instead of seconds.
2872 This is the same as @var{end}, except this option sets the end timestamp
2873 in timebase units instead of seconds.
2876 The maximum duration of the output in seconds.
2879 The number of the first frame that should be passed to the output.
2882 The number of the first frame that should be dropped.
2885 Note that the first two sets of the start/end options and the @option{duration}
2886 option look at the frame timestamp, while the _frame variants simply count the
2887 frames that pass through the filter. Also note that this filter does not modify
2888 the timestamps. If you wish for the output timestamps to start at zero, insert a
2889 setpts filter after the trim filter.
2891 If multiple start or end options are set, this filter tries to be greedy and
2892 keep all the frames that match at least one of the specified constraints. To keep
2893 only the part that matches all the constraints at once, chain multiple trim
2896 The defaults are such that all the input is kept. So it is possible to set e.g.
2897 just the end values to keep everything before the specified time.
2902 Drop everything except the second minute of input:
2904 avconv -i INPUT -vf trim=60:120
2908 Keep only the first second:
2910 avconv -i INPUT -vf trim=duration=1
2916 Sharpen or blur the input video.
2918 It accepts the following parameters:
2923 Set the luma matrix horizontal size. It must be an integer between 3
2924 and 13. The default value is 5.
2927 Set the luma matrix vertical size. It must be an integer between 3
2928 and 13. The default value is 5.
2931 Set the luma effect strength. It must be a floating point number between -2.0
2932 and 5.0. The default value is 1.0.
2934 @item chroma_msize_x
2935 Set the chroma matrix horizontal size. It must be an integer between 3
2936 and 13. The default value is 5.
2938 @item chroma_msize_y
2939 Set the chroma matrix vertical size. It must be an integer between 3
2940 and 13. The default value is 5.
2943 Set the chroma effect strength. It must be a floating point number between -2.0
2944 and 5.0. The default value is 0.0.
2948 Negative values for the amount will blur the input video, while positive
2949 values will sharpen. All parameters are optional and default to the
2950 equivalent of the string '5:5:1.0:5:5:0.0'.
2953 # Strong luma sharpen effect parameters
2954 unsharp=luma_msize_x=7:luma_msize_y=7:luma_amount=2.5
2956 # A strong blur of both luma and chroma parameters
2957 unsharp=7:7:-2:7:7:-2
2959 # Use the default values with @command{avconv}
2960 ./avconv -i in.avi -vf "unsharp" out.mp4
2965 Flip the input video vertically.
2968 ./avconv -i in.avi -vf "vflip" out.avi
2973 Deinterlace the input video ("yadif" means "yet another deinterlacing
2976 It accepts the following parameters:
2981 The interlacing mode to adopt. It accepts one of the following values:
2985 Output one frame for each frame.
2987 Output one frame for each field.
2989 Like 0, but it skips the spatial interlacing check.
2991 Like 1, but it skips the spatial interlacing check.
2994 The default value is 0.
2997 The picture field parity assumed for the input interlaced video. It accepts one
2998 of the following values:
3002 Assume the top field is first.
3004 Assume the bottom field is first.
3006 Enable automatic detection of field parity.
3009 The default value is -1.
3010 If the interlacing is unknown or the decoder does not export this information,
3011 top field first will be assumed.
3014 Whether the deinterlacer should trust the interlaced flag and only deinterlace
3015 frames marked as interlaced.
3019 Deinterlace all frames.
3021 Only deinterlace frames marked as interlaced.
3024 The default value is 0.
3028 @c man end VIDEO FILTERS
3030 @chapter Video Sources
3031 @c man begin VIDEO SOURCES
3033 Below is a description of the currently available video sources.
3037 Buffer video frames, and make them available to the filter chain.
3039 This source is mainly intended for a programmatic use, in particular
3040 through the interface defined in @file{libavfilter/vsrc_buffer.h}.
3042 It accepts the following parameters:
3047 The input video width.
3050 The input video height.
3053 The name of the input video pixel format.
3056 The time base used for input timestamps.
3059 The sample (pixel) aspect ratio of the input video.
3062 When using a hardware pixel format, this should be a reference to an
3063 AVHWFramesContext describing input frames.
3069 buffer=width=320:height=240:pix_fmt=yuv410p:time_base=1/24:sar=1
3072 will instruct the source to accept video frames with size 320x240 and
3073 with format "yuv410p", assuming 1/24 as the timestamps timebase and
3074 square pixels (1:1 sample aspect ratio).
3078 Provide an uniformly colored input.
3080 It accepts the following parameters:
3085 Specify the color of the source. It can be the name of a color (case
3086 insensitive match) or a 0xRRGGBB[AA] sequence, possibly followed by an
3087 alpha specifier. The default value is "black".
3090 Specify the size of the sourced video, it may be a string of the form
3091 @var{width}x@var{height}, or the name of a size abbreviation. The
3092 default value is "320x240".
3095 Specify the frame rate of the sourced video, as the number of frames
3096 generated per second. It has to be a string in the format
3097 @var{frame_rate_num}/@var{frame_rate_den}, an integer number, a floating point
3098 number or a valid video frame rate abbreviation. The default value is
3103 The following graph description will generate a red source
3104 with an opacity of 0.2, with size "qcif" and a frame rate of 10
3105 frames per second, which will be overlaid over the source connected
3106 to the pad with identifier "in":
3109 "color=red@@0.2:qcif:10 [color]; [in][color] overlay [out]"
3114 Read a video stream from a movie container.
3116 Note that this source is a hack that bypasses the standard input path. It can be
3117 useful in applications that do not support arbitrary filter graphs, but its use
3118 is discouraged in those that do. It should never be used with
3119 @command{avconv}; the @option{-filter_complex} option fully replaces it.
3121 It accepts the following parameters:
3126 The name of the resource to read (not necessarily a file; it can also be a
3127 device or a stream accessed through some protocol).
3129 @item format_name, f
3130 Specifies the format assumed for the movie to read, and can be either
3131 the name of a container or an input device. If not specified, the
3132 format is guessed from @var{movie_name} or by probing.
3134 @item seek_point, sp
3135 Specifies the seek point in seconds. The frames will be output
3136 starting from this seek point. The parameter is evaluated with
3137 @code{av_strtod}, so the numerical value may be suffixed by an IS
3138 postfix. The default value is "0".
3140 @item stream_index, si
3141 Specifies the index of the video stream to read. If the value is -1,
3142 the most suitable video stream will be automatically selected. The default
3147 It allows overlaying a second video on top of the main input of
3148 a filtergraph, as shown in this graph:
3150 input -----------> deltapts0 --> overlay --> output
3153 movie --> scale--> deltapts1 -------+
3158 # Skip 3.2 seconds from the start of the AVI file in.avi, and overlay it
3159 # on top of the input labelled "in"
3160 movie=in.avi:seek_point=3.2, scale=180:-1, setpts=PTS-STARTPTS [movie];
3161 [in] setpts=PTS-STARTPTS, [movie] overlay=16:16 [out]
3163 # Read from a video4linux2 device, and overlay it on top of the input
3165 movie=/dev/video0:f=video4linux2, scale=180:-1, setpts=PTS-STARTPTS [movie];
3166 [in] setpts=PTS-STARTPTS, [movie] overlay=16:16 [out]
3172 Null video source: never return images. It is mainly useful as a
3173 template and to be employed in analysis / debugging tools.
3175 It accepts a string of the form
3176 @var{width}:@var{height}:@var{timebase} as an optional parameter.
3178 @var{width} and @var{height} specify the size of the configured
3179 source. The default values of @var{width} and @var{height} are
3180 respectively 352 and 288 (corresponding to the CIF size format).
3182 @var{timebase} specifies an arithmetic expression representing a
3183 timebase. The expression can contain the constants "PI", "E", "PHI", and
3184 "AVTB" (the default timebase), and defaults to the value "AVTB".
3188 Provide a frei0r source.
3190 To enable compilation of this filter you need to install the frei0r
3191 header and configure Libav with --enable-frei0r.
3193 This source accepts the following parameters:
3198 The size of the video to generate. It may be a string of the form
3199 @var{width}x@var{height} or a frame size abbreviation.
3202 The framerate of the generated video. It may be a string of the form
3203 @var{num}/@var{den} or a frame rate abbreviation.
3206 The name to the frei0r source to load. For more information regarding frei0r and
3207 how to set the parameters, read the @ref{frei0r} section in the video filters
3211 A '|'-separated list of parameters to pass to the frei0r source.
3217 # Generate a frei0r partik0l source with size 200x200 and framerate 10
3218 # which is overlaid on the overlay filter's main input
3219 frei0r_src=size=200x200:framerate=10:filter_name=partik0l:filter_params=1234 [overlay]; [in][overlay] overlay
3222 @section rgbtestsrc, testsrc
3224 The @code{rgbtestsrc} source generates an RGB test pattern useful for
3225 detecting RGB vs BGR issues. You should see a red, green and blue
3226 stripe from top to bottom.
3228 The @code{testsrc} source generates a test video pattern, showing a
3229 color pattern, a scrolling gradient and a timestamp. This is mainly
3230 intended for testing purposes.
3232 The sources accept the following parameters:
3237 Specify the size of the sourced video, it may be a string of the form
3238 @var{width}x@var{height}, or the name of a size abbreviation. The
3239 default value is "320x240".
3242 Specify the frame rate of the sourced video, as the number of frames
3243 generated per second. It has to be a string in the format
3244 @var{frame_rate_num}/@var{frame_rate_den}, an integer number, a floating point
3245 number or a valid video frame rate abbreviation. The default value is
3249 Set the sample aspect ratio of the sourced video.
3252 Set the video duration of the sourced video. The accepted syntax is:
3254 [-]HH[:MM[:SS[.m...]]]
3257 Also see the the @code{av_parse_time()} function.
3259 If not specified, or the expressed duration is negative, the video is
3260 supposed to be generated forever.
3263 For example the following:
3265 testsrc=duration=5.3:size=qcif:rate=10
3268 will generate a video with a duration of 5.3 seconds, with size
3269 176x144 and a framerate of 10 frames per second.
3271 @c man end VIDEO SOURCES
3273 @chapter Video Sinks
3274 @c man begin VIDEO SINKS
3276 Below is a description of the currently available video sinks.
3280 Buffer video frames, and make them available to the end of the filter
3283 This sink is intended for programmatic use through the interface defined in
3284 @file{libavfilter/buffersink.h}.
3288 Null video sink: do absolutely nothing with the input video. It is
3289 mainly useful as a template and for use in analysis / debugging
3292 @c man end VIDEO SINKS