X-Git-Url: https://git.sesse.net/?a=blobdiff_plain;f=doc%2Ffilters.texi;h=e77256e0056f896ae25e1d86eb87109b044fcaf1;hb=6af2480aa62e96fbfa4f2f99b80280ce77dafafd;hp=e217f4e9061a5c85ce0367bc04056c1719c816cf;hpb=cf69ad35c5a7aab9cf056f957bfea484e849527c;p=ffmpeg diff --git a/doc/filters.texi b/doc/filters.texi index e217f4e9061..e77256e0056 100644 --- a/doc/filters.texi +++ b/doc/filters.texi @@ -14,12 +14,14 @@ number of input and output pads of the filter. A filter with no input pads is called a "source", a filter with no output pads is called a "sink". +@anchor{Filtergraph syntax} @section Filtergraph syntax -A filtergraph can be represented using a textual representation, which -is recognized by the @code{-vf} and @code{-af} options of the ff* -tools, and by the @code{av_parse_graph()} function defined in -@file{libavfilter/avfiltergraph}. +A filtergraph can be represented using a textual representation, which is +recognized by the @option{-filter}/@option{-vf} and @option{-filter_complex} +options in @command{avconv} and @option{-vf} in @command{avplay}, and by the +@code{avfilter_graph_parse()}/@code{avfilter_graph_parse2()} function defined in +@file{libavfilter/avfiltergraph.h}. A filterchain consists of a sequence of connected filters, each one connected to the previous one in the sequence. A filterchain is @@ -76,6 +78,12 @@ In a complete filterchain all the unlabelled filter input and output pads must be connected. A filtergraph is considered valid if all the filter input and output pads of all the filterchains are connected. +Libavfilter will automatically insert scale filters where format +conversion is required. It is possible to specify swscale flags +for those automatically inserted scalers by prepending +@code{sws_flags=@var{flags};} +to the filtergraph description. + Follows a BNF description for the filtergraph syntax: @example @var{NAME} ::= sequence of alphanumeric characters and '_' @@ -84,7 +92,7 @@ Follows a BNF description for the filtergraph syntax: @var{FILTER_ARGUMENTS} ::= sequence of chars (eventually quoted) @var{FILTER} ::= [@var{LINKNAMES}] @var{NAME} ["=" @var{ARGUMENTS}] [@var{LINKNAMES}] @var{FILTERCHAIN} ::= @var{FILTER} [,@var{FILTERCHAIN}] -@var{FILTERGRAPH} ::= @var{FILTERCHAIN} [;@var{FILTERGRAPH}] +@var{FILTERGRAPH} ::= [sws_flags=@var{flags};] @var{FILTERCHAIN} [;@var{FILTERGRAPH}] @end example @c man end FILTERGRAPH DESCRIPTION @@ -92,17 +100,221 @@ Follows a BNF description for the filtergraph syntax: @chapter Audio Filters @c man begin AUDIO FILTERS -When you configure your FFmpeg build, you can disable any of the +When you configure your Libav build, you can disable any of the existing filters using --disable-filters. The configure output will show the audio filters included in your build. Below is a description of the currently available audio filters. +@section aformat + +Convert the input audio to one of the specified formats. The framework will +negotiate the most appropriate format to minimize conversions. + +The filter accepts the following named parameters: +@table @option + +@item sample_fmts +A comma-separated list of requested sample formats. + +@item sample_rates +A comma-separated list of requested sample rates. + +@item channel_layouts +A comma-separated list of requested channel layouts. + +@end table + +If a parameter is omitted, all values are allowed. 
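+
+A single parameter can also be restricted on its own. The following
+sketch (with purely illustrative rate values) constrains nothing but the
+sample rate; note the backslash escaping of @code{=} and @code{,}:
+@example
+aformat=sample_rates\=44100\,48000
+@end example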
+ +For example to force the output to either unsigned 8-bit or signed 16-bit stereo: +@example +aformat=sample_fmts\=u8\,s16:channel_layouts\=stereo +@end example + +@section amix + +Mixes multiple audio inputs into a single output. + +For example +@example +avconv -i INPUT1 -i INPUT2 -i INPUT3 -filter_complex amix=inputs=3:duration=first:dropout_transition=3 OUTPUT +@end example +will mix 3 input audio streams to a single output with the same duration as the +first input and a dropout transition time of 3 seconds. + +The filter accepts the following named parameters: +@table @option + +@item inputs +Number of inputs. If unspecified, it defaults to 2. + +@item duration +How to determine the end-of-stream. +@table @option + +@item longest +Duration of longest input. (default) + +@item shortest +Duration of shortest input. + +@item first +Duration of first input. + +@end table + +@item dropout_transition +Transition time, in seconds, for volume renormalization when an input +stream ends. The default value is 2 seconds. + +@end table + @section anull Pass the audio source unchanged to the output. +@section asplit + +Split input audio into several identical outputs. + +The filter accepts a single parameter which specifies the number of outputs. If +unspecified, it defaults to 2. + +For example +@example +avconv -i INPUT -filter_complex asplit=5 OUTPUT +@end example +will create 5 copies of the input audio. + +@section asyncts +Synchronize audio data with timestamps by squeezing/stretching it and/or +dropping samples/adding silence when needed. + +The filter accepts the following named parameters: +@table @option + +@item compensate +Enable stretching/squeezing the data to make it match the timestamps. + +@item min_delta +Minimum difference between timestamps and audio data (in seconds) to trigger +adding/dropping samples. + +@item max_comp +Maximum compensation in samples per second. + +@item first_pts +Assume the first pts should be this value. +This allows for padding/trimming at the start of stream. By default, no +assumption is made about the first frame's expected pts, so no padding or +trimming is done. For example, this could be set to 0 to pad the beginning with +silence if an audio stream starts after the video stream. + +@end table + +@section channelsplit +Split each channel in input audio stream into a separate output stream. + +This filter accepts the following named parameters: +@table @option +@item channel_layout +Channel layout of the input stream. Default is "stereo". +@end table + +For example, assuming a stereo input MP3 file +@example +avconv -i in.mp3 -filter_complex channelsplit out.mkv +@end example +will create an output Matroska file with two audio streams, one containing only +the left channel and the other the right channel. + +To split a 5.1 WAV file into per-channel files +@example +avconv -i in.wav -filter_complex +'channelsplit=channel_layout=5.1[FL][FR][FC][LFE][SL][SR]' +-map '[FL]' front_left.wav -map '[FR]' front_right.wav -map '[FC]' +front_center.wav -map '[LFE]' lfe.wav -map '[SL]' side_left.wav -map '[SR]' +side_right.wav +@end example + +@section channelmap +Remap input channels to new locations. + +This filter accepts the following named parameters: +@table @option +@item channel_layout +Channel layout of the output stream. + +@item map +Map channels from input to output. The argument is a comma-separated list of +mappings, each in the @code{@var{in_channel}-@var{out_channel}} or +@var{in_channel} form. 
@var{in_channel} can be either the name of the input +channel (e.g. FL for front left) or its index in the input channel layout. +@var{out_channel} is the name of the output channel or its index in the output +channel layout. If @var{out_channel} is not given then it is implicitly an +index, starting with zero and increasing by one for each mapping. +@end table + +If no mapping is present, the filter will implicitly map input channels to +output channels preserving index. + +For example, assuming a 5.1+downmix input MOV file +@example +avconv -i in.mov -filter 'channelmap=map=DL-FL\,DR-FR' out.wav +@end example +will create an output WAV file tagged as stereo from the downmix channels of +the input. + +To fix a 5.1 WAV improperly encoded in AAC's native channel order +@example +avconv -i in.wav -filter 'channelmap=1\,2\,0\,5\,3\,4:channel_layout=5.1' out.wav +@end example + +@section join +Join multiple input streams into one multi-channel stream. + +The filter accepts the following named parameters: +@table @option + +@item inputs +Number of input streams. Defaults to 2. + +@item channel_layout +Desired output channel layout. Defaults to stereo. + +@item map +Map channels from inputs to output. The argument is a comma-separated list of +mappings, each in the @code{@var{input_idx}.@var{in_channel}-@var{out_channel}} +form. @var{input_idx} is the 0-based index of the input stream. @var{in_channel} +can be either the name of the input channel (e.g. FL for front left) or its +index in the specified input stream. @var{out_channel} is the name of the output +channel. +@end table + +The filter will attempt to guess the mappings when those are not specified +explicitly. It does so by first trying to find an unused matching input channel +and if that fails it picks the first unused input channel. + +E.g. to join 3 inputs (with properly set channel layouts) +@example +avconv -i INPUT1 -i INPUT2 -i INPUT3 -filter_complex join=inputs=3 OUTPUT +@end example + +To build a 5.1 output from 6 single-channel streams: +@example +avconv -i fl -i fr -i fc -i sl -i sr -i lfe -filter_complex +'join=inputs=6:channel_layout=5.1:map=0.0-FL\,1.0-FR\,2.0-FC\,3.0-SL\,4.0-SR\,5.0-LFE' +out +@end example + +@section resample +Convert the audio sample format, sample rate and channel layout. This filter is +not meant to be used directly, it is inserted automatically by libavfilter +whenever conversion is needed. Use the @var{aformat} filter to force a specific +conversion. + @c man end AUDIO FILTERS @chapter Audio Sources @@ -137,6 +349,33 @@ anullsrc=48000:4 anullsrc=48000:mono @end example +@section abuffer +Buffer audio frames, and make them available to the filter chain. + +This source is not intended to be part of user-supplied graph descriptions but +for insertion by calling programs through the interface defined in +@file{libavfilter/buffersrc.h}. + +It accepts the following named parameters: +@table @option + +@item time_base +Timebase which will be used for timestamps of submitted frames. It must be +either a floating-point number or in @var{numerator}/@var{denominator} form. + +@item sample_rate +Audio sample rate. + +@item sample_fmt +Name of the sample format, as returned by @code{av_get_sample_fmt_name()}. + +@item channel_layout +Channel layout of the audio data, in the form that can be accepted by +@code{av_get_channel_layout()}. +@end table + +All the parameters need to be explicitly defined. 
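+
+For illustration only, the parameter string built by a calling program
+could look like the following sketch; the values are hypothetical and
+have to match the audio that will actually be submitted:
+@example
+abuffer=time_base=1/44100:sample_rate=44100:sample_fmt=s16:channel_layout=stereo
+@end example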
+ @c man end AUDIO SOURCES @chapter Audio Sinks @@ -150,12 +389,19 @@ Null audio sink, do absolutely nothing with the input audio. It is mainly useful as a template and to be employed in analysis / debugging tools. +@section abuffersink +This sink is intended for programmatic use. Frames that arrive on this sink can +be retrieved by the calling program using the interface defined in +@file{libavfilter/buffersink.h}. + +This filter accepts no parameters. + @c man end AUDIO SINKS @chapter Video Filters @c man begin VIDEO FILTERS -When you configure your FFmpeg build, you can disable any of the +When you configure your Libav build, you can disable any of the existing filters using --disable-filters. The configure output will show the video filters included in your build. @@ -183,6 +429,71 @@ threshold, and defaults to 98. @var{threshold} is the threshold below which a pixel value is considered black, and defaults to 32. +@section boxblur + +Apply boxblur algorithm to the input video. + +This filter accepts the parameters: +@var{luma_power}:@var{luma_radius}:@var{chroma_radius}:@var{chroma_power}:@var{alpha_radius}:@var{alpha_power} + +Chroma and alpha parameters are optional, if not specified they default +to the corresponding values set for @var{luma_radius} and +@var{luma_power}. + +@var{luma_radius}, @var{chroma_radius}, and @var{alpha_radius} represent +the radius in pixels of the box used for blurring the corresponding +input plane. They are expressions, and can contain the following +constants: +@table @option +@item w, h +the input width and height in pixels + +@item cw, ch +the input chroma image width and height in pixels + +@item hsub, vsub +horizontal and vertical chroma subsample values. For example for the +pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1. +@end table + +The radius must be a non-negative number, and must not be greater than +the value of the expression @code{min(w,h)/2} for the luma and alpha planes, +and of @code{min(cw,ch)/2} for the chroma planes. + +@var{luma_power}, @var{chroma_power}, and @var{alpha_power} represent +how many times the boxblur filter is applied to the corresponding +plane. + +Some examples follow: + +@itemize + +@item +Apply a boxblur filter with luma, chroma, and alpha radius +set to 2: +@example +boxblur=2:1 +@end example + +@item +Set luma radius to 2, alpha and chroma radius to 0 +@example +boxblur=2:1:0:0:0:0 +@end example + +@item +Set luma and chroma radius to a fraction of the video dimension +@example +boxblur=min(h\,w)/10:1:min(cw\,ch)/10:1 +@end example + +@end itemize + +@section copy + +Copy the input source unchanged to the output. Mainly useful for +testing purposes. + @section crop Crop the input video to @var{out_w}:@var{out_h}:@var{x}:@var{y}. @@ -199,13 +510,13 @@ the computed values for @var{x} and @var{y}. They are evaluated for each new frame. @item in_w, in_h -the input width and heigth +the input width and height @item iw, ih same as @var{in_w} and @var{in_h} @item out_w, out_h -the output (cropped) width and heigth +the output (cropped) width and height @item ow, oh same as @var{out_w} and @var{out_h} @@ -261,7 +572,7 @@ crop=in_h # corner of the input image. 
crop=in_w-100:in_h-100:100:100 -# crop 10 pixels from the lefth and right borders, and 20 pixels from +# crop 10 pixels from the left and right borders, and 20 pixels from # the top and bottom borders "crop=in_w-2*10:in_h-2*20" @@ -274,7 +585,7 @@ crop=in_w-100:in_h-100:100:100 # trembling effect "crop=in_w/2:in_h/2:(in_w-out_w)/2+((in_w-out_w)/2)*sin(n/10):(in_h-out_h)/2 +((in_h-out_h)/2)*sin(n/7)" -# erratic camera effect depending on timestamp and position +# erratic camera effect depending on timestamp "crop=in_w/2:in_h/2:(in_w-out_w)/2+((in_w-out_w)/2)*sin(t*10):(in_h-out_h)/2 +((in_h-out_h)/2)*sin(t*13)" # set x depending on the value of y @@ -316,6 +627,58 @@ indicates never reset and return the largest area encountered during playback. @end table +@section delogo + +Suppress a TV station logo by a simple interpolation of the surrounding +pixels. Just set a rectangle covering the logo and watch it disappear +(and sometimes something even uglier appear - your mileage may vary). + +The filter accepts parameters as a string of the form +"@var{x}:@var{y}:@var{w}:@var{h}:@var{band}", or as a list of +@var{key}=@var{value} pairs, separated by ":". + +The description of the accepted parameters follows. + +@table @option + +@item x, y +Specify the top left corner coordinates of the logo. They must be +specified. + +@item w, h +Specify the width and height of the logo to clear. They must be +specified. + +@item band, t +Specify the thickness of the fuzzy edge of the rectangle (added to +@var{w} and @var{h}). The default value is 4. + +@item show +When set to 1, a green rectangle is drawn on the screen to simplify +finding the right @var{x}, @var{y}, @var{w}, @var{h} parameters, and +@var{band} is set to 4. The default value is 0. + +@end table + +Some examples follow. + +@itemize + +@item +Set a rectangle covering the area with top left corner coordinates 0,0 +and size 100x77, setting a band of size 10: +@example +delogo=0:0:100:77:10 +@end example + +@item +As the previous example, but use named options: +@example +delogo=x=0:y=0:w=100:h=77:band=10 +@end example + +@end itemize + @section drawbox Draw a colored box on the input image. @@ -348,6 +711,235 @@ drawbox drawbox=10:20:200:60:red@@0.5" @end example +@section drawtext + +Draw text string or text from specified file on top of video using the +libfreetype library. + +To enable compilation of this filter you need to configure Libav with +@code{--enable-libfreetype}. + +The filter also recognizes strftime() sequences in the provided text +and expands them accordingly. Check the documentation of strftime(). + +The filter accepts parameters as a list of @var{key}=@var{value} pairs, +separated by ":". + +The description of the accepted parameters follows. + +@table @option + +@item fontfile +The font file to be used for drawing text. Path must be included. +This parameter is mandatory. + +@item text +The text string to be drawn. The text must be a sequence of UTF-8 +encoded characters. +This parameter is mandatory if no file is specified with the parameter +@var{textfile}. + +@item textfile +A text file containing text to be drawn. The text must be a sequence +of UTF-8 encoded characters. + +This parameter is mandatory if no text string is specified with the +parameter @var{text}. + +If both text and textfile are specified, an error is thrown. + +@item x, y +The offsets where text will be drawn within the video frame. +Relative to the top/left border of the output image. 
+They accept expressions similar to the @ref{overlay} filter: +@table @option + +@item x, y +the computed values for @var{x} and @var{y}. They are evaluated for +each new frame. + +@item main_w, main_h +main input width and height + +@item W, H +same as @var{main_w} and @var{main_h} + +@item text_w, text_h +rendered text width and height + +@item w, h +same as @var{text_w} and @var{text_h} + +@item n +the number of frames processed, starting from 0 + +@item t +timestamp expressed in seconds, NAN if the input timestamp is unknown + +@end table + +The default value of @var{x} and @var{y} is 0. + +@item fontsize +The font size to be used for drawing text. +The default value of @var{fontsize} is 16. + +@item fontcolor +The color to be used for drawing fonts. +Either a string (e.g. "red") or in 0xRRGGBB[AA] format +(e.g. "0xff000033"), possibly followed by an alpha specifier. +The default value of @var{fontcolor} is "black". + +@item boxcolor +The color to be used for drawing box around text. +Either a string (e.g. "yellow") or in 0xRRGGBB[AA] format +(e.g. "0xff00ff"), possibly followed by an alpha specifier. +The default value of @var{boxcolor} is "white". + +@item box +Used to draw a box around text using background color. +Value should be either 1 (enable) or 0 (disable). +The default value of @var{box} is 0. + +@item shadowx, shadowy +The x and y offsets for the text shadow position with respect to the +position of the text. They can be either positive or negative +values. Default value for both is "0". + +@item shadowcolor +The color to be used for drawing a shadow behind the drawn text. It +can be a color name (e.g. "yellow") or a string in the 0xRRGGBB[AA] +form (e.g. "0xff00ff"), possibly followed by an alpha specifier. +The default value of @var{shadowcolor} is "black". + +@item ft_load_flags +Flags to be used for loading the fonts. + +The flags map the corresponding flags supported by libfreetype, and are +a combination of the following values: +@table @var +@item default +@item no_scale +@item no_hinting +@item render +@item no_bitmap +@item vertical_layout +@item force_autohint +@item crop_bitmap +@item pedantic +@item ignore_global_advance_width +@item no_recurse +@item ignore_transform +@item monochrome +@item linear_design +@item no_autohint +@item end table +@end table + +Default value is "render". + +For more information consult the documentation for the FT_LOAD_* +libfreetype flags. + +@item tabsize +The size in number of spaces to use for rendering the tab. +Default value is 4. + +@item fix_bounds +If true, check and fix text coords to avoid clipping. +@end table + +For example the command: +@example +drawtext="fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf: text='Test Text'" +@end example + +will draw "Test Text" with font FreeSerif, using the default values +for the optional parameters. + +The command: +@example +drawtext="fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf: text='Test Text':\ + x=100: y=50: fontsize=24: fontcolor=yellow@@0.2: box=1: boxcolor=red@@0.2" +@end example + +will draw 'Test Text' with font FreeSerif of size 24 at position x=100 +and y=50 (counting from the top-left corner of the screen), text is +yellow with a red box around it. Both the text and the box have an +opacity of 20%. + +Note that the double quotes are not necessary if spaces are not used +within the parameter list. + +For more information about libfreetype, check: +@url{http://www.freetype.org/}. + +@section fade + +Apply fade-in/out effect to input video. 
+ +It accepts the parameters: +@var{type}:@var{start_frame}:@var{nb_frames} + +@var{type} specifies if the effect type, can be either "in" for +fade-in, or "out" for a fade-out effect. + +@var{start_frame} specifies the number of the start frame for starting +to apply the fade effect. + +@var{nb_frames} specifies the number of frames for which the fade +effect has to last. At the end of the fade-in effect the output video +will have the same intensity as the input video, at the end of the +fade-out transition the output video will be completely black. + +A few usage examples follow, usable too as test scenarios. +@example +# fade in first 30 frames of video +fade=in:0:30 + +# fade out last 45 frames of a 200-frame video +fade=out:155:45 + +# fade in first 25 frames and fade out last 25 frames of a 1000-frame video +fade=in:0:25, fade=out:975:25 + +# make first 5 frames black, then fade in from frame 5-24 +fade=in:5:20 +@end example + +@section fieldorder + +Transform the field order of the input video. + +It accepts one parameter which specifies the required field order that +the input interlaced video will be transformed to. The parameter can +assume one of the following values: + +@table @option +@item 0 or bff +output bottom field first +@item 1 or tff +output top field first +@end table + +Default value is "tff". + +Transformation is achieved by shifting the picture content up or down +by one line, and filling the remaining line with appropriate picture content. +This method is consistent with most broadcast field order converters. + +If the input video is not flagged as being interlaced, or it is already +flagged as being of the required output field order then this filter does +not alter the incoming video. + +This filter is very useful when converting to or from PAL DV material, +which is bottom field first. + +For example: +@example +./avconv -i in.vob -vf "fieldorder=bff" out.dv +@end example + @section fifo Buffer input images and send them when they are requested. @@ -366,13 +958,27 @@ the next filter. The filter accepts a list of pixel format names, separated by ":", for example "yuv420p:monow:rgb24". -The following command: - +Some examples follow: @example -./ffmpeg -i in.avi -vf "format=yuv420p" out.avi +# convert the input video to the format "yuv420p" +format=yuv420p + +# convert the input video to any of the formats in the list +format=yuv420p:yuv444p:yuv410p @end example -will convert the input video to the format "yuv420p". +@section fps + +Convert the video to specified constant framerate by duplicating or dropping +frames as necessary. + +This filter accepts the following named parameters: +@table @option + +@item fps +Desired output framerate. + +@end table @anchor{frei0r} @section frei0r @@ -380,7 +986,7 @@ will convert the input video to the format "yuv420p". Apply a frei0r effect to the input video. To enable compilation of this filter you need to install the frei0r -header and configure FFmpeg with --enable-frei0r. +header and configure Libav with --enable-frei0r. The filter supports the syntax: @example @@ -432,6 +1038,10 @@ regions by truncation to 8bit colordepth. Interpolate the gradients that should go where the bands are, and dither them. +This filter is designed for playback only. Do not use it prior to +lossy compression, because compression tends to lose the dither and +bring back the bands. + The filter takes two optional parameters, separated by ':': @var{strength}:@var{radius} @@ -458,40 +1068,156 @@ gradfun=1.2 Flip the input video horizontally. 
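+
+Combining it with the @code{vflip} filter flips the picture on both axes,
+which amounts to a 180 degree rotation; a minimal filterchain sketch:
+@example
+hflip,vflip
+@end example
+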
-For example to horizontally flip the video in input with -@file{ffmpeg}: +For example to horizontally flip the input video with @command{avconv}: @example -ffmpeg -i in.avi -vf "hflip" out.avi +avconv -i in.avi -vf "hflip" out.avi @end example -@section hqdn3d +@section hqdn3d + +High precision/quality 3d denoise filter. This filter aims to reduce +image noise producing smooth images and making still images really +still. It should enhance compressibility. + +It accepts the following optional parameters: +@var{luma_spatial}:@var{chroma_spatial}:@var{luma_tmp}:@var{chroma_tmp} + +@table @option +@item luma_spatial +a non-negative float number which specifies spatial luma strength, +defaults to 4.0 + +@item chroma_spatial +a non-negative float number which specifies spatial chroma strength, +defaults to 3.0*@var{luma_spatial}/4.0 + +@item luma_tmp +a float number which specifies luma temporal strength, defaults to +6.0*@var{luma_spatial}/4.0 + +@item chroma_tmp +a float number which specifies chroma temporal strength, defaults to +@var{luma_tmp}*@var{chroma_spatial}/@var{luma_spatial} +@end table + +@section lut, lutrgb, lutyuv + +Compute a look-up table for binding each pixel component input value +to an output value, and apply it to input video. + +@var{lutyuv} applies a lookup table to a YUV input video, @var{lutrgb} +to an RGB input video. + +These filters accept in input a ":"-separated list of options, which +specify the expressions used for computing the lookup table for the +corresponding pixel component values. + +The @var{lut} filter requires either YUV or RGB pixel formats in +input, and accepts the options: +@table @option +@var{c0} (first pixel component) +@var{c1} (second pixel component) +@var{c2} (third pixel component) +@var{c3} (fourth pixel component, corresponds to the alpha component) +@end table + +The exact component associated to each option depends on the format in +input. + +The @var{lutrgb} filter requires RGB pixel formats in input, and +accepts the options: +@table @option +@var{r} (red component) +@var{g} (green component) +@var{b} (blue component) +@var{a} (alpha component) +@end table + +The @var{lutyuv} filter requires YUV pixel formats in input, and +accepts the options: +@table @option +@var{y} (Y/luminance component) +@var{u} (U/Cb component) +@var{v} (V/Cr component) +@var{a} (alpha component) +@end table + +The expressions can contain the following constants and functions: + +@table @option +@item E, PI, PHI +the corresponding mathematical approximated values for e +(euler number), pi (greek PI), PHI (golden ratio) + +@item w, h +the input width and height + +@item val +input value for the pixel component + +@item clipval +the input value clipped in the @var{minval}-@var{maxval} range + +@item maxval +maximum value for the pixel component + +@item minval +minimum value for the pixel component + +@item negval +the negated value for the pixel component value clipped in the +@var{minval}-@var{maxval} range , it corresponds to the expression +"maxval-clipval+minval" + +@item clip(val) +the computed value in @var{val} clipped in the +@var{minval}-@var{maxval} range + +@item gammaval(gamma) +the computed gamma correction value of the pixel component value +clipped in the @var{minval}-@var{maxval} range, corresponds to the +expression +"pow((clipval-minval)/(maxval-minval)\,@var{gamma})*(maxval-minval)+minval" + +@end table + +All expressions default to "val". 
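+
+Since every expression defaults to "val", a single component can be
+altered while the others pass through unchanged. A sketch with a purely
+illustrative gain factor, using @code{clip()} to keep the result in the
+valid range:
+@example
+lutyuv="y=clip(val*1.2)"
+@end example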
+ +Some examples follow: +@example +# negate input video +lutrgb="r=maxval+minval-val:g=maxval+minval-val:b=maxval+minval-val" +lutyuv="y=maxval+minval-val:u=maxval+minval-val:v=maxval+minval-val" + +# the above is the same as +lutrgb="r=negval:g=negval:b=negval" +lutyuv="y=negval:u=negval:v=negval" + +# negate luminance +lutyuv=negval -High precision/quality 3d denoise filter. This filter aims to reduce -image noise producing smooth images and making still images really -still. It should enhance compressibility. +# remove chroma components, turns the video into a graytone image +lutyuv="u=128:v=128" -It accepts the following optional parameters: -@var{luma_spatial}:@var{chroma_spatial}:@var{luma_tmp}:@var{chroma_tmp} +# apply a luma burning effect +lutyuv="y=2*val" -@table @option -@item luma_spatial -a non-negative float number which specifies spatial luma strength, -defaults to 4.0 +# remove green and blue components +lutrgb="g=0:b=0" -@item chroma_spatial -a non-negative float number which specifies spatial chroma strength, -defaults to 3.0*@var{luma_spatial}/4.0 +# set a constant alpha channel value on input +format=rgba,lutrgb=a="maxval-minval/2" -@item luma_tmp -a float number which specifies luma temporal strength, defaults to -6.0*@var{luma_spatial}/4.0 +# correct luminance gamma by a 0.5 factor +lutyuv=y=gammaval(0.5) +@end example -@item chroma_tmp -a float number which specifies chroma temporal strength, defaults to -@var{luma_tmp}*@var{chroma_spatial}/@var{luma_spatial} -@end table +@section negate + +Negate input video. -@section noformat +This filter accepts an integer in input, if non-zero it negates the +alpha component (if available). The default value in input is 0. Force libavfilter not to use any of the specified pixel formats for the input to the next filter. @@ -499,14 +1225,15 @@ input to the next filter. The filter accepts a list of pixel format names, separated by ":", for example "yuv420p:monow:rgb24". -The following command: - +Some examples follow: @example -./ffmpeg -i in.avi -vf "noformat=yuv420p, vflip" out.avi -@end example +# force libavfilter to use a format different from "yuv420p" for the +# input to the vflip filter +noformat=yuv420p,vflip -will make libavfilter use a format different from "yuv420p" for the -input to the vflip filter. +# convert the input video to any of the formats not contained in the list +noformat=yuv420p:yuv444p:yuv410p +@end example @section null @@ -517,7 +1244,7 @@ Pass the video source unchanged to the output. Apply video transform using libopencv. To enable this filter install libopencv library and headers and -configure FFmpeg with --enable-libopencv. +configure Libav with --enable-libopencv. The filter takes the parameters: @var{filter_name}@{:=@}@var{filter_params}. @@ -527,11 +1254,66 @@ The filter takes the parameters: @var{filter_name}@{:=@}@var{filter_params}. filter. If not specified the default values are assumed. Refer to the official libopencv documentation for more precise -informations: +information: @url{http://opencv.willowgarage.com/documentation/c/image_filtering.html} Follows the list of supported libopencv filters. +@anchor{dilate} +@subsection dilate + +Dilate an image by using a specific structuring element. +This filter corresponds to the libopencv function @code{cvDilate}. + +It accepts the parameters: @var{struct_el}:@var{nb_iterations}. 
+ +@var{struct_el} represents a structuring element, and has the syntax: +@var{cols}x@var{rows}+@var{anchor_x}x@var{anchor_y}/@var{shape} + +@var{cols} and @var{rows} represent the number of columns and rows of +the structuring element, @var{anchor_x} and @var{anchor_y} the anchor +point, and @var{shape} the shape for the structuring element, and +can be one of the values "rect", "cross", "ellipse", "custom". + +If the value for @var{shape} is "custom", it must be followed by a +string of the form "=@var{filename}". The file with name +@var{filename} is assumed to represent a binary image, with each +printable character corresponding to a bright pixel. When a custom +@var{shape} is used, @var{cols} and @var{rows} are ignored, the number +or columns and rows of the read file are assumed instead. + +The default value for @var{struct_el} is "3x3+0x0/rect". + +@var{nb_iterations} specifies the number of times the transform is +applied to the image, and defaults to 1. + +Follow some example: +@example +# use the default values +ocv=dilate + +# dilate using a structuring element with a 5x5 cross, iterate two times +ocv=dilate=5x5+2x2/cross:2 + +# read the shape from the file diamond.shape, iterate two times +# the file diamond.shape may contain a pattern of characters like this: +# * +# *** +# ***** +# *** +# * +# the specified cols and rows are ignored (but not the anchor point coordinates) +ocv=0x0+2x2/custom=diamond.shape:2 +@end example + +@subsection erode + +Erode an image by using a specific structuring element. +This filter corresponds to the libopencv function @code{cvErode}. + +The filter accepts the parameters: @var{struct_el}:@var{nb_iterations}, +with the same syntax and semantics as the @ref{dilate} filter. + @subsection smooth Smooth the input video. @@ -554,6 +1336,7 @@ other parameters is 0. These parameters correspond to the parameters assigned to the libopencv function @code{cvSmooth}. +@anchor{overlay} @section overlay Overlay one video on top of another. @@ -594,22 +1377,19 @@ Follow some examples: overlay=main_w-overlay_w-10:main_h-overlay_h-10 # insert a transparent PNG logo in the bottom left corner of the input -movie=0:png:logo.png [logo]; -[in][logo] overlay=10:main_h-overlay_h-10 [out] +avconv -i input -i logo -filter_complex 'overlay=10:main_h-overlay_h-10' output # insert 2 different transparent PNG logos (second logo on bottom # right corner): -movie=0:png:logo1.png [logo1]; -movie=0:png:logo2.png [logo2]; -[in][logo1] overlay=10:H-h-10 [in+logo1]; -[in+logo1][logo2] overlay=W-w-10:H-h-10 [out] +avconv -i input -i logo1 -i logo2 -filter_complex +'overlay=10:H-h-10,overlay=W-w-10:H-h-10' output # add a transparent color layer on top of the main video, # WxH specifies the size of the main input to the overlay filter color=red@.3:WxH [over]; [in][over] overlay [out] @end example -You can chain togheter more overlays but the efficiency of such +You can chain together more overlays but the efficiency of such approach is yet to be tested. @section pad @@ -620,6 +1400,39 @@ given coordinates @var{x}, @var{y}. It accepts the following parameters: @var{width}:@var{height}:@var{x}:@var{y}:@var{color}. 
+The parameters @var{width}, @var{height}, @var{x}, and @var{y} are +expressions containing the following constants: + +@table @option +@item E, PI, PHI +the corresponding mathematical approximated values for e +(euler number), pi (greek PI), phi (golden ratio) + +@item in_w, in_h +the input video width and height + +@item iw, ih +same as @var{in_w} and @var{in_h} + +@item out_w, out_h +the output width and height, that is the size of the padded area as +specified by the @var{width} and @var{height} expressions + +@item ow, oh +same as @var{out_w} and @var{out_h} + +@item x, y +x and y offsets as specified by the @var{x} and @var{y} +expressions, or NAN if not yet specified + +@item a +input display aspect ratio, same as @var{iw} / @var{ih} + +@item hsub, vsub +horizontal and vertical chroma subsample values. For example for the +pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1. +@end table + Follows the description of the accepted parameters. @table @option @@ -629,6 +1442,9 @@ Specify the size of the output image with the paddings added. If the value for @var{width} or @var{height} is 0, the corresponding input size is used for the output. +The @var{width} expression can reference the value set by the +@var{height} expression, and vice versa. + The default value of @var{width} and @var{height} is 0. @item x, y @@ -636,6 +1452,9 @@ The default value of @var{width} and @var{height} is 0. Specify the offsets where to place the input image in the padded area with respect to the top/left border of the output image. +The @var{x} expression can reference the value set by the @var{y} +expression, and vice versa. + The default value of @var{x} and @var{y} is 0. @item color @@ -647,13 +1466,29 @@ The default value of @var{color} is "black". @end table -For example: +Some examples follow: @example # Add paddings with color "violet" to the input video. Output video # size is 640x480, the top-left corner of the input video is placed at -# row 0, column 40. +# column 0, row 40. pad=640:480:0:40:violet + +# pad the input to get an output with dimensions increased bt 3/2, +# and put the input video at the center of the padded area +pad="3/2*iw:3/2*ih:(ow-iw)/2:(oh-ih)/2" + +# pad the input to get a squared output with size equal to the maximum +# value between the input width and height, and put the input video at +# the center of the padded area +pad="max(iw\,ih):ow:(ow-iw)/2:(oh-ih)/2" + +# pad the input to get a final w/h ratio of 16:9 +pad="ih*16/9:ih:(ow-iw)/2:(oh-ih)/2" + +# double output size and put the input video in the bottom-right +# corner of the output padded area +pad="2*iw:2*ih:ow-iw:oh-ih" @end example @section pixdesctest @@ -672,13 +1507,36 @@ can be used to test the monowhite pixel format descriptor definition. Scale the input video to @var{width}:@var{height} and/or convert the image format. -For example the command: +The parameters @var{width} and @var{height} are expressions containing +the following constants: -@example -./ffmpeg -i in.avi -vf "scale=200:100" out.avi -@end example +@table @option +@item E, PI, PHI +the corresponding mathematical approximated values for e +(euler number), pi (greek PI), phi (golden ratio) + +@item in_w, in_h +the input width and height + +@item iw, ih +same as @var{in_w} and @var{in_h} + +@item out_w, out_h +the output (cropped) width and height + +@item ow, oh +same as @var{out_w} and @var{out_h} -will scale the input video to a size of 200x100. 
+@item dar, a +input display aspect ratio, same as @var{iw} / @var{ih} + +@item sar +input sample aspect ratio + +@item hsub, vsub +horizontal and vertical chroma subsample values. For example for the +pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1. +@end table If the input image format is different from the format requested by the next filter, the scale filter will convert the input to the @@ -693,6 +1551,182 @@ ratio of the input image. The default value of @var{width} and @var{height} is 0. +Some examples follow: +@example +# scale the input video to a size of 200x100. +scale=200:100 + +# scale the input to 2x +scale=2*iw:2*ih +# the above is the same as +scale=2*in_w:2*in_h + +# scale the input to half size +scale=iw/2:ih/2 + +# increase the width, and set the height to the same size +scale=3/2*iw:ow + +# seek for Greek harmony +scale=iw:1/PHI*iw +scale=ih*PHI:ih + +# increase the height, and set the width to 3/2 of the height +scale=3/2*oh:3/5*ih + +# increase the size, but make the size a multiple of the chroma +scale="trunc(3/2*iw/hsub)*hsub:trunc(3/2*ih/vsub)*vsub" + +# increase the width to a maximum of 500 pixels, keep the same input aspect ratio +scale='min(500\, iw*3/2):-1' +@end example + +@section select +Select frames to pass in output. + +It accepts in input an expression, which is evaluated for each input +frame. If the expression is evaluated to a non-zero value, the frame +is selected and passed to the output, otherwise it is discarded. + +The expression can contain the following constants: + +@table @option +@item PI +Greek PI + +@item PHI +golden ratio + +@item E +Euler number + +@item n +the sequential number of the filtered frame, starting from 0 + +@item selected_n +the sequential number of the selected frame, starting from 0 + +@item prev_selected_n +the sequential number of the last selected frame, NAN if undefined + +@item TB +timebase of the input timestamps + +@item pts +the PTS (Presentation TimeStamp) of the filtered video frame, +expressed in @var{TB} units, NAN if undefined + +@item t +the PTS (Presentation TimeStamp) of the filtered video frame, +expressed in seconds, NAN if undefined + +@item prev_pts +the PTS of the previously filtered video frame, NAN if undefined + +@item prev_selected_pts +the PTS of the last previously filtered video frame, NAN if undefined + +@item prev_selected_t +the PTS of the last previously selected video frame, NAN if undefined + +@item start_pts +the PTS of the first video frame in the video, NAN if undefined + +@item start_t +the time of the first video frame in the video, NAN if undefined + +@item pict_type +the type of the filtered frame, can assume one of the following +values: +@table @option +@item I +@item P +@item B +@item S +@item SI +@item SP +@item BI +@end table + +@item interlace_type +the frame interlace type, can assume one of the following values: +@table @option +@item PROGRESSIVE +the frame is progressive (not interlaced) +@item TOPFIRST +the frame is top-field-first +@item BOTTOMFIRST +the frame is bottom-field-first +@end table + +@item key +1 if the filtered frame is a key-frame, 0 otherwise + +@item pos +the position in the file of the filtered frame, -1 if the information +is not available (e.g. for synthetic video) +@end table + +The default value of the select expression is "1". 
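+
+The @var{key} constant gives a shortcut for the common case of keeping
+only key frames, similar in spirit to the I-frame example shown below:
+@example
+select=key
+@end example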
+ +Some examples follow: + +@example +# select all frames in input +select + +# the above is the same as: +select=1 + +# skip all frames: +select=0 + +# select only I-frames +select='eq(pict_type\,I)' + +# select one frame every 100 +select='not(mod(n\,100))' + +# select only frames contained in the 10-20 time interval +select='gte(t\,10)*lte(t\,20)' + +# select only I frames contained in the 10-20 time interval +select='gte(t\,10)*lte(t\,20)*eq(pict_type\,I)' + +# select frames with a minimum distance of 10 seconds +select='isnan(prev_selected_t)+gte(t-prev_selected_t\,10)' +@end example + +@anchor{setdar} +@section setdar + +Set the Display Aspect Ratio for the filter output video. + +This is done by changing the specified Sample (aka Pixel) Aspect +Ratio, according to the following equation: +@math{DAR = HORIZONTAL_RESOLUTION / VERTICAL_RESOLUTION * SAR} + +Keep in mind that this filter does not modify the pixel dimensions of +the video frame. Also the display aspect ratio set by this filter may +be changed by later filters in the filterchain, e.g. in case of +scaling or if another "setdar" or a "setsar" filter is applied. + +The filter accepts a parameter string which represents the wanted +display aspect ratio. +The parameter can be a floating point number string, or an expression +of the form @var{num}:@var{den}, where @var{num} and @var{den} are the +numerator and denominator of the aspect ratio. +If the parameter is not specified, it is assumed the value "0:1". + +For example to change the display aspect ratio to 16:9, specify: +@example +setdar=16:9 +# the above is equivalent to +setdar=1.77777 +@end example + +See also the @ref{setsar} filter documentation. + @section setpts Change the PTS (presentation timestamp) of the input video frames. @@ -753,6 +1787,32 @@ setpts=N/(25*TB) setpts='1/(25*TB) * (N + 0.05 * sin(N*2*PI/25))' @end example +@anchor{setsar} +@section setsar + +Set the Sample (aka Pixel) Aspect Ratio for the filter output video. + +Note that as a consequence of the application of this filter, the +output display aspect ratio will change according to the following +equation: +@math{DAR = HORIZONTAL_RESOLUTION / VERTICAL_RESOLUTION * SAR} + +Keep in mind that the sample aspect ratio set by this filter may be +changed by later filters in the filterchain, e.g. if another "setsar" +or a "setdar" filter is applied. + +The filter accepts a parameter string which represents the wanted +sample aspect ratio. +The parameter can be a floating point number string, or an expression +of the form @var{num}:@var{den}, where @var{num} and @var{den} are the +numerator and denominator of the aspect ratio. +If the parameter is not specified, it is assumed the value "0:1". + +For example to change the sample aspect ratio to 10:11, specify: +@example +setsar=10:11 +@end example + @section settb Set the timebase to use for the output frames timestamps. @@ -783,13 +1843,72 @@ settb=2*intb settb=AVTB @end example +@section showinfo + +Show a line containing various information for each input video frame. +The input video is not modified. + +The shown line contains a sequence of key/value pairs of the form +@var{key}:@var{value}. + +A description of each shown parameter follows: + +@table @option +@item n +sequential number of the input frame, starting from 0 + +@item pts +Presentation TimeStamp of the input frame, expressed as a number of +time base units. The time base unit depends on the filter input pad. 
+ +@item pts_time +Presentation TimeStamp of the input frame, expressed as a number of +seconds + +@item pos +position of the frame in the input stream, -1 if this information in +unavailable and/or meaningless (for example in case of synthetic video) + +@item fmt +pixel format name + +@item sar +sample aspect ratio of the input frame, expressed in the form +@var{num}/@var{den} + +@item s +size of the input frame, expressed in the form +@var{width}x@var{height} + +@item i +interlaced mode ("P" for "progressive", "T" for top field first, "B" +for bottom field first) + +@item iskey +1 if the frame is a key frame, 0 otherwise + +@item type +picture type of the input frame ("I" for an I-frame, "P" for a +P-frame, "B" for a B-frame, "?" for unknown type). +Check also the documentation of the @code{AVPictureType} enum and of +the @code{av_get_picture_type_char} function defined in +@file{libavutil/avutil.h}. + +@item checksum +Adler-32 checksum of all the planes of the input frame + +@item plane_checksum +Adler-32 checksum of each plane of the input frame, expressed in the form +"[@var{c0} @var{c1} @var{c2} @var{c3}]" +@end table + @section slicify Pass the images of input video on to next video filter as multiple slices. @example -./ffmpeg -i in.avi -vf "slicify=32" out.avi +./avconv -i in.avi -vf "slicify=32" out.avi @end example The filter accepts the slice height as parameter. If the parameter is @@ -798,6 +1917,19 @@ not specified it will use the default value of 16. Adding this in the beginning of filter chains should make filtering faster due to better use of the memory cache. +@section split + +Split input video into several identical outputs. + +The filter accepts a single parameter which specifies the number of outputs. If +unspecified, it defaults to 2. + +For example +@example +avconv -i INPUT -filter_complex split=5 OUTPUT +@end example +will create 5 copies of the input video. + @section transpose Transpose rows with columns in the input video and optionally flip it. @@ -848,7 +1980,7 @@ It accepts the following parameters: Negative values for the amount will blur the input video, while positive values will sharpen. All parameters are optional and default to the -equivalent of the string '5:5:1.0:0:0:0.0'. +equivalent of the string '5:5:1.0:5:5:0.0'. @table @option @@ -866,11 +1998,11 @@ and 5.0, default value is 1.0. @item chroma_msize_x Set the chroma matrix horizontal size. It can be an integer between 3 -and 13, default value is 0. +and 13, default value is 5. @item chroma_msize_y Set the chroma matrix vertical size. It can be an integer between 3 -and 13, default value is 0. +and 13, default value is 5. @item luma_amount Set the chroma effect strength. It can be a float number between -2.0 @@ -885,8 +2017,8 @@ unsharp=7:7:2.5 # Strong blur of both luma and chroma parameters unsharp=7:7:-2:7:7:-2 -# Use the default values with @command{ffmpeg} -./ffmpeg -i in.avi -vf "unsharp" out.mp4 +# Use the default values with @command{avconv} +./avconv -i in.avi -vf "unsharp" out.mp4 @end example @section vflip @@ -894,7 +2026,7 @@ unsharp=7:7:-2:7:7:-2 Flip the input video vertically. @example -./ffmpeg -i in.avi -vf "vflip" out.avi +./avconv -i in.avi -vf "vflip" out.avi @end example @section yadif @@ -902,7 +2034,7 @@ Flip the input video vertically. Deinterlace the input video ("yadif" means "yet another deinterlacing filter"). -It accepts the optional parameters: @var{mode}:@var{parity}. +It accepts the optional parameters: @var{mode}:@var{parity}:@var{auto}. 
@var{mode} specifies the interlacing mode to adopt, accepts one of the following values: @@ -925,14 +2057,28 @@ interlaced video, accepts one of the following values: @table @option @item 0 -assume bottom field first -@item 1 assume top field first +@item 1 +assume bottom field first @item -1 enable automatic detection @end table Default value is -1. +If interlacing is unknown or decoder does not export this information, +top field first will be assumed. + +@var{auto} specifies if deinterlacer should trust the interlaced flag +and only deinterlace frames marked as interlaced + +@table @option +@item 0 +deinterlace all frames +@item 1 +only deinterlace frames marked as interlaced +@end table + +Default value is 0. @c man end VIDEO FILTERS @@ -949,9 +2095,9 @@ This source is mainly intended for a programmatic use, in particular through the interface defined in @file{libavfilter/vsrc_buffer.h}. It accepts the following parameters: -@var{width}:@var{height}:@var{pix_fmt_string}:@var{timebase_num}:@var{timebase_den} +@var{width}:@var{height}:@var{pix_fmt_string}:@var{timebase_num}:@var{timebase_den}:@var{sample_aspect_ratio_num}:@var{sample_aspect_ratio.den} -All the parameters need to be explicitely defined. +All the parameters need to be explicitly defined. Follows the list of the accepted parameters. @@ -968,15 +2114,20 @@ name. @item timebase_num, timebase_den Specify numerator and denomitor of the timebase assumed by the timestamps of the buffered frames. + +@item sample_aspect_ratio.num, sample_aspect_ratio.den +Specify numerator and denominator of the sample aspect ratio assumed +by the video frames. @end table For example: @example -buffer=320:240:yuv410p:1:24 +buffer=320:240:yuv410p:1:24:1:1 @end example will instruct the source to accept video frames with size 320x240 and -with format "yuv410p" and assuming 1/24 as the timestamps timebase. +with format "yuv410p", assuming 1/24 as the timestamps timebase and +square pixels (1:1 sample aspect ratio). Since the pixel format with name "yuv410p" corresponds to the number 6 (check the enum PixelFormat definition in @file{libavutil/pixfmt.h}), this example corresponds to: @@ -1002,7 +2153,7 @@ alpha specifier. The default value is "black". @item frame_size Specify the size of the sourced video, it may be a string of the form -@var{width}x@var{heigth}, or the name of a size abbreviation. The +@var{width}x@var{height}, or the name of a size abbreviation. The default value is "320x240". @item frame_rate @@ -1023,6 +2174,66 @@ to the pad with identifier "in". "color=red@@0.2:qcif:10 [color]; [in][color] overlay [out]" @end example +@section movie + +Read a video stream from a movie container. + +Note that this source is a hack that bypasses the standard input path. It can be +useful in applications that do not support arbitrary filter graphs, but its use +is discouraged in those that do. Specifically in @command{avconv} this filter +should never be used, the @option{-filter_complex} option fully replaces it. + +It accepts the syntax: @var{movie_name}[:@var{options}] where +@var{movie_name} is the name of the resource to read (not necessarily +a file but also a device or a stream accessed through some protocol), +and @var{options} is an optional sequence of @var{key}=@var{value} +pairs, separated by ":". + +The description of the accepted options follows. + +@table @option + +@item format_name, f +Specifies the format assumed for the movie to read, and can be either +the name of a container or an input device. 
If not specified the +format is guessed from @var{movie_name} or by probing. + +@item seek_point, sp +Specifies the seek point in seconds, the frames will be output +starting from this seek point, the parameter is evaluated with +@code{av_strtod} so the numerical value may be suffixed by an IS +postfix. Default value is "0". + +@item stream_index, si +Specifies the index of the video stream to read. If the value is -1, +the best suited video stream will be automatically selected. Default +value is "-1". + +@end table + +This filter allows to overlay a second video on top of main input of +a filtergraph as shown in this graph: +@example +input -----------> deltapts0 --> overlay --> output + ^ + | +movie --> scale--> deltapts1 -------+ +@end example + +Some examples follow: +@example +# skip 3.2 seconds from the start of the avi file in.avi, and overlay it +# on top of the input labelled as "in". +movie=in.avi:seek_point=3.2, scale=180:-1, setpts=PTS-STARTPTS [movie]; +[in] setpts=PTS-STARTPTS, [movie] overlay=16:16 [out] + +# read from a video4linux2 device, and overlay it on top of the input +# labelled as "in" +movie=/dev/video0:f=video4linux2, scale=180:-1, setpts=PTS-STARTPTS [movie]; +[in] setpts=PTS-STARTPTS, [movie] overlay=16:16 [out] + +@end example + @section nullsrc Null video source, never return images. It is mainly useful as a @@ -1044,7 +2255,7 @@ timebase. The expression can contain the constants "PI", "E", "PHI", Provide a frei0r source. To enable compilation of this filter you need to install the frei0r -header and configure FFmpeg with --enable-frei0r. +header and configure Libav with --enable-frei0r. The source supports the syntax: @example @@ -1057,8 +2268,7 @@ form @var{width}x@var{height} or a frame size abbreviation. the form @var{num}/@var{den} or a frame rate abbreviation. @var{src_name} is the name to the frei0r source to load. For more information regarding frei0r and how to set the parameters read the -section "frei0r" (@pxref{frei0r}) in the description of the video -filters. +section @ref{frei0r} in the description of the video filters. Some examples follow: @example @@ -1067,6 +2277,56 @@ Some examples follow: frei0r_src=200x200:10:partik0l=1234 [overlay]; [in][overlay] overlay @end example +@section rgbtestsrc, testsrc + +The @code{rgbtestsrc} source generates an RGB test pattern useful for +detecting RGB vs BGR issues. You should see a red, green and blue +stripe from top to bottom. + +The @code{testsrc} source generates a test video pattern, showing a +color pattern, a scrolling gradient and a timestamp. This is mainly +intended for testing purposes. + +Both sources accept an optional sequence of @var{key}=@var{value} pairs, +separated by ":". The description of the accepted options follows. + +@table @option + +@item size, s +Specify the size of the sourced video, it may be a string of the form +@var{width}x@var{height}, or the name of a size abbreviation. The +default value is "320x240". + +@item rate, r +Specify the frame rate of the sourced video, as the number of frames +generated per second. It has to be a string in the format +@var{frame_rate_num}/@var{frame_rate_den}, an integer number, a float +number or a valid video frame rate abbreviation. The default value is +"25". + +@item sar +Set the sample aspect ratio of the sourced video. + +@item duration +Set the video duration of the sourced video. The accepted syntax is: +@example +[-]HH[:MM[:SS[.m...]]] +[-]S+[.m...] +@end example +See also the function @code{av_parse_time()}. 
+ +If not specified, or the expressed duration is negative, the video is +supposed to be generated forever. +@end table + +For example the following: +@example +testsrc=duration=5.3:size=qcif:rate=10 +@end example + +will generate a video with a duration of 5.3 seconds, with size +176x144 and a framerate of 10 frames per second. + @c man end VIDEO SOURCES @chapter Video Sinks @@ -1074,6 +2334,14 @@ frei0r_src=200x200:10:partik0l=1234 [overlay]; [in][overlay] overlay Below is a description of the currently available video sinks. +@section buffersink + +Buffer video frames, and make them available to the end of the filter +graph. + +This sink is intended for a programmatic use through the interface defined in +@file{libavfilter/buffersink.h}. + @section nullsink Null video sink, do absolutely nothing with the input video. It is @@ -1081,4 +2349,3 @@ mainly useful as a template and to be employed in analysis / debugging tools. @c man end VIDEO SINKS -