Set LFO rate.
@end table
+@section acue
+
+Delay audio filtering until a given wallclock timestamp. See the @ref{cue}
+filter.
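+
+For example, to make the filter wait until a given UNIX timestamp in
+microseconds before passing audio through (the timestamp shown is purely
+illustrative):
+@example
+acue=cue=1528992681000000
+@end example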
+
+@section adeclick
+Remove impulsive noise from input audio.
+
+Samples detected as impulsive noise are replaced by interpolated samples using
+autoregressive modelling.
+
+@table @option
+@item w
+Set window size, in milliseconds. Allowed range is from @code{10} to
+@code{100}. Default value is @code{55} milliseconds.
+This sets the size of the window which will be processed at once.
+
+@item o
+Set window overlap, in percentage of window size. Allowed range is from
+@code{50} to @code{95}. Default value is @code{75} percent.
+Setting this to a very high value increases impulsive noise removal but makes
+the whole process much slower.
+
+@item a
+Set autoregression order, in percentage of window size. Allowed range is from
+@code{0} to @code{25}. Default value is @code{2} percent. This option also
+controls the quality of interpolated samples, which are computed from
+neighbouring good samples.
+
+@item t
+Set threshold value. Allowed range is from @code{1} to @code{100}.
+Default value is @code{2}.
+This controls the strength of the impulsive noise which is going to be removed.
+The lower the value, the more samples will be detected as impulsive noise.
+
+@item b
+Set burst fusion, in percentage of window size. Allowed range is @code{0} to
+@code{10}. Default value is @code{2}.
+If any two samples detected as noise are closer together than this value, all
+samples in between will also be detected as noise.
+
+@item m
+Set overlap method.
+
+It accepts the following values:
+@table @option
+@item a
+Select overlap-add method. Even samples that are not interpolated are slightly
+changed with this method.
+
+@item s
+Select overlap-save method. Samples that are not interpolated remain unchanged.
+@end table
+
+Default value is @code{a}.
+@end table
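+
+For example, to use a larger analysis window and more aggressive click
+detection (the input file name is illustrative):
+@example
+ffmpeg -i input.wav -af adeclick=w=100:t=1 output.wav
+@end example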
+
+@section adeclip
+Remove clipped samples from input audio.
+
+Samples detected as clipped are replaced by interpolated samples using
+autoregressive modelling.
+
+@table @option
+@item w
+Set window size, in milliseconds. Allowed range is from @code{10} to @code{100}.
+Default value is @code{55} milliseconds.
+This sets the size of the window which will be processed at once.
+
+@item o
+Set window overlap, in percentage of window size. Allowed range is from @code{50}
+to @code{95}. Default value is @code{75} percent.
+
+@item a
+Set autoregression order, in percentage of window size. Allowed range is from
+@code{0} to @code{25}. Default value is @code{8} percent. This option also
+controls the quality of interpolated samples, which are computed from
+neighbouring good samples.
+
+@item t
+Set threshold value. Allowed range is from @code{1} to @code{100}.
+Default value is @code{10}. Higher values make clip detection less aggressive.
+
+@item n
+Set size of histogram used to detect clips. Allowed range is from @code{100} to @code{9999}.
+Default value is @code{1000}. Higher values make clip detection less aggressive.
+
+@item m
+Set overlap method.
+
+It accepts the following values:
+@table @option
+@item a
+Select overlap-add method. Even samples that are not interpolated are slightly
+changed with this method.
+
+@item s
+Select overlap-save method. Samples that are not interpolated remain unchanged.
+@end table
+
+Default value is @code{a}.
+@end table
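+
+For example, to make clip detection more aggressive by lowering the threshold
+(the input file name is illustrative):
+@example
+ffmpeg -i input.wav -af adeclip=t=5 output.wav
+@end example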
+
@section adelay
Delay one or more audio channels.
@end example
@end itemize
+@section aderivative, aintegral
+
+Compute the derivative/integral of an audio stream.
+
+Applying both filters one after another produces the original audio.
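+
+For example, applying the derivative and then the integral in sequence
+(the input file name is illustrative) should leave the audio unchanged:
+@example
+ffmpeg -i input.wav -af aderivative,aintegral output.wav
+@end example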
+
@section aecho
Apply echoing to the input audio.
@item maxir
Set max allowed Impulse Response filter duration in seconds. Default is 30 seconds.
Allowed range is 0.1 to 60 seconds.
+
+@item response
+Show IR frequency response, magnitude and phase in additional video stream.
+By default it is disabled.
+
+@item channel
+Set for which IR channel to display frequency response. By default, the first
+channel is displayed. This option is used only when @var{response} is enabled.
+
+@item size
+Set video stream size. This option is used only when @var{response} is enabled.
@end table
@subsection Examples
16-bit integers
@end table
+@item response
+Show IR frequency response, magnitude and phase in additional video stream.
+By default it is disabled.
+
+@item channel
+Set for which IR channel to display frequency response. By default, the first
+channel is displayed. This option is used only when @var{response} is enabled.
+
+@item size
+Set video stream size. This option is used only when @var{response} is enabled.
@end table
Coefficients in @code{tf} format are separated by spaces and are in ascending
The filter accepts exactly one parameter, the audio tempo. If not
specified then the filter will assume nominal 1.0 tempo. Tempo must
-be in the [0.5, 2.0] range.
+be in the [0.5, 100.0] range.
+
+Note that a tempo greater than 2 will skip some samples rather than blend them
+in. If for any reason this is a concern it is always possible to daisy-chain
+possible to daisy-chain several instances of atempo to achieve the
+desired product tempo.
@subsection Examples
@end example
@item
-To speed up audio to 125% tempo:
+To speed up audio to 300% tempo:
+@example
+atempo=3
+@end example
+
+@item
+To speed up audio to 300% tempo by daisy-chaining two atempo instances:
@example
-atempo=1.25
+atempo=sqrt(3),atempo=sqrt(3)
@end example
@end itemize
@section rubberband
Apply time-stretching and pitch-shifting with librubberband.
+To enable compilation of this filter, you need to configure FFmpeg with
+@code{--enable-librubberband}.
+
The filter accepts the following options:
@table @option
this value will not alter source pixel. Default is 10.
Allowed range is from 0 to 65535.
+@item low
+Set lower limit for changing source pixel. Default is 65535. Allowed range is
+from 0 to 65535. This option controls the maximum possible value that will
+decrease the source pixel value.
+
+@item high
+Set high limit for changing source pixel. Default is 65535. Allowed range is
+from 0 to 65535. This option controls the maximum possible value that will
+increase the source pixel value.
+
@item planes
Set which planes to filter. Default is all. Allowed range is from 0 to 15.
@end table
@table @option
@item sizeX
-Set horizontal kernel size.
+Set horizontal radius size.
@item planes
Set which planes to filter. By default all planes are filtered.
@item sizeY
-Set vertical kernel size, if zero it will be same as @code{sizeX}.
+Set vertical radius size, if zero it will be same as @code{sizeX}.
Default is @code{0}.
@end table
@item SW
@item SH
-Width and height scale depending on the currently filtered plane. It is the
-ratio between the corresponding luma plane number of pixels and the current
-plane ones. E.g. for YUV4:2:0 the values are @code{1,1} for the luma plane, and
-@code{0.5,0.5} for chroma planes.
+Width and height scale for the plane being filtered. It is the
+ratio between the dimensions of the current plane to the luma plane,
+e.g. for a @code{yuv420p} frame, the values are @code{1,1} for
+the luma plane and @code{0.5,0.5} for the chroma planes.
@item T
Time of the current frame, expressed in seconds.
@section convolution
-Apply convolution 3x3, 5x5 or 7x7 filter.
+Apply convolution with a 3x3, 5x5 or 7x7 matrix, or with a horizontal/vertical
+matrix of up to 49 elements.
The filter accepts the following options:
@item 2m
@item 3m
Set matrix for each plane.
-Matrix is sequence of 9, 25 or 49 signed integers.
+Matrix is a sequence of 9, 25 or 49 signed integers in @var{square} mode,
+and an odd number of signed integers, from 1 to 49, in @var{row} mode.
@item 0rdiv
@item 1rdiv
@item 3bias
Set bias for each plane. This value is added to the result of the multiplication.
Useful for making the overall image brighter or darker. Default is 0.0.
+
+@item 0mode
+@item 1mode
+@item 2mode
+@item 3mode
+Set matrix mode for each plane. Can be @var{square}, @var{row} or @var{column}.
+Default is @var{square}.
@end table
@subsection Examples
playback.
@end table
+@anchor{cue}
+@section cue
+
+Delay video filtering until a given wallclock timestamp. The filter first
+passes on @option{preroll} amount of content, then it buffers at most
+@option{buffer} amount of content and waits for the cue. After reaching the cue
+it forwards the buffered frames and also any subsequent frames coming in its
+input.
+
+The filter can be used to synchronize the output of multiple ffmpeg processes for
+realtime output devices like decklink. By putting the delay in the filtering
+chain and pre-buffering frames the process can pass on data to output almost
+immediately after the target wallclock timestamp is reached.
+
+Perfect frame accuracy cannot be guaranteed, but the result is good enough for
+some use cases.
+
+@table @option
+
+@item cue
+The cue timestamp expressed as a UNIX timestamp in microseconds. Default is 0.
+
+@item preroll
+The duration of content to pass on as preroll expressed in seconds. Default is 0.
+
+@item buffer
+The maximum duration of content to buffer before waiting for the cue expressed
+in seconds. Default is 0.
+
+@end table
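+
+For example, to wait for a given wallclock timestamp, passing on half a second
+of preroll and buffering up to 5 seconds of content (the timestamp is purely
+illustrative):
+@example
+cue=cue=1528992681000000:preroll=0.5:buffer=5
+@end example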
+
@anchor{curves}
@section curves
The second argument is an offset added to the timestamp.
+If the format is set to @code{hms}, a third argument @code{24HH} may be
+supplied to present the hour part of the formatted timestamp in 24h format
+(00-23).
+
If the format is set to @code{localtime} or @code{gmtime},
a third argument may be supplied: a strftime() format string.
By default, @var{YYYY-MM-DD HH:MM:SS} format will be used.
@item colormix
Mix the colors to create a paint/cartoon effect.
-@end table
+@item canny
+Apply Canny edge detector on all selected planes.
+@end table
Default value is @var{wires}.
+
+@item planes
+Select planes for filtering. By default all available planes are filtered.
@end table
@subsection Examples
@end itemize
+@section fftdnoiz
+Denoise frames using 3D FFT (frequency domain filtering).
+
+The filter accepts the following options:
+
+@table @option
+@item sigma
+Set the noise sigma constant. This sets denoising strength.
+Default value is 1. Allowed range is from 0 to 30.
+Using very high sigma with low overlap may give blocking artifacts.
+
+@item amount
+Set amount of denoising. By default all detected noise is reduced.
+Default value is 1. Allowed range is from 0 to 1.
+
+@item block
+Set size of block. Default is 4; can be 3, 4, 5 or 6.
+The actual size of the block in pixels is 2 to the power of @var{block}, so by
+default the block size in pixels is 2^4 which is 16.
+
+@item overlap
+Set block overlap. Default is 0.5. Allowed range is from 0.2 to 0.8.
+
+@item prev
+Set number of previous frames to use for denoising. By default it is set to 0.
+
+@item next
+Set number of next frames to use for denoising. By default it is set to 0.
+
+@item planes
+Set planes which will be filtered. By default, all available planes except
+alpha are filtered.
+@end table
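+
+For example, to apply stronger temporal denoising using one previous and one
+next frame (the file names are illustrative):
+@example
+ffmpeg -i input.mp4 -vf fftdnoiz=sigma=5:prev=1:next=1 output.mp4
+@end example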
+
@section field
Extract a single field from an interlaced image using stride
@end itemize
+@section greyedge
+A color constancy variation filter which estimates scene illumination via the
+grey edge algorithm and corrects the scene colors accordingly.
+
+See: @url{https://staff.science.uva.nl/th.gevers/pub/GeversTIP07.pdf}
+
+The filter accepts the following options:
+
+@table @option
+@item difford
+The order of differentiation to be applied on the scene. Must be chosen in the
+range [0,2]; the default value is 1.
+
+@item minknorm
+The Minkowski parameter to be used for calculating the Minkowski distance. Must
+be chosen in the range [0,20]; the default value is 1. Set to 0 to get the
+max value instead of calculating the Minkowski distance.
+
+@item sigma
+The standard deviation of Gaussian blur to be applied on the scene. Must be
+chosen in the range [0,1024.0]; the default value is 1.
+floor(@var{sigma} * break_off_sigma(3)) cannot be equal to 0 if @var{difford}
+is greater than 0.
+@end table
+
+@subsection Examples
+@itemize
+
+@item
+Grey Edge:
+@example
+greyedge=difford=1:minknorm=5:sigma=2
+@end example
+
+@item
+Max Edge:
+@example
+greyedge=difford=1:minknorm=0:sigma=2
+@end example
+
+@end itemize
+
@anchor{haldclut}
@section haldclut
where @var{r_0} is halve of the image diagonal and @var{r_src} and @var{r_tgt} are the
distances from the focal point in the source and target images, respectively.
+@section lensfun
+
+Apply lens correction via the lensfun library (@url{http://lensfun.sourceforge.net/}).
+
+The @code{lensfun} filter requires the camera make, camera model, and lens model
+to apply the lens correction. The filter will load the lensfun database and
+query it to find the corresponding camera and lens entries in the database. As
+long as these entries can be found with the given options, the filter can
+perform corrections on frames. Note that incomplete strings will result in the
+filter choosing the best match with the given options, and the filter will
+output the chosen camera and lens models (logged with level "info"). The make,
+camera model, and lens model are all required.
+
+The filter accepts the following options:
+
+@table @option
+@item make
+The make of the camera (for example, "Canon"). This option is required.
+
+@item model
+The model of the camera (for example, "Canon EOS 100D"). This option is
+required.
+
+@item lens_model
+The model of the lens (for example, "Canon EF-S 18-55mm f/3.5-5.6 IS STM"). This
+option is required.
+
+@item mode
+The type of correction to apply. The following values are valid options:
+
+@table @samp
+@item vignetting
+Enables fixing lens vignetting.
+
+@item geometry
+Enables fixing lens geometry. This is the default.
+
+@item subpixel
+Enables fixing chromatic aberrations.
+
+@item vig_geo
+Enables fixing lens vignetting and lens geometry.
+
+@item vig_subpixel
+Enables fixing lens vignetting and chromatic aberrations.
+
+@item distortion
+Enables fixing both lens geometry and chromatic aberrations.
+
+@item all
+Enables all possible corrections.
+
+@end table
+@item focal_length
+The focal length of the image/video (zoom; expected constant for video). For
+example, an 18--55mm lens has a focal length range of [18--55], so a value in
+that range should be chosen when using that lens. Default 18.
+
+@item aperture
+The aperture of the image/video (expected constant for video). Note that
+aperture is only used for vignetting correction. Default 3.5.
+
+@item focus_distance
+The focus distance of the image/video (expected constant for video). Note that
+focus distance is only used for vignetting and only slightly affects the
+vignetting correction process. If unknown, leave it at the default value (which
+is 1000).
+
+@item target_geometry
+The target geometry of the output image/video. The following values are valid
+options:
+
+@table @samp
+@item rectilinear (default)
+@item fisheye
+@item panoramic
+@item equirectangular
+@item fisheye_orthographic
+@item fisheye_stereographic
+@item fisheye_equisolid
+@item fisheye_thoby
+@end table
+@item reverse
+Apply the reverse of image correction (instead of correcting distortion, apply
+it).
+
+@item interpolation
+The type of interpolation used when correcting distortion. The following values
+are valid options:
+
+@table @samp
+@item nearest
+@item linear (default)
+@item lanczos
+@end table
+@end table
+
+@subsection Examples
+
+@itemize
+@item
+Apply lens correction with make "Canon", camera model "Canon EOS 100D", and lens
+model "Canon EF-S 18-55mm f/3.5-5.6 IS STM" with focal length of "18" and
+aperture of "8.0".
+
+@example
+ffmpeg -i input.mov -vf lensfun=make=Canon:model="Canon EOS 100D":lens_model="Canon EF-S 18-55mm f/3.5-5.6 IS STM":focal_length=18:aperture=8 -c:v h264 -b:v 8000k output.mov
+@end example
+
+@item
+Apply the same as before, but only for the first 5 seconds of video.
+
+@example
+ffmpeg -i input.mov -vf lensfun=make=Canon:model="Canon EOS 100D":lens_model="Canon EF-S 18-55mm f/3.5-5.6 IS STM":focal_length=18:aperture=8:enable='lte(t\,5)' -c:v h264 -b:v 8000k output.mov
+@end example
+
+@end itemize
+
@section libvmaf
Obtain the VMAF (Video Multi-Method Assessment Fusion)
It requires Netflix's vmaf library (libvmaf) as a pre-requisite.
After installing the library it can be enabled using:
-@code{./configure --enable-libvmaf}.
+@code{./configure --enable-libvmaf --enable-version3}.
If no model path is specified it uses the default model: @code{vmaf_v0.6.1.pkl}.
The filter has following options:
@item pool
Set the pool method (mean, min or harmonic mean) to be used for computing vmaf.
+
+@item n_threads
+Set number of threads to be used when computing vmaf.
+
+@item n_subsample
+Set interval for frame subsampling used when computing vmaf.
+
+@item enable_conf_interval
+Enables confidence interval.
@end table
This filter also supports the @ref{framesync} options.
Set first frame of loop. Default is 0.
@end table
+@section lut1d
+
+Apply a 1D LUT to an input video.
+
+The filter accepts the following options:
+
+@table @option
+@item file
+Set the 1D LUT file name.
+
+Currently supported formats:
+@table @samp
+@item cube
+Iridas
+@end table
+
+@item interp
+Select interpolation mode.
+
+Available values are:
+
+@table @samp
+@item nearest
+Use values from the nearest defined point.
+@item linear
+Interpolate values using linear interpolation.
+@item cubic
+Interpolate values using cubic interpolation.
+@end table
+@end table
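+
+For example, to apply an Iridas cube 1D LUT with cubic interpolation (the LUT
+file name is illustrative):
+@example
+lut1d=file=grade.cube:interp=cubic
+@end example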
+
@anchor{lut3d}
@section lut3d
@section negate
-Negate input video.
+Negate (invert) the input video.
+
+It accepts the following option:
+
+@table @option
-It accepts an integer in input; if non-zero it negates the
-alpha component (if available). The default value in input is 0.
+@item negate_alpha
+With value 1, it negates the alpha component, if present. Default value is 0.
+@end table
@section nlmeans
@section ocr
Optical Character Recognition
-This filter uses Tesseract for optical character recognition.
+This filter uses Tesseract for optical character recognition. To enable
+compilation of this filter, you need to configure FFmpeg with
+@code{--enable-libtesseract}.
It accepts the following options:
@code{0} (not enabled).
@end table
+@section sr
+
+Scale the input by applying one of the super-resolution methods based on
+convolutional neural networks.
+
+Training scripts as well as scripts for model generation are provided in
+the repository at @url{https://github.com/HighVoltageRocknRoll/sr.git}.
+
+The filter accepts the following options:
+
+@table @option
+@item model
+Specify which super-resolution model to use. This option accepts the following values:
+
+@table @samp
+@item srcnn
+Super-Resolution Convolutional Neural Network model.
+See @url{https://arxiv.org/abs/1501.00092}.
+
+@item espcn
+Efficient Sub-Pixel Convolutional Neural Network model.
+See @url{https://arxiv.org/abs/1609.05158}.
+
+@end table
+
+Default value is @samp{srcnn}.
+
+@item dnn_backend
+Specify which DNN backend to use for model loading and execution. This option accepts
+the following values:
+
+@table @samp
+@item native
+Native implementation of DNN loading and execution.
+
+@item tensorflow
+TensorFlow backend. To enable this backend you
+need to install the TensorFlow for C library (see
+@url{https://www.tensorflow.org/install/install_c}) and configure FFmpeg with
+@code{--enable-libtensorflow}.
+
+@end table
+
+Default value is @samp{native}.
+
+@item scale_factor
+Set scale factor for the SRCNN model, for which a custom model file was
+provided. Allowed values are @code{2}, @code{3} and @code{4}. Default value is
+@code{2}. The scale factor is necessary for the SRCNN model, because it accepts
+input upscaled using bicubic upscaling with the proper scale factor.
+
+@item model_filename
+Set path to model file specifying network architecture and its parameters.
+Note that different backends use different file formats. The TensorFlow backend
+can load files in both formats, while the native backend can only load files in
+its own format.
+
+@end table
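+
+For example, to upscale with the ESPCN model using the native backend (the
+model file name is illustrative):
+@example
+ffmpeg -i input.mp4 -vf sr=model=espcn:dnn_backend=native:model_filename=espcn.model output.mp4
+@end example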
+
@anchor{subtitles}
@section subtitles
subtitles=video.mkv:si=1
@end example
-To make the subtitles stream from @file{sub.srt} appear in transparent green
+To make the subtitles stream from @file{sub.srt} appear in 80% transparent blue
@code{DejaVu Serif}, use:
@example
-subtitles=sub.srt:force_style='FontName=DejaVu Serif,PrimaryColour=&HAA00FF00'
+subtitles=sub.srt:force_style='FontName=DejaVu Serif,PrimaryColour=&HCCFF0000'
@end example
@section super2xsai
mapping from a lower range to a higher range.
@end table
+@anchor{transpose}
@section transpose
Transpose rows with columns in the input video and optionally flip it.
transpose=1:portrait
@end example
+@section transpose_npp
+
+Transpose rows with columns in the input video and optionally flip it.
+For more in-depth examples see the @ref{transpose} video filter, which shares
+mostly the same options.
+
+It accepts the following parameters:
+
+@table @option
+
+@item dir
+Specify the transposition direction.
+
+Can assume the following values:
+@table @samp
+@item cclock_flip
+Rotate by 90 degrees counterclockwise and vertically flip. (default)
+
+@item clock
+Rotate by 90 degrees clockwise.
+
+@item cclock
+Rotate by 90 degrees counterclockwise.
+
+@item clock_flip
+Rotate by 90 degrees clockwise and vertically flip.
+@end table
+
+@item passthrough
+Do not apply the transposition if the input geometry matches the one specified
+by the given value. It accepts the following values:
+@table @samp
+@item none
+Always apply transposition. (default)
+@item portrait
+Preserve portrait geometry (when @var{height} >= @var{width}).
+@item landscape
+Preserve landscape geometry (when @var{width} >= @var{height}).
+@end table
+
+@end table
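+
+For example, to rotate clockwise by 90 degrees while leaving landscape input
+untouched:
+@example
+transpose_npp=dir=clock:passthrough=landscape
+@end example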
+
@section trim
Trim the input so that the output contains one continuous subpart of the input.
@section xbr
Apply the xBR high-quality magnification filter which is designed for pixel
art. It follows a set of edge-detection rules, see
-@url{http://www.libretro.com/forums/viewtopic.php?f=6&t=134}.
+@url{https://forums.libretro.com/t/xbr-algorithm-tutorial/123}.
It accepts the following option:
@anchor{zscale}
@section zscale
Scale (resize) the input video, using the z.lib library:
-https://github.com/sekrit-twc/zimg.
+@url{https://github.com/sekrit-twc/zimg}. To enable compilation of this
+filter, you need to configure FFmpeg with @code{--enable-libzimg}.
The zscale filter forces the output display aspect ratio to be the same
as the input, by changing the output sample aspect ratio.
@anchor{color}
@anchor{haldclutsrc}
@anchor{nullsrc}
+@anchor{pal75bars}
+@anchor{pal100bars}
@anchor{rgbtestsrc}
@anchor{smptebars}
@anchor{smptehdbars}
@anchor{testsrc}
@anchor{testsrc2}
@anchor{yuvtestsrc}
-@section allrgb, allyuv, color, haldclutsrc, nullsrc, rgbtestsrc, smptebars, smptehdbars, testsrc, testsrc2, yuvtestsrc
+@section allrgb, allyuv, color, haldclutsrc, nullsrc, pal75bars, pal100bars, rgbtestsrc, smptebars, smptehdbars, testsrc, testsrc2, yuvtestsrc
The @code{allrgb} source returns frames of size 4096x4096 of all rgb colors.
mainly useful to be employed in analysis / debugging tools, or as the
source for filters which ignore the input data.
+The @code{pal75bars} source generates a color bars pattern, based on
+EBU PAL recommendations with 75% color levels.
+
+The @code{pal100bars} source generates a color bars pattern, based on
+EBU PAL recommendations with 100% color levels.
+
The @code{rgbtestsrc} source generates an RGB test pattern useful for
detecting RGB vs BGR issues. You should see a red, green and blue
stripe from top to bottom.
@section aphasemeter
-Convert input audio to a video output, displaying the audio phase.
+Measures the phase of the input audio, which is exported as metadata
+@code{lavfi.aphasemeter.phase}, representing the mean phase of the current
+audio frame. A video output can also be produced and is enabled by default.
+The audio is passed through as the first output.
-The filter accepts the following options:
+Audio will be rematrixed to stereo if it has a different channel layout. The
+phase value is in the range @code{[-1, 1]}, where @code{-1} means the left and
+right channels are completely out of phase and @code{1} means the channels are
+in phase.
+
+The filter accepts the following options, all related to its video output:
@table @option
@item rate, r
Enable video output. Default is enabled.
@end table
-The filter also exports the frame metadata @code{lavfi.aphasemeter.phase} which
-represents mean phase of current audio frame. Value is in range @code{[-1, 1]}.
-The @code{-1} means left and right channels are completely out of phase and
-@code{1} means channels are in phase.
-
@section avectorscope
Convert input audio to a video output, representing the audio vector
constants:
@table @option
-@item FRAME_RATE
+@item FRAME_RATE, FR
frame rate, only defined for constant frame-rate video
@item PTS