Range is between 0 and 1.
@end table
+@subsection Commands
+
+This filter supports all the above options as @ref{commands}.
+
@section acontrast
Simple audio dynamic range compression/expansion filter.
@anchor{afir}
@section afir
-Apply an arbitrary Frequency Impulse Response filter.
+Apply an arbitrary Finite Impulse Response filter.
This filter is designed for applying long FIR filters,
up to 60 seconds long.
room equalization, cross talk cancellation, wavefield synthesis,
auralization, ambiophonics, ambisonics and spatialization.
-This filter uses the second stream as FIR coefficients.
-If the second stream holds a single channel, it will be used
+This filter uses the second and subsequent input streams as FIR coefficients.
+If a non-first stream holds a single channel, it will be used
for all input channels in the first stream, otherwise
-the number of channels in the second stream must be same as
+the number of channels in the non-first stream must be the same as
the number of channels in the first stream.
It accepts the following parameters:
@item minp
Set minimal partition size used for convolution. Default is @var{8192}.
-Allowed range is from @var{8} to @var{32768}.
+Allowed range is from @var{1} to @var{32768}.
Lower values decrease latency at the cost of higher CPU usage.
@item maxp
Set maximal partition size used for convolution. Default is @var{8192}.
Allowed range is from @var{8} to @var{32768}.
Lower values may increase CPU usage.
+
+@item nbirs
+Set the number of input impulse response streams which will be switchable at runtime.
+Allowed range is from @var{1} to @var{32}. Default is @var{1}.
+
+@item ir
+Set the IR stream which will be used for convolution, starting from @var{0}. It must
+always be lower than the value supplied with the @code{nbirs} option. Default is @var{0}.
+This option can be changed at runtime via @ref{commands}.
@end table
@subsection Examples
It accepts the following parameters:
@table @option
-@item sample_fmts
+@item sample_fmts, f
A '|'-separated list of requested sample formats.
-@item sample_rates
+@item sample_rates, r
A '|'-separated list of requested sample rates.
-@item channel_layouts
+@item channel_layouts, cl
A '|'-separated list of requested channel layouts.
See @ref{channel layout syntax,,the Channel Layout section in the ffmpeg-utils(1) manual,ffmpeg-utils}
@item channels, c
Specify which channels to filter, by default all available are filtered.
+
+@item normalize, n
+Normalize biquad coefficients. By default this is disabled.
+Enabling it will normalize the magnitude response at DC to 0dB.
@end table
@subsection Commands
Set additional parameter which controls sigmoid function.
@end table
+@subsection Commands
+
+This filter supports all the above options as @ref{commands}.
+
@section asr
Automatic Speech Recognition
@end itemize
+@section axcorrelate
+Calculate normalized cross-correlation between two input audio streams.
+
+The resulting samples are always between -1 and 1 inclusive.
+A result of 1 means the two inputs are highly correlated in the selected segment.
+A result of 0 means they are not correlated at all.
+A result of -1 means the two inputs are out of phase, i.e. they cancel each
+other.
+
+The filter accepts the following options:
+
+@table @option
+@item size
+Set size of segment over which cross-correlation is calculated.
+Default is 256. Allowed range is from 2 to 131072.
+
+@item algo
+Set algorithm for cross-correlation. Can be @code{slow} or @code{fast}.
+Default is @code{slow}. The fast algorithm assumes that mean values over any given
+segment are always zero, and thus needs far fewer calculations.
+This is generally not true, but it is valid for typical audio streams.
+@end table
+
+@subsection Examples
+
+@itemize
+@item
+Calculate correlation between channels in stereo audio stream:
+@example
+ffmpeg -i stereo.wav -af channelsplit,axcorrelate=size=1024:algo=fast correlation.wav
+@end example
+@end itemize
+
@section bandpass
Apply a two-pole Butterworth band-pass filter with central
@item channels, c
Specify which channels to filter, by default all available are filtered.
+
+@item normalize, n
+Normalize biquad coefficients. By default this is disabled.
+Enabling it will normalize the magnitude response at DC to 0dB.
@end table
@subsection Commands
@item channels, c
Specify which channels to filter, by default all available are filtered.
+
+@item normalize, n
+Normalize biquad coefficients. By default this is disabled.
+Enabling it will normalize the magnitude response at DC to 0dB.
@end table
@subsection Commands
@item channels, c
Specify which channels to filter, by default all available are filtered.
+
+@item normalize, n
+Normalize biquad coefficients. By default this is disabled.
+Enabling it will normalize the magnitude response at DC to 0dB.
@end table
@subsection Commands
@item mix, m
How much to use filtered signal in output. Default is 1.
Range is between 0 and 1.
+
+@item channels, c
+Specify which channels to filter, by default all available are filtered.
+
+@item normalize, n
+Normalize biquad coefficients. By default this is disabled.
+Enabling it will normalize the magnitude response at DC to 0dB.
@end table
@section bs2b
Enable clipping. By default is enabled.
@end table
+@subsection Commands
+
+This filter supports all the above options as @ref{commands}.
+
@section dcshift
Apply a DC shift to the audio.
frame.
In general, smaller parameters result in stronger compression, and vice versa.
Values below 3.0 are not recommended, because audible distortion may appear.
+
+@item threshold, t
+Set the target threshold value. This specifies the lowest permissible
+magnitude level for the audio input which will be normalized.
+If the input frame volume is above this value, the frame will be normalized.
+Otherwise, the frame may not be normalized at all. The default value is set
+to 0, which means all input frames will be normalized.
+This option is mostly useful to avoid amplifying digital noise.
@end table
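+
+For example, to skip normalizing very quiet passages such as digital
+silence or background noise (the threshold value here is purely
+illustrative):
+@example
+ffmpeg -i input.wav -af dynaudnorm=threshold=0.02 output.wav
+@end example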
+@subsection Commands
+
+This filter supports all the above options as @ref{commands}.
+
@section earwax
Make audio easier to listen to on headphones.
@item channels, c
Specify which channels to filter, by default all available are filtered.
+
+@item normalize, n
+Normalize biquad coefficients. By default this is disabled.
+Enabling it will normalize the magnitude response at DC to 0dB.
@end table
@subsection Examples
Enable clipping. By default is enabled.
@end table
+@subsection Commands
+
+This filter supports all the above options as @ref{commands}.
+
@section firequalizer
Apply FIR Equalization using arbitrary frequency response.
@item channels, c
Specify which channels to filter, by default all available are filtered.
+
+@item normalize, n
+Normalize biquad coefficients. By default this is disabled.
+Enabling it will normalize the magnitude response at DC to 0dB.
@end table
@subsection Commands
EBU R128 loudness normalization. Includes both dynamic and linear normalization modes.
Support for both single pass (livestreams, files) and double pass (files) modes.
-This algorithm can target IL, LRA, and maximum true peak. To accurately detect true peaks,
-the audio stream will be upsampled to 192 kHz unless the normalization mode is linear.
+This algorithm can target IL, LRA, and maximum true peak. In dynamic mode, to accurately
+detect true peaks, the audio stream will be upsampled to 192 kHz.
Use the @code{-ar} option or @code{aresample} filter to explicitly set an output sample rate.
The filter accepts the following options:
Range is -99.0 - +99.0. Default is +0.0.
@item linear
-Normalize linearly if possible.
-measured_I, measured_LRA, measured_TP, and measured_thresh must also
-to be specified in order to use this mode.
-Options are true or false. Default is true.
+Normalize by linearly scaling the source audio.
+@code{measured_I}, @code{measured_LRA}, @code{measured_TP},
+and @code{measured_thresh} must all be specified. Target LRA shouldn't
+be lower than source LRA and the change in integrated loudness shouldn't
+result in a true peak which exceeds the target TP. If any of these
+conditions aren't met, normalization mode will revert to @var{dynamic}.
+Options are @code{true} or @code{false}. Default is @code{true}.
@item dual_mono
Treat mono input files as "dual-mono". If a mono file is intended for playback
@item channels, c
Specify which channels to filter, by default all available are filtered.
+
+@item normalize, n
+Normalize biquad coefficients. By default this is disabled.
+Enabling it will normalize the magnitude response at DC to 0dB.
@end table
@subsection Examples
Range is between 0 and 1.
@end table
+@subsection Commands
+
+This filter supports all the above options as @ref{commands}.
+
@subsection Examples
@itemize
Set level of input signal of original channel. Default is 0.8.
@end table
+@subsection Commands
+
+This filter supports all the above options except @code{delay} as @ref{commands}.
+
@section superequalizer
Apply 18 band equalizer.
@item channels, c
Specify which channels to filter, by default all available are filtered.
+
+@item normalize, n
+Normalize biquad coefficients. By default this is disabled.
+Enabling it will normalize the magnitude response at DC to 0dB.
@end table
@subsection Commands
Default value for @var{replaygain_preamp} is 0.0.
+@item replaygain_noclip
+Prevent clipping by limiting the gain applied.
+
+Default value for @var{replaygain_noclip} is 1.
+
@item eval
Set when the volume expression is evaluated.
If the specified expression is not valid, it is kept at its current
value.
-@item replaygain_noclip
-Prevent clipping by limiting the gain applied.
-
-Default value for @var{replaygain_noclip} is 1.
-
@end table
@subsection Examples
@end itemize
+@section afirsrc
+
+Generate FIR coefficients using the frequency sampling method.
+
+The resulting stream can be used with @ref{afir} filter for filtering the audio signal.
+
+The filter accepts the following options:
+
+@table @option
+@item taps, t
+Set the number of filter coefficients in the output audio stream.
+Default value is 1025.
+
+@item frequency, f
+Set the frequency points from which magnitude and phase are set.
+These must be in non-decreasing order; the first element must be 0, while the last
+element must be 1. Elements are separated by white spaces.
+
+@item magnitude, m
+Set magnitude value for every frequency point set by @option{frequency}.
+The number of values must be the same as the number of frequency points.
+Values are separated by white spaces.
+
+@item phase, p
+Set phase value for every frequency point set by @option{frequency}.
+The number of values must be the same as the number of frequency points.
+Values are separated by white spaces.
+
+@item sample_rate, r
+Set the sample rate. Default is 44100.
+
+@item nb_samples, n
+Set the number of samples per frame. Default is 1024.
+
+@item win_func, w
+Set window function. Default is blackman.
+@end table
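+
+@subsection Examples
+
+@itemize
+@item
+Generate coefficients for an illustrative low-pass response and apply them
+with the @ref{afir} filter (the frequency and magnitude points, and the file
+names, are only examples):
+@example
+ffmpeg -i input.wav -filter_complex "afirsrc=taps=1025:frequency='0 0.2 0.25 1':magnitude='1 1 0 0'[ir];[0:a][ir]afir" output.wav
+@end example
+@end itemize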
+
@section anullsrc
The null audio source, return unprocessed audio frames. It is mainly useful
@item color, colour, c
Specify the color of noise. Available noise colors are white, pink, brown,
-blue and violet. Default color is white.
+blue, violet and velvet. Default color is white.
@item seed, s
Specify a value used to seed the PRNG.
@end table
-@section blend, tblend
+@anchor{blend}
+@section blend
Blend two video frames into each other.
The default value is @code{all}.
@end table
+@section cas
+
+Apply the Contrast Adaptive Sharpen filter to a video stream.
+
+The filter accepts the following options:
+
+@table @option
+@item strength
+Set the sharpening strength. Default value is 0.
+
+@item planes
+Set planes to filter. Default value is to filter all
+planes except alpha plane.
+@end table
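+
+For example, to apply a moderate amount of sharpening (the strength value
+is illustrative):
+@example
+ffmpeg -i input.mp4 -vf cas=strength=0.5 output.mp4
+@end example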
+
@section chromahold
Remove all color information for all colors except for certain one.
This can be used to pass exact YUV values as hexadecimal numbers.
@end table
+@subsection Commands
+This filter supports the same @ref{commands} as options.
+The command accepts the same syntax as the corresponding option.
+
+If the specified expression is not valid, it is kept at its current
+value.
+
@section chromakey
YUV colorspace color/chroma keying.
This can be used to pass exact YUV values as hexadecimal numbers.
@end table
+@subsection Commands
+This filter supports the same @ref{commands} as options.
+The command accepts the same syntax as the corresponding option.
+
+If the specified expression is not valid, it is kept at its current
+value.
+
@subsection Examples
@itemize
Set edge mode, can be @var{smear}, default, or @var{warp}.
@end table
+@subsection Commands
+
+This filter supports all the above options as @ref{commands}.
+
@section ciescope
Display CIE color diagram with pixels overlaid onto it.
@end example
@end itemize
+@subsection Commands
+This filter supports the same @ref{commands} as options.
+The command accepts the same syntax as the corresponding option.
+
+If the specified expression is not valid, it is kept at its current
+value.
+
@section colorhold
Remove all color information for all RGB colors except for certain one.
Higher values result in more preserved color.
@end table
+@subsection Commands
+This filter supports the same @ref{commands} as options.
+The command accepts the same syntax as the corresponding option.
+
+If the specified expression is not valid, it is kept at its current
+value.
+
@section colorlevels
Adjust video input frames using levels.
@end example
@end itemize
+@subsection Commands
+
+This filter supports all the above options as @ref{commands}.
+
@section colormatrix
Convert color matrix.
@item opacity
Set background opacity.
+
+@item format
+Set the display number format. Can be @code{hex} or @code{dec}. Default is @code{hex}.
@end table
@section dctdnoiz
If 0, plane will remain unchanged.
@end table
+@subsection Commands
+
+This filter supports all the above options as @ref{commands}.
+
@section deflicker
Remove temporal frame luminance variations.
6 7 8
@end table
+@subsection Commands
+
+This filter supports all the above options as @ref{commands}.
+
@section displace
Displace pixels as indicated by second and third input stream.
@section dnn_processing
-Do image processing with deep neural networks. Currently only AVFrame with RGB24
-and BGR24 are supported, more formats will be added later.
+Do image processing with deep neural networks. It works together with another filter
+which converts the pixel format of the frame to the one required by the DNN network.
The filter accepts the following options:
@item output
Set the output name of the dnn network.
-@item fmt
-Set the pixel format for the Frame. Allowed values are @code{AV_PIX_FMT_RGB24}, and @code{AV_PIX_FMT_BGR24}.
-Default value is @code{AV_PIX_FMT_RGB24}.
-
@end table
+@itemize
+@item
+Halve the red channel of the frame with format rgb24:
+@example
+ffmpeg -i input.jpg -vf format=rgb24,dnn_processing=model=halve_first_channel.model:input=dnn_in:output=dnn_out:dnn_backend=native out.native.png
+@end example
+
+@item
+Halve the pixel values of the frame with format grayf32:
+@example
+ffmpeg -i input.jpg -vf format=grayf32,dnn_processing=model=halve_gray_float.model:input=dnn_in:output=dnn_out:dnn_backend=native -y out.native.png
+@end example
+
+@end itemize
+
@section drawbox
Draw a colored box on the input image.
@ref{video size syntax,,"Video size" section in the ffmpeg-utils manual,ffmpeg-utils}.
The default value is @code{900x256}.
+@item rate, r
+Set the output frame rate. Default value is @code{25}.
+
The foreground color expressions can use the following variables:
@table @option
@item MIN
drawtext=fontfile=FreeSans.ttf:text=cow:fontsize=24:x=80:y=20+24-max_glyph_a
@end example
+@item
+Plot special @var{lavf.image2dec.source_basename} metadata onto each frame if
+such metadata exists. Otherwise, plot the string "NA". Note that the image2 demuxer
+must have the option @option{-export_path_metadata 1} set for the special metadata
+fields to be available for filters.
+@example
+drawtext="fontsize=20:fontcolor=white:fontfile=FreeSans.ttf:text='%@{metadata\:lavf.image2dec.source_basename\:NA@}':x=10:y=10"
+@end example
+
@end itemize
For more information about libfreetype, check:
6 7 8
@end table
+@subsection Commands
+
+This filter supports all the above options as @ref{commands}.
+
@section extractplanes
Extract color channel components from input video stream into
Set color for pixels in fixed mode. Default is @var{black}.
@end table
+@subsection Commands
+This filter supports the same @ref{commands} as options.
+The command accepts the same syntax as the corresponding option.
+
+If the specified expression is not valid, it is kept at its current
+value.
+
@section find_rect
Find a rectangular object
Set freeze duration until notification (default is 2 seconds).
@end table
+@section freezeframes
+
+Freeze video frames.
+
+This filter freezes video frames using a frame from the 2nd input.
+
+The filter accepts the following options:
+
+@table @option
+@item first
+Set the number of the first frame from which to start the freeze.
+
+@item last
+Set the number of the last frame at which to end the freeze.
+
+@item replace
+Set the number of the frame from the 2nd input which will be used instead of the replaced frames.
+@end table
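+
+For example, to show frame 0 of the second input in place of frames 100
+through 150 of the first input (all frame numbers are illustrative):
+@example
+ffmpeg -i input.mp4 -i still.mp4 -filter_complex "[0:v][1:v]freezeframes=first=100:last=150:replace=0" output.mp4
+@end example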
+
@anchor{frei0r}
@section frei0r
Return the value of the pixel at location (@var{x},@var{y}) of the alpha
plane. Return 0 if there is no such plane.
+@item psum(x,y), lumsum(x,y), cbsum(x,y), crsum(x,y), rsum(x,y), gsum(x,y), bsum(x,y), alphasum(x,y)
+Sum of sample values in the rectangle from (0,0) to (x,y); this allows obtaining
+sums of samples within a rectangle. See the functions without the sum postfix.
+
@item interpolation
Set one of interpolation methods:
@table @option
For functions, if @var{x} and @var{y} are outside the area, the value will be
automatically clipped to the closer edge.
+Please note that this filter can use multiple threads, in which case each slice
+will have its own expression state. If you want to use only a single expression
+state because your expressions depend on previous state, then you should limit
+the number of filter threads to 1.
+
@subsection Examples
@itemize
@code{strong}. It defaults to @code{none}.
@end table
+@anchor{histogram}
@section histogram
Compute and draw a color distribution histogram for the input video.
@var{luma_tmp}*@var{chroma_spatial}/@var{luma_spatial}.
@end table
+@subsection Commands
+This filter supports the same @ref{commands} as options.
+The command accepts the same syntax as the corresponding option.
+
+If the specified expression is not valid, it is kept at its current
+value.
+
@anchor{hwdownload}
@section hwdownload
The device to upload to must be supplied when the filter is initialised. If
using ffmpeg, select the appropriate device with the @option{-filter_hw_device}
-option.
+option or with the @option{derive_device} option. The input and output devices
+must be of different types and compatible - the exact meaning of this is
+system-dependent, but typically it means that they must refer to the same
+underlying hardware context (for example, refer to the same graphics card).
+
+The following additional parameters are accepted:
+
+@table @option
+@item derive_device @var{type}
+Rather than using the device supplied at initialisation, instead derive a new
+device of type @var{type} from the device the input frames exist on.
+@end table
@anchor{hwupload_cuda}
@section hwupload_cuda
Swap luma/chroma/alpha fields. Exchange even & odd lines. Default value is @code{0}.
@end table
+@subsection Commands
+
+This filter supports all the above options as @ref{commands}.
+
@section inflate
Apply inflate effect to the video.
If 0, plane will remain unchanged.
@end table
+@subsection Commands
+
+This filter supports all the above options as @ref{commands}.
+
@section interlace
Simple interlacing filter from progressive contents. This interleaves upper (or
@item
Example with options and different containers:
@example
-ffmpeg -i main.mpg -i ref.mkv -lavfi "[0:v]settb=1/AVTB,setpts=PTS-STARTPTS[main];[1:v]settb=1/AVTB,setpts=PTS-STARTPTS[ref];[main][ref]libvmaf=psnr=1:log_fmt=json" -f null -
+ffmpeg -i main.mpg -i ref.mkv -lavfi "[0:v]settb=AVTB,setpts=PTS-STARTPTS[main];[1:v]settb=AVTB,setpts=PTS-STARTPTS[ref];[main][ref]libvmaf=psnr=1:log_fmt=json" -f null -
@end example
@end itemize
@item tolerance
Set the range of luma values to be keyed out.
-Default value is @code{0}.
+Default value is @code{0.01}.
@item softness
Set the range of softness. Default value is @code{0}.
Use this to control gradual transition from zero to full transparency.
@end table
+@subsection Commands
+This filter supports the same @ref{commands} as options.
+The command accepts the same syntax as the corresponding option.
+
+If the specified expression is not valid, it is kept at its current
+value.
+
@section lut, lutrgb, lutyuv
Compute a look-up table for binding each pixel component input value
Set vertical radius size. Default value is @code{0}.
Allowed range is integer from 0 to 127.
If it is 0, value will be picked from horizontal @code{radius} option.
+
+@item percentile
+Set median percentile. Default value is @code{0.5}.
+The default value of @code{0.5} will always pick the median value, while @code{0} will
+pick the minimum value, and @code{1} the maximum value.
@end table
+@subsection Commands
+This filter supports the same @ref{commands} as options.
+The command accepts the same syntax as the corresponding option.
+
+If the specified expression is not valid, it is kept at its current
+value.
+
@section mergeplanes
Merge color channel components from several video streams.
@end table
+@subsection Commands
+This filter supports the same @ref{commands} as options, excluding the @var{smoothing} option.
+The command accepts the same syntax as the corresponding option.
+
+If the specified expression is not valid, it is kept at its current
+value.
+
@subsection Examples
Stretch video contrast to use the full dynamic range, with no temporal
Draw scope. By default is enabled.
@end table
+@subsection Commands
+This filter supports the same @ref{commands} as options.
+The command accepts the same syntax as the corresponding option.
+
+If the specified expression is not valid, it is kept at its current
+value.
+
@subsection Examples
@itemize
Set value which will be added to filtered result.
@end table
-@anchor{program_opencl}
-@section program_opencl
+@section pseudocolor
-Filter video using an OpenCL program.
+Alter frame colors in video with pseudocolors.
-@table @option
+This filter accepts the following options:
-@item source
-OpenCL program source file.
+@table @option
+@item c0
+set pixel first component expression
-@item kernel
-Kernel name in program.
-
-@item inputs
-Number of inputs to the filter. Defaults to 1.
-
-@item size, s
-Size of output frames. Defaults to the same as the first input.
-
-@end table
-
-The program source file must contain a kernel function with the given name,
-which will be run once for each plane of the output. Each run on a plane
-gets enqueued as a separate 2D global NDRange with one work-item for each
-pixel to be generated. The global ID offset for each work-item is therefore
-the coordinates of a pixel in the destination image.
-
-The kernel function needs to take the following arguments:
-@itemize
-@item
-Destination image, @var{__write_only image2d_t}.
-
-This image will become the output; the kernel should write all of it.
-@item
-Frame index, @var{unsigned int}.
-
-This is a counter starting from zero and increasing by one for each frame.
-@item
-Source images, @var{__read_only image2d_t}.
-
-These are the most recent images on each input. The kernel may read from
-them to generate the output, but they can't be written to.
-@end itemize
-
-Example programs:
-
-@itemize
-@item
-Copy the input to the output (output must be the same size as the input).
-@verbatim
-__kernel void copy(__write_only image2d_t destination,
- unsigned int index,
- __read_only image2d_t source)
-{
- const sampler_t sampler = CLK_NORMALIZED_COORDS_FALSE;
-
- int2 location = (int2)(get_global_id(0), get_global_id(1));
-
- float4 value = read_imagef(source, sampler, location);
-
- write_imagef(destination, location, value);
-}
-@end verbatim
-
-@item
-Apply a simple transformation, rotating the input by an amount increasing
-with the index counter. Pixel values are linearly interpolated by the
-sampler, and the output need not have the same dimensions as the input.
-@verbatim
-__kernel void rotate_image(__write_only image2d_t dst,
- unsigned int index,
- __read_only image2d_t src)
-{
- const sampler_t sampler = (CLK_NORMALIZED_COORDS_FALSE |
- CLK_FILTER_LINEAR);
-
- float angle = (float)index / 100.0f;
-
- float2 dst_dim = convert_float2(get_image_dim(dst));
- float2 src_dim = convert_float2(get_image_dim(src));
-
- float2 dst_cen = dst_dim / 2.0f;
- float2 src_cen = src_dim / 2.0f;
-
- int2 dst_loc = (int2)(get_global_id(0), get_global_id(1));
-
- float2 dst_pos = convert_float2(dst_loc) - dst_cen;
- float2 src_pos = {
- cos(angle) * dst_pos.x - sin(angle) * dst_pos.y,
- sin(angle) * dst_pos.x + cos(angle) * dst_pos.y
- };
- src_pos = src_pos * src_dim / dst_dim;
-
- float2 src_loc = src_pos + src_cen;
-
- if (src_loc.x < 0.0f || src_loc.y < 0.0f ||
- src_loc.x > src_dim.x || src_loc.y > src_dim.y)
- write_imagef(dst, dst_loc, 0.5f);
- else
- write_imagef(dst, dst_loc, read_imagef(src, sampler, src_loc));
-}
-@end verbatim
-
-@item
-Blend two inputs together, with the amount of each input used varying
-with the index counter.
-@verbatim
-__kernel void blend_images(__write_only image2d_t dst,
- unsigned int index,
- __read_only image2d_t src1,
- __read_only image2d_t src2)
-{
- const sampler_t sampler = (CLK_NORMALIZED_COORDS_FALSE |
- CLK_FILTER_LINEAR);
-
- float blend = (cos((float)index / 50.0f) + 1.0f) / 2.0f;
-
- int2 dst_loc = (int2)(get_global_id(0), get_global_id(1));
- int2 src1_loc = dst_loc * get_image_dim(src1) / get_image_dim(dst);
- int2 src2_loc = dst_loc * get_image_dim(src2) / get_image_dim(dst);
-
- float4 val1 = read_imagef(src1, sampler, src1_loc);
- float4 val2 = read_imagef(src2, sampler, src2_loc);
-
- write_imagef(dst, dst_loc, val1 * blend + val2 * (1.0f - blend));
-}
-@end verbatim
-
-@end itemize
-
-@section pseudocolor
-
-Alter frame colors in video with pseudocolors.
-
-This filter accepts the following options:
-
-@table @option
-@item c0
-set pixel first component expression
-
-@item c1
-set pixel second component expression
+@item c1
+set pixel second component expression
@item c2
set pixel third component expression
@item scan_max
Set the line to end scanning for EIA-608 data. Default is @code{29}.
-@item mac
-Set minimal acceptable amplitude change for sync codes detection.
-Default is @code{0.2}. Allowed range is @code{[0.001 - 1]}.
-
@item spw
Set the ratio of width reserved for sync code detection.
-Default is @code{0.27}. Allowed range is @code{[0.01 - 0.7]}.
-
-@item mhd
-Set the max peaks height difference for sync code detection.
-Default is @code{0.1}. Allowed range is @code{[0.0 - 0.5]}.
-
-@item mpd
-Set max peaks period difference for sync code detection.
-Default is @code{0.1}. Allowed range is @code{[0.0 - 0.5]}.
-
-@item msd
-Set the first two max start code bits differences.
-Default is @code{0.02}. Allowed range is @code{[0.0 - 0.5]}.
-
-@item bhd
-Set the minimum ratio of bits height compared to 3rd start code bit.
-Default is @code{0.75}. Allowed range is @code{[0.01 - 1]}.
-
-@item th_w
-Set the white color threshold. Default is @code{0.35}. Allowed range is @code{[0.1 - 1]}.
-
-@item th_b
-Set the black color threshold. Default is @code{0.15}. Allowed range is @code{[0.0 - 0.5]}.
+Default is @code{0.27}. Allowed range is @code{[0.1 - 0.7]}.
@item chp
Enable checking the parity bit. In the event of a parity error, the filter will output
@code{0x00} for that character. Default is false.
@item lp
-Lowpass lines prior to further processing. Default is disabled.
+Lowpass lines prior to further processing. Default is enabled.
@end table
@subsection Examples
@item format
Specify pixel format of output from this filter. Can be @code{color} or @code{gray}.
Default is @code{color}.
+
+@item fill
+Specify the color of the unmapped pixels. For the syntax of this option,
+check the @ref{color syntax,,"Color" section in the ffmpeg-utils
+manual,ffmpeg-utils}. Default color is @code{black}.
@end table
@section removegrain
Set edge mode, can be @var{smear}, default, or @var{warp}.
@end table
+@subsection Commands
+
+This filter supports all the above options as @ref{commands}.
+
@section roberts
Apply roberts cross operator to input video stream.
@item ovsub
horizontal and vertical output chroma subsample values. For example for the
pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
+
+@item n
+The (sequential) number of the input frame, starting from 0.
+Only available with @code{eval=frame}.
+
+@item t
+The presentation timestamp of the input frame, expressed as a number of
+seconds. Only available with @code{eval=frame}.
+
+@item pos
+The position (byte offset) of the frame in the input stream, or NaN if
+this information is unavailable and/or meaningless (for example in case of synthetic video).
+Only available with @code{eval=frame}.
@end table
@subsection Examples
@item lanczos
@end table
+@item force_original_aspect_ratio
+Enable decreasing or increasing output video width or height if necessary to
+keep the original aspect ratio. Possible values:
+
+@table @samp
+@item disable
+Scale the video as specified and disable this feature.
+
+@item decrease
+The output video dimensions will automatically be decreased if needed.
+
+@item increase
+The output video dimensions will automatically be increased if needed.
+
+@end table
+
+One useful instance of this option is when you know a specific device's
+maximum allowed resolution: you can use it to limit the output video to
+that, while retaining the aspect ratio. For example, device A allows
+1280x720 playback, and your video is 1920x800. Using this option (set it to
+decrease) and specifying 1280x720 on the command line makes the output
+1280x533.
+
+Please note that this is different from specifying -1 for @option{w}
+or @option{h}; you still need to specify the output resolution for this option
+to work.
+
+@item force_divisible_by
+Ensures that both the output dimensions, width and height, are divisible by the
+given integer when used together with @option{force_original_aspect_ratio}. This
+works similarly to using @code{-n} in the @option{w} and @option{h} options.
+
+This option respects the value set for @option{force_original_aspect_ratio},
+increasing or decreasing the resolution accordingly. The video's aspect ratio
+may be slightly modified.
+
+This option can be handy if you need to have a video fit within or exceed
+a defined resolution using @option{force_original_aspect_ratio} but also have
+encoder restrictions on width or height divisibility.
+
@end table
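+
+For example, to fit a video into a 1280x720 bounding box while preserving
+its aspect ratio and keeping both dimensions even:
+@example
+ffmpeg -i input.mp4 -vf "scale=1280:720:force_original_aspect_ratio=decrease:force_divisible_by=2" output.mp4
+@end example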
@section scale2ref
The main input video's horizontal and vertical chroma subsample values.
For example for the pixel format "yuv422p" @var{hsub} is 2 and @var{vsub}
is 1.
+
+@item main_n
+The (sequential) number of the main input frame, starting from 0.
+Only available with @code{eval=frame}.
+
+@item main_t
+The presentation timestamp of the main input frame, expressed as a number of
+seconds. Only available with @code{eval=frame}.
+
+@item main_pos
+The position (byte offset) of the frame in the main input stream, or NaN if
+this information is unavailable and/or meaningless (for example in case of synthetic video).
+Only available with @code{eval=frame}.
@end table
@subsection Examples
@end example
@end itemize
+@subsection Commands
+
+This filter supports the following commands:
+@table @option
+@item width, w
+@item height, h
+Set the output video dimension expression.
+The command accepts the same syntax of the corresponding option.
+
+If the specified expression is not valid, it is kept at its current
+value.
+@end table
+
@section scroll
Scroll input video horizontally and/or vertically by constant speed.
@item plane_checksum
The Adler-32 checksum (printed in hexadecimal) of each plane of the input frame,
expressed in the form "[@var{c0} @var{c1} @var{c2} @var{c3}]".
+
+@item mean
+The mean value of pixels in each plane of the input frame, expressed in the form
+"[@var{mean0} @var{mean1} @var{mean2} @var{mean3}]".
+
+@item stdev
+The standard deviation of pixel values in each plane of the input frame, expressed
+in the form "[@var{stdev0} @var{stdev1} @var{stdev2} @var{stdev3}]".
+
@end table
@section showpalette
@code{0} (not enabled).
@end table
+@subsection Commands
+
+This filter supports the following commands:
+@table @option
+@item quality, level
+Set quality level. The value @code{max} can be used to set the maximum level,
+currently @code{6}.
+@end table
+
@section sr
Scale the input by applying one of the super-resolution methods based on
@section swapuv
Swap U & V plane.
+@section tblend
+Blend successive video frames.
+
+See @ref{blend}
+
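+For example, to show the difference between the current and the previous
+frame (a sketch):
+@example
+tblend=all_mode=difference
+@end example
+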
@section telecine
Apply telecine process to the video.
16p: 33333334
@end example
-@section threshold
+@section thistogram
-Apply threshold effect to video stream.
+Compute and draw a color distribution histogram for the input video across time.
-This filter needs four video streams to perform thresholding.
-First stream is stream we are filtering.
-Second stream is holding threshold values, third stream is holding min values,
-and last, fourth stream is holding max values.
+Unlike the @ref{histogram} video filter, which only shows the histogram of a
+single input frame at a certain time, this filter also shows the past
+histograms of a number of frames defined by the @code{width} option.
-The filter accepts the following option:
+The computed histogram is a representation of the color component
+distribution in an image.
+
+The filter accepts the following options:
@table @option
-@item planes
-Set which planes will be processed, unprocessed planes will be copied.
-By default value 0xf, all planes will be processed.
-@end table
+@item width, w
+Set width of single color component output. Default value is @code{0}.
+Value of @code{0} means width will be picked from input video.
+This also sets the number of past histograms to keep.
+Allowed range is [0, 8192].
-For example if first stream pixel's component value is less then threshold value
-of pixel component from 2nd threshold stream, third stream value will picked,
-otherwise fourth stream pixel component value will be picked.
+@item display_mode, d
+Set display mode.
+It accepts the following values:
+@table @samp
+@item stack
+Per color component graphs are placed below each other.
-Using color source filter one can perform various types of thresholding:
+@item parade
+Per color component graphs are placed side by side.
-@subsection Examples
+@item overlay
+Presents information identical to that in the @code{parade}, except
+that the graphs representing color components are superimposed directly
+over one another.
+@end table
+Default is @code{stack}.
-@itemize
-@item
-Binary threshold, using gray color as threshold:
-@example
+@item levels_mode, m
+Set mode. Can be either @code{linear}, or @code{logarithmic}.
+Default is @code{linear}.
+
+@item components, c
+Set what color components to display.
+Default is @code{7}.
+
+@item bgopacity, b
+Set background opacity. Default is @code{0.9}.
+
+@item envelope, e
+Show envelope. Default is disabled.
+
+@item ecolor, ec
+Set envelope color. Default is @code{gold}.
+@end table
+
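+For example, to display the temporal histogram while playing a video
+(filename illustrative):
+@example
+ffplay -i input.mkv -vf thistogram=display_mode=overlay
+@end example
+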
+@section threshold
+
+Apply threshold effect to video stream.
+
+This filter needs four video streams to perform thresholding.
+The first stream is the stream we are filtering.
+The second stream holds the threshold values, the third stream holds the min
+values, and the last, fourth stream holds the max values.
+
+The filter accepts the following option:
+
+@table @option
+@item planes
+Set which planes will be processed, unprocessed planes will be copied.
+By default value 0xf, all planes will be processed.
+@end table
+
+For example, if a pixel component value of the first stream is less than the
+threshold value of the corresponding pixel component of the second (threshold)
+stream, the value of the third stream will be picked, otherwise the value of
+the fourth stream will be picked.
+
+Using the color source filter, one can perform various types of thresholding:
+
+@subsection Examples
+
+@itemize
+@item
+Binary threshold, using gray color as threshold:
+@example
ffmpeg -i 320x240.avi -f lavfi -i color=gray -f lavfi -i color=black -f lavfi -i color=white -lavfi threshold output.avi
@end example
This will slightly less reduce interlace 'twitter' and Moire
patterning but better retain detail and subjective sharpness impression.
+@item bypass_il
+Bypass already interlaced frames, only adjust the frame rate.
@end table
-Vertical low-pass filtering can only be enabled for @option{mode}
-@var{interleave_top} and @var{interleave_bottom}.
+Vertical low-pass filtering and bypassing already interlaced frames can only be
+enabled for @option{mode} @var{interleave_top} and @var{interleave_bottom}.
@end table
@item flat
@item gnomonic
@item rectilinear
-Regular video. @i{(output only)}
+Regular video.
Format specific options:
@table @option
@item h_fov
@item v_fov
@item d_fov
-Set horizontal/vertical/diagonal field of view. Values in degrees.
+Set output horizontal/vertical/diagonal field of view. Values in degrees.
+
+If diagonal field of view is set it overrides horizontal and vertical field of view.
+
+@item ih_fov
+@item iv_fov
+@item id_fov
+Set input horizontal/vertical/diagonal field of view. Values in degrees.
If diagonal field of view is set it overrides horizontal and vertical field of view.
@end table
@item barrel
@item fb
-Facebook's 360 format.
+@item barrelsplit
+Facebook's 360 formats.
@item sg
Stereographic format.
@item h_fov
@item v_fov
@item d_fov
-Set horizontal/vertical/diagonal field of view. Values in degrees.
+Set output horizontal/vertical/diagonal field of view. Values in degrees.
+
+If diagonal field of view is set it overrides horizontal and vertical field of view.
+
+@item ih_fov
+@item iv_fov
+@item id_fov
+Set input horizontal/vertical/diagonal field of view. Values in degrees.
If diagonal field of view is set it overrides horizontal and vertical field of view.
@end table
@item sinusoidal
Sinusoidal map projection format.
+@item fisheye
+Fisheye projection.
+
+Format specific options:
+@table @option
+@item h_fov
+@item v_fov
+@item d_fov
+Set output horizontal/vertical/diagonal field of view. Values in degrees.
+
+If diagonal field of view is set it overrides horizontal and vertical field of view.
+
+@item ih_fov
+@item iv_fov
+@item id_fov
+Set input horizontal/vertical/diagonal field of view. Values in degrees.
+
+If diagonal field of view is set it overrides horizontal and vertical field of view.
+@end table
+
+@item pannini
+Pannini projection. @i{(output only)}
+
+Format specific options:
+@table @option
+@item h_fov
+Set pannini parameter.
+@end table
+
+@item cylindrical
+Cylindrical projection.
+
+Format specific options:
+@table @option
+@item h_fov
+@item v_fov
+@item d_fov
+Set output horizontal/vertical/diagonal field of view. Values in degrees.
+
+If diagonal field of view is set it overrides horizontal and vertical field of view.
+
+@item ih_fov
+@item iv_fov
+@item id_fov
+Set input horizontal/vertical/diagonal field of view. Values in degrees.
+
+If diagonal field of view is set it overrides horizontal and vertical field of view.
+@end table
+
+@item perspective
+Perspective projection. @i{(output only)}
+
+Format specific options:
+@table @option
+@item v_fov
+Set perspective parameter.
+@end table
+
+@item tetrahedron
+Tetrahedron projection.
@end table
@item interp
@item lanc
@item lanczos
Lanczos interpolation.
+@item sp16
+@item spline16
+Spline16 interpolation.
+@item gauss
+@item gaussian
+Gaussian interpolation.
@end table
Default value is @b{@samp{line}}.
@item out_trans
Set if output video needs to be transposed. Boolean value, by default disabled.
+@item alpha_mask
+Build a mask in the alpha plane for all unmapped pixels by marking them fully transparent. Boolean value, by default disabled.
@end table
@subsection Examples
@end example
@end itemize
+@subsection Commands
+
+This filter supports a subset of the above options as @ref{commands}.
+
@section vaguedenoiser
Apply a wavelet based denoiser.
It accepts the following values:
@table @samp
@item gray
+@item tint
Gray values are displayed on graph, higher brightness means more pixels have
same component color value on location in graph. This is the default mode.
@item none
@item green
@item color
+@item invert
@end table
@item opacity, o
@item 709
@end table
Default is auto.
+
+@item tint0, t0
+@item tint1, t1
+Set color tint for gray/tint vectorscope mode. By default both options are zero.
+This means no tint, and output will remain gray.
@end table
@anchor{vidstabdetect}
otherwise colors will be less saturated, more towards gray.
@end table
+@subsection Commands
+
+This filter supports all the above options as @ref{commands}.
+
@anchor{vignette}
@section vignette
@section vmafmotion
-Obtain the average vmaf motion score of a video.
-It is one of the component filters of VMAF.
+Obtain the average VMAF motion score of a video.
+It is one of the component metrics of VMAF.
The obtained average motion score is printed through the logging system.
-In the below example the input file @file{ref.mpg} is being processed and score
-is computed.
+The filter accepts the following options:
+
+@table @option
+@item stats_file
+If specified, the filter will use the named file to save the motion score of
+each frame with respect to the previous frame.
+When filename equals "-" the data is sent to standard output.
+@end table
+Example:
@example
-ffmpeg -i ref.mpg -lavfi vmafmotion -f null -
+ffmpeg -i ref.mpg -vf vmafmotion -f null -
@end example
@section vstack
@item bgopacity, b
Set background opacity.
+
+@item tint0, t0
+@item tint1, t1
+Set tint for output.
+Only used with lowpass filter and when display is not overlay and input
+pixel formats are not RGB.
@end table
@section weave, doubleweave
Default is @code{3}.
@end table
+@section xfade
+
+Apply a cross fade from one input video stream to another input video stream.
+The cross fade is applied for the specified duration.
+
+The filter accepts the following options:
+
+@table @option
+@item transition
+Set one of the available transition effects:
+
+@table @samp
+@item custom
+@item fade
+@item wipeleft
+@item wiperight
+@item wipeup
+@item wipedown
+@item slideleft
+@item slideright
+@item slideup
+@item slidedown
+@item circlecrop
+@item rectcrop
+@item distance
+@item fadeblack
+@item fadewhite
+@item radial
+@item smoothleft
+@item smoothright
+@item smoothup
+@item smoothdown
+@item circleopen
+@item circleclose
+@item vertopen
+@item vertclose
+@item horzopen
+@item horzclose
+@item dissolve
+@item pixelize
+@item diagtl
+@item diagtr
+@item diagbl
+@item diagbr
+@end table
+Default transition effect is fade.
+
+@item duration
+Set cross fade duration in seconds.
+Default duration is 1 second.
+
+@item offset
+Set cross fade start relative to first input stream in seconds.
+Default offset is 0.
+
+@item expr
+Set expression for custom transition effect.
+
+The expressions can use the following variables and functions:
+
+@table @option
+@item X
+@item Y
+The coordinates of the current sample.
+
+@item W
+@item H
+The width and height of the image.
+
+@item P
+Progress of transition effect.
+
+@item PLANE
+Currently processed plane.
+
+@item A
+Return value of first input at current location and plane.
+
+@item B
+Return value of second input at current location and plane.
+
+@item a0(x, y)
+@item a1(x, y)
+@item a2(x, y)
+@item a3(x, y)
+Return the value of the pixel at location (@var{x},@var{y}) of the
+first/second/third/fourth component of first input.
+
+@item b0(x, y)
+@item b1(x, y)
+@item b2(x, y)
+@item b3(x, y)
+Return the value of the pixel at location (@var{x},@var{y}) of the
+first/second/third/fourth component of second input.
+@end table
+@end table
+
+@subsection Examples
+
+@itemize
+@item
+Cross fade from one input video to another input video, with fade transition and duration of transition
+of 2 seconds starting at offset of 5 seconds:
+@example
+ffmpeg -i first.mp4 -i second.mp4 -filter_complex xfade=transition=fade:duration=2:offset=5 output.mp4
+@end example
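+
+@item
+Cross fade using a custom transition expression that linearly mixes the two
+inputs (a sketch, assuming @var{P} runs from 1 at the start of the transition
+to 0 at its end):
+@example
+ffmpeg -i first.mp4 -i second.mp4 -filter_complex "xfade=transition=custom:duration=2:offset=5:expr='A*P+B*(1-P)'" output.mp4
+@end example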
+@end itemize
+
@section xmedian
Pick median pixels from several input videos.
@item planes
Set which planes to filter. Default value is @code{15}, by which all planes are processed.
+
+@item percentile
+Set median percentile. Default value is @code{0.5}.
+The default value of @code{0.5} will always pick median values, while @code{0}
+will pick minimum values, and @code{1} maximum values.
@end table
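+
+For example, to pick per-pixel minimum values from three inputs instead of
+the median (input names illustrative):
+@example
+ffmpeg -i a.mp4 -i b.mp4 -i c.mp4 -filter_complex xmedian=inputs=3:percentile=0 output.mp4
+@end example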
@section xstack
@item shortest
If set to 1, force the output to terminate when the shortest input
terminates. Default value is 0.
+
+@item fill
+If set to a valid color, all unused pixels will be filled with that color.
+By default fill is set to none, so it is disabled.
@end table
@subsection Examples
The default value is @code{all}.
@end table
+@section yaepblur
+
+Apply blur filter while preserving edges ("yaepblur" means "yet another edge preserving blur filter").
+The algorithm is described in
+"J. S. Lee, Digital image enhancement and noise filtering by use of local statistics, IEEE Trans. Pattern Anal. Mach. Intell. PAMI-2, 1980."
+
+It accepts the following parameters:
+
+@table @option
+@item radius, r
+Set the window radius. Default value is 3.
+
+@item planes, p
+Set which planes to filter. Default is only the first plane.
+
+@item sigma, s
+Set blur strength. Default value is 128.
+@end table
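+
+For example, to smooth only the first plane with a stronger blur (values
+illustrative):
+@example
+ffmpeg -i input.mp4 -vf yaepblur=radius=5:planes=1:sigma=200 output.mp4
+@end example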
+
+@subsection Commands
+This filter supports the same @ref{commands} as options.
+
@section zoompan
Apply Zoom & Pan effect.
pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
@end table
+@subsection Commands
+
+This filter supports the following commands:
@table @option
+@item width, w
+@item height, h
+Set the output video dimension expression.
+The command accepts the same syntax of the corresponding option.
+
+If the specified expression is not valid, it is kept at its current
+value.
@end table
@c man end VIDEO FILTERS
@end example
@end itemize
+@section colorkey_opencl
+RGB colorspace color keying.
+
+The filter accepts the following options:
+
+@table @option
+@item color
+The color which will be replaced with transparency.
+
+@item similarity
+Similarity percentage with the key color.
+
+0.01 matches only the exact key color, while 1.0 matches everything.
+
+@item blend
+Blend percentage.
+
+0.0 makes pixels either fully transparent, or not transparent at all.
+
+Higher values result in semi-transparent pixels, with a higher transparency
+the more similar the pixels color is to the key color.
+@end table
+
+@subsection Examples
+
+@itemize
+@item
+Make every semi-green pixel in the input transparent with some slight blending:
+@example
+-i INPUT -vf "hwupload, colorkey_opencl=green:0.3:0.1, hwdownload" OUTPUT
+@end example
+@end itemize
+
@section convolution_opencl
Apply convolution of 3x3, 5x5, 7x7 matrix.
@end example
@end itemize
-@section dilation_opencl
+@section erosion_opencl
-Apply dilation effect to the video.
+Apply erosion effect to the video.
-This filter replaces the pixel by the local(3x3) maximum.
+This filter replaces the pixel by the local(3x3) minimum.
It accepts the following options:
@itemize
@item
-Apply dilation filter with threshold0 set to 30, threshold1 set 40, threshold2 set to 50 and coordinates set to 231, setting each pixel of the output to the local maximum between pixels: 1, 2, 3, 6, 7, 8 of the 3x3 region centered on it in the input. If the difference between input pixel and local maximum is more then threshold of the corresponding plane, output pixel will be set to input pixel + threshold of corresponding plane.
+Apply the erosion filter with threshold0 set to 30, threshold1 set to 40, threshold2 set to 50 and coordinates set to 231, setting each pixel of the output to the local minimum between pixels: 1, 2, 3, 6, 7, 8 of the 3x3 region centered on it in the input. If the difference between the input pixel and the local minimum is more than the threshold of the corresponding plane, the output pixel will be set to the input pixel - threshold of the corresponding plane.
@example
--i INPUT -vf "hwupload, dilation_opencl=30:40:50:coordinates=231, hwdownload" OUTPUT
+-i INPUT -vf "hwupload, erosion_opencl=30:40:50:coordinates=231, hwdownload" OUTPUT
@end example
@end itemize
-@section erosion_opencl
+@section deshake_opencl
+Feature-point based video stabilization filter.
-Apply erosion effect to the video.
+The filter accepts the following options:
-This filter replaces the pixel by the local(3x3) minimum.
+@table @option
+@item tripod
+Simulates a tripod by preventing any camera movement whatsoever from the original frame. Defaults to @code{0}.
-It accepts the following options:
+@item debug
+Whether or not additional debug info should be displayed, both in the processed output and in the console.
-@table @option
-@item threshold0
-@item threshold1
-@item threshold2
-@item threshold3
-Limit the maximum change for each plane. Range is @code{[0, 65535]} and default value is @code{65535}.
-If @code{0}, plane will remain unchanged.
+Note that in order to see console debug output you will also need to pass @code{-v verbose} to ffmpeg.
-@item coordinates
-Flag which specifies the pixel to refer to.
-Range is @code{[0, 255]} and default value is @code{255}, i.e. all eight pixels are used.
+Viewing point matches in the output video is only supported for RGB input.
-Flags to local 3x3 coordinates region centered on @code{x}:
+Defaults to @code{0}.
- 1 2 3
+@item adaptive_crop
+Whether or not to do a tiny bit of cropping at the borders to cut down on the amount of mirrored pixels.
- 4 x 5
+Defaults to @code{1}.
- 6 7 8
-@end table
+@item refine_features
+Whether or not feature points should be refined at a sub-pixel level.
-@subsection Example
-
-@itemize
-@item
-Apply erosion filter with threshold0 set to 30, threshold1 set 40, threshold2 set to 50 and coordinates set to 231, setting each pixel of the output to the local minimum between pixels: 1, 2, 3, 6, 7, 8 of the 3x3 region centered on it in the input. If the difference between input pixel and local minimum is more then threshold of the corresponding plane, output pixel will be set to input pixel - threshold of corresponding plane.
-@example
--i INPUT -vf "hwupload, erosion_opencl=30:40:50:coordinates=231, hwdownload" OUTPUT
-@end example
-@end itemize
-
-@section colorkey_opencl
-RGB colorspace color keying.
-
-The filter accepts the following options:
-
-@table @option
-@item color
-The color which will be replaced with transparency.
-
-@item similarity
-Similarity percentage with the key color.
-
-0.01 matches only the exact key color, while 1.0 matches everything.
-
-@item blend
-Blend percentage.
-
-0.0 makes pixels either fully transparent, or not transparent at all.
-
-Higher values result in semi-transparent pixels, with a higher transparency
-the more similar the pixels color is to the key color.
-@end table
-
-@subsection Examples
-
-@itemize
-@item
-Make every semi-green pixel in the input transparent with some slight blending:
-@example
--i INPUT -vf "hwupload, colorkey_opencl=green:0.3:0.1, hwdownload" OUTPUT
-@end example
-@end itemize
-
-@section deshake_opencl
-Feature-point based video stabilization filter.
-
-The filter accepts the following options:
-
-@table @option
-@item tripod
-Simulates a tripod by preventing any camera movement whatsoever from the original frame. Defaults to @code{0}.
-
-@item debug
-Whether or not additional debug info should be displayed, both in the processed output and in the console.
-
-Note that in order to see console debug output you will also need to pass @code{-v verbose} to ffmpeg.
-
-Viewing point matches in the output video is only supported for RGB input.
-
-Defaults to @code{0}.
-
-@item adaptive_crop
-Whether or not to do a tiny bit of cropping at the borders to cut down on the amount of mirrored pixels.
-
-Defaults to @code{1}.
-
-@item refine_features
-Whether or not feature points should be refined at a sub-pixel level.
-
-This can be turned off for a slight performance gain at the cost of precision.
+This can be turned off for a slight performance gain at the cost of precision.
Defaults to @code{1}.
@end example
@end itemize
+@section dilation_opencl
+
+Apply dilation effect to the video.
+
+This filter replaces the pixel by the local(3x3) maximum.
+
+It accepts the following options:
+
+@table @option
+@item threshold0
+@item threshold1
+@item threshold2
+@item threshold3
+Limit the maximum change for each plane. Range is @code{[0, 65535]} and default value is @code{65535}.
+If @code{0}, plane will remain unchanged.
+
+@item coordinates
+Flag which specifies the pixel to refer to.
+Range is @code{[0, 255]} and default value is @code{255}, i.e. all eight pixels are used.
+
+Flags to local 3x3 coordinates region centered on @code{x}:
+
+ 1 2 3
+
+ 4 x 5
+
+ 6 7 8
+@end table
+
+@subsection Example
+
+@itemize
+@item
+Apply the dilation filter with threshold0 set to 30, threshold1 set to 40, threshold2 set to 50 and coordinates set to 231, setting each pixel of the output to the local maximum between pixels: 1, 2, 3, 6, 7, 8 of the 3x3 region centered on it in the input. If the difference between the input pixel and the local maximum is more than the threshold of the corresponding plane, the output pixel will be set to the input pixel + threshold of the corresponding plane.
+@example
+-i INPUT -vf "hwupload, dilation_opencl=30:40:50:coordinates=231, hwdownload" OUTPUT
+@end example
+@end itemize
+
@section nlmeans_opencl
Non-local Means denoise filter through OpenCL, this filter accepts same options as @ref{nlmeans}.
@end itemize
+@section pad_opencl
+
+Add paddings to the input image, and place the original input at the
+provided @var{x}, @var{y} coordinates.
+
+It accepts the following options:
+
+@table @option
+@item width, w
+@item height, h
+Specify an expression for the size of the output image with the
+paddings added. If the value for @var{width} or @var{height} is 0, the
+corresponding input size is used for the output.
+
+The @var{width} expression can reference the value set by the
+@var{height} expression, and vice versa.
+
+The default value of @var{width} and @var{height} is 0.
+
+@item x
+@item y
+Specify the offsets to place the input image at within the padded area,
+with respect to the top/left border of the output image.
+
+The @var{x} expression can reference the value set by the @var{y}
+expression, and vice versa.
+
+The default value of @var{x} and @var{y} is 0.
+
+If @var{x} or @var{y} evaluate to a negative number, they'll be changed
+so the input image is centered on the padded area.
+
+@item color
+Specify the color of the padded area. For the syntax of this option,
+check the @ref{color syntax,,"Color" section in the ffmpeg-utils
+manual,ffmpeg-utils}.
+
+@item aspect
+Pad to an aspect ratio instead of a resolution.
+@end table
+
+The value for the @var{width}, @var{height}, @var{x}, and @var{y}
+options are expressions containing the following constants:
+
+@table @option
+@item in_w
+@item in_h
+The input video width and height.
+
+@item iw
+@item ih
+These are the same as @var{in_w} and @var{in_h}.
+
+@item out_w
+@item out_h
+The output width and height (the size of the padded area), as
+specified by the @var{width} and @var{height} expressions.
+
+@item ow
+@item oh
+These are the same as @var{out_w} and @var{out_h}.
+
+@item x
+@item y
+The x and y offsets as specified by the @var{x} and @var{y}
+expressions, or NAN if not yet specified.
+
+@item a
+same as @var{iw} / @var{ih}
+
+@item sar
+input sample aspect ratio
+
+@item dar
+input display aspect ratio, it is the same as (@var{iw} / @var{ih}) * @var{sar}
+@end table
+
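+For example, to pad the input to 1280x720 and center it (an illustrative
+invocation; an OpenCL device must be set up as for the other
+@code{*_opencl} filters):
+@example
+-i INPUT -vf "hwupload, pad_opencl=w=1280:h=720:x=-1:y=-1:color=black, hwdownload" OUTPUT
+@end example
+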
@section prewitt_opencl
Apply the Prewitt operator (@url{https://en.wikipedia.org/wiki/Prewitt_operator}) to input video stream.
@end example
@end itemize
+@anchor{program_opencl}
+@section program_opencl
+
+Filter video using an OpenCL program.
+
+@table @option
+
+@item source
+OpenCL program source file.
+
+@item kernel
+Kernel name in program.
+
+@item inputs
+Number of inputs to the filter. Defaults to 1.
+
+@item size, s
+Size of output frames. Defaults to the same as the first input.
+
+@end table
+
+The program source file must contain a kernel function with the given name,
+which will be run once for each plane of the output. Each run on a plane
+gets enqueued as a separate 2D global NDRange with one work-item for each
+pixel to be generated. The global ID offset for each work-item is therefore
+the coordinates of a pixel in the destination image.
+
+The kernel function needs to take the following arguments:
+@itemize
+@item
+Destination image, @var{__write_only image2d_t}.
+
+This image will become the output; the kernel should write all of it.
+@item
+Frame index, @var{unsigned int}.
+
+This is a counter starting from zero and increasing by one for each frame.
+@item
+Source images, @var{__read_only image2d_t}.
+
+These are the most recent images on each input. The kernel may read from
+them to generate the output, but they can't be written to.
+@end itemize
+
+Example programs:
+
+@itemize
+@item
+Copy the input to the output (output must be the same size as the input).
+@verbatim
+__kernel void copy(__write_only image2d_t destination,
+ unsigned int index,
+ __read_only image2d_t source)
+{
+ const sampler_t sampler = CLK_NORMALIZED_COORDS_FALSE;
+
+ int2 location = (int2)(get_global_id(0), get_global_id(1));
+
+ float4 value = read_imagef(source, sampler, location);
+
+ write_imagef(destination, location, value);
+}
+@end verbatim
+
+@item
+Apply a simple transformation, rotating the input by an amount increasing
+with the index counter. Pixel values are linearly interpolated by the
+sampler, and the output need not have the same dimensions as the input.
+@verbatim
+__kernel void rotate_image(__write_only image2d_t dst,
+ unsigned int index,
+ __read_only image2d_t src)
+{
+ const sampler_t sampler = (CLK_NORMALIZED_COORDS_FALSE |
+ CLK_FILTER_LINEAR);
+
+ float angle = (float)index / 100.0f;
+
+ float2 dst_dim = convert_float2(get_image_dim(dst));
+ float2 src_dim = convert_float2(get_image_dim(src));
+
+ float2 dst_cen = dst_dim / 2.0f;
+ float2 src_cen = src_dim / 2.0f;
+
+ int2 dst_loc = (int2)(get_global_id(0), get_global_id(1));
+
+ float2 dst_pos = convert_float2(dst_loc) - dst_cen;
+ float2 src_pos = {
+ cos(angle) * dst_pos.x - sin(angle) * dst_pos.y,
+ sin(angle) * dst_pos.x + cos(angle) * dst_pos.y
+ };
+ src_pos = src_pos * src_dim / dst_dim;
+
+ float2 src_loc = src_pos + src_cen;
+
+ if (src_loc.x < 0.0f || src_loc.y < 0.0f ||
+ src_loc.x > src_dim.x || src_loc.y > src_dim.y)
+ write_imagef(dst, dst_loc, 0.5f);
+ else
+ write_imagef(dst, dst_loc, read_imagef(src, sampler, src_loc));
+}
+@end verbatim
+
+@item
+Blend two inputs together, with the amount of each input used varying
+with the index counter.
+@verbatim
+__kernel void blend_images(__write_only image2d_t dst,
+ unsigned int index,
+ __read_only image2d_t src1,
+ __read_only image2d_t src2)
+{
+ const sampler_t sampler = (CLK_NORMALIZED_COORDS_FALSE |
+ CLK_FILTER_LINEAR);
+
+ float blend = (cos((float)index / 50.0f) + 1.0f) / 2.0f;
+
+ int2 dst_loc = (int2)(get_global_id(0), get_global_id(1));
+ int2 src1_loc = dst_loc * get_image_dim(src1) / get_image_dim(dst);
+ int2 src2_loc = dst_loc * get_image_dim(src2) / get_image_dim(dst);
+
+ float4 val1 = read_imagef(src1, sampler, src1_loc);
+ float4 val2 = read_imagef(src2, sampler, src2_loc);
+
+ write_imagef(dst, dst_loc, val1 * blend + val2 * (1.0f - blend));
+}
+@end verbatim
+
+@end itemize
+
@section roberts_opencl
Apply the Roberts cross operator (@url{https://en.wikipedia.org/wiki/Roberts_cross}) to input video stream.
@end example
@end itemize
+@section xfade_opencl
+
+Cross fade two videos with custom transition effect by using OpenCL.
+
+It accepts the following options:
+
+@table @option
+@item transition
+Set one of the possible transition effects.
+
+@table @option
+@item custom
+Select the custom transition effect; the actual transition description
+will be picked from the @option{source} and @option{kernel} options.
+
+@item fade
+@item wipeleft
+@item wiperight
+@item wipeup
+@item wipedown
+@item slideleft
+@item slideright
+@item slideup
+@item slidedown
+
+Default transition is fade.
+@end table
+
+@item source
+OpenCL program source file for custom transition.
+
+@item kernel
+Set name of kernel to use for custom transition from program source file.
+
+@item duration
+Set duration of video transition.
+
+@item offset
+Set time of start of transition relative to first video.
+@end table
+
+The program source file must contain a kernel function with the given name,
+which will be run once for each plane of the output. Each run on a plane
+gets enqueued as a separate 2D global NDRange with one work-item for each
+pixel to be generated. The global ID offset for each work-item is therefore
+the coordinates of a pixel in the destination image.
+
+The kernel function needs to take the following arguments:
+@itemize
+@item
+Destination image, @var{__write_only image2d_t}.
+
+This image will become the output; the kernel should write all of it.
+
+@item
+First source image, @var{__read_only image2d_t}.
+Second source image, @var{__read_only image2d_t}.
+
+These are the most recent images on each input. The kernel may read from
+them to generate the output, but they can't be written to.
+
+@item
+Transition progress, @var{float}. This value is always between 0 and 1 inclusive.
+@end itemize
+
+Example programs:
+
+@itemize
+@item
+Apply dots curtain transition effect:
+@verbatim
+__kernel void blend_images(__write_only image2d_t dst,
+ __read_only image2d_t src1,
+ __read_only image2d_t src2,
+ float progress)
+{
+ const sampler_t sampler = (CLK_NORMALIZED_COORDS_FALSE |
+ CLK_FILTER_LINEAR);
+ int2 p = (int2)(get_global_id(0), get_global_id(1));
+ float2 rp = (float2)(get_global_id(0), get_global_id(1));
+ float2 dim = (float2)(get_image_dim(src1).x, get_image_dim(src1).y);
+ rp = rp / dim;
+
+ float2 dots = (float2)(20.0, 20.0);
+ float2 center = (float2)(0,0);
+ float2 unused;
+
+ float4 val1 = read_imagef(src1, sampler, p);
+ float4 val2 = read_imagef(src2, sampler, p);
+ bool next = distance(fract(rp * dots, &unused), (float2)(0.5, 0.5)) < (progress / distance(rp, center));
+
+ write_imagef(dst, p, next ? val1 : val2);
+}
+@end verbatim
+
+@end itemize
+
@c man end OPENCL VIDEO FILTERS
+@chapter VAAPI Video Filters
+@c man begin VAAPI VIDEO FILTERS
+
+VAAPI video filters are usually used with a VAAPI decoder and a VAAPI encoder. Below is a description of VAAPI video filters.
+
+To enable compilation of these filters you need to configure FFmpeg with
+@code{--enable-vaapi}.
+
+To use vaapi filters, you need to set up the vaapi device correctly. For more information, please read @url{https://trac.ffmpeg.org/wiki/Hardware/VAAPI}
+
+@section tonemap_vaapi
+
+Perform HDR (High Dynamic Range) to SDR (Standard Dynamic Range) conversion
+with tone-mapping. It maps the dynamic range of HDR10 content to SDR content.
+It currently only accepts HDR10 as input.
+
+It accepts the following parameters:
+
+@table @option
+@item format
+Specify the output pixel format.
+
+Currently supported formats are:
+@table @var
+@item p010
+@item nv12
+@end table
+
+Default is nv12.
+
+@item primaries, p
+Set the output color primaries.
+
+Default is same as input.
+
+@item transfer, t
+Set the output transfer characteristics.
+
+Default is bt709.
+
+@item matrix, m
+Set the output colorspace matrix.
+
+Default is same as input.
+
+@end table
+
+@subsection Example
+
+@itemize
+@item
+Convert HDR (HDR10) video to the bt2020-10 transfer characteristic with p010 pixel format:
+@example
+tonemap_vaapi=format=p010:t=bt2020-10
+@end example
+@end itemize
+
+@c man end VAAPI VIDEO FILTERS
+
@chapter Video Sources
@c man begin VIDEO SOURCES
The sample (pixel) aspect ratio of the input video.
@item sws_param
-Specify the optional parameters to be used for the scale filter which
-is automatically inserted when an input change is detected in the
-input size or format.
+This option is deprecated and ignored. Prepend @code{sws_flags=@var{flags};}
+to the filtergraph description to specify swscale flags for automatically
+inserted scalers. See @ref{Filtergraph syntax}.
@item hw_frames_ctx
When using a hardware pixel format, this should be a reference to an
Alternatively, the options can be specified as a flat string, but this
syntax is deprecated:
-@var{width}:@var{height}:@var{pix_fmt}:@var{time_base.num}:@var{time_base.den}:@var{pixel_aspect.num}:@var{pixel_aspect.den}[:@var{sws_param}]
+@var{width}:@var{height}:@var{pix_fmt}:@var{time_base.num}:@var{time_base.den}:@var{pixel_aspect.num}:@var{pixel_aspect.den}
@section cellauto
for standard output. If @code{file} option is not set, output is written to the log
with AV_LOG_INFO loglevel.
+@item direct
+Reduces buffering in print mode when output is written to a URL set using @var{file}.
+
@end table
@subsection Examples