Set additional parameter which controls sigmoid function.
@end table
+@subsection Commands
+
+This filter supports all the above options as @ref{commands}.
+
@section asr
Automatic Speech Recognition
@end itemize
+@section afirsrc
+
+Generate FIR coefficients using the frequency sampling method.
+
+The resulting stream can be used with @ref{afir} filter for filtering the audio signal.
+
+The filter accepts the following options:
+
+@table @option
+@item taps, t
+Set the number of filter coefficients in the output audio stream.
+Default value is 1025.
+
+@item frequency, f
+Set frequency points from which magnitude and phase are set.
+These must be in non-decreasing order; the first element must be 0, and the last
+element must be 1. Elements are separated by white spaces.
+
+@item magnitude, m
+Set magnitude value for every frequency point set by @option{frequency}.
+The number of values must be the same as the number of frequency points.
+Values are separated by white spaces.
+
+@item phase, p
+Set phase value for every frequency point set by @option{frequency}.
+The number of values must be the same as the number of frequency points.
+Values are separated by white spaces.
+
+@item sample_rate, r
+Set the sample rate. Default is 44100.
+
+@item nb_samples, n
+Set the number of samples per frame. Default is 1024.
+
+@item win_func, w
+Set window function. Default is blackman.
+@end table
+
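+@subsection Examples
+
+@itemize
+@item
+Generate a low-pass FIR (passband below 10% of Nyquist) and apply it with
+@ref{afir}; an illustrative sketch, the exact frequency/magnitude points
+are arbitrary:
+@example
+ffmpeg -i input.wav -filter_complex "afirsrc=taps=1025:frequency='0 0.1 0.5 1':magnitude='1 1 0 0'[coef];[0:a][coef]afir" output.wav
+@end example
+@end itemize
+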
@section anullsrc
The null audio source, return unprocessed audio frames. It is mainly useful
@end table
-@section blend, tblend
+@anchor{blend}
+@section blend
Blend two video frames into each other.
The default value is @code{all}.
@end table
+@section cas
+
+Apply Contrast Adaptive Sharpen filter to video stream.
+
+The filter accepts the following options:
+
+@table @option
+@item strength
+Set the sharpening strength. Default value is 0.
+
+@item planes
+Set planes to filter. Default value is to filter all
+planes except alpha plane.
+@end table
+
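+@subsection Examples
+
+@itemize
+@item
+Apply moderate adaptive sharpening to the first plane only (illustrative
+values):
+@example
+cas=strength=0.5:planes=1
+@end example
+@end itemize
+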
@section chromahold
Remove all color information for all colors except for certain one.
@end example
@end itemize
+@subsection Commands
+This filter supports the same @ref{commands} as options.
+The command accepts the same syntax as the corresponding option.
+
+If the specified expression is not valid, it is kept at its current
+value.
+
@section colorhold
Remove all color information for all RGB colors except for certain one.
Higher values result in more preserved color.
@end table
+@subsection Commands
+This filter supports the same @ref{commands} as options.
+The command accepts the same syntax as the corresponding option.
+
+If the specified expression is not valid, it is kept at its current
+value.
+
@section colorlevels
Adjust video input frames using levels.
@end example
@end itemize
+@subsection Commands
+
+This filter supports all the above options as @ref{commands}.
+
@section colormatrix
Convert color matrix.
Return the value of the pixel at location (@var{x},@var{y}) of the alpha
plane. Return 0 if there is no such plane.
+@item psum(x,y), lumsum(x, y), cbsum(x,y), crsum(x,y), rsum(x,y), gsum(x,y), bsum(x,y), alphasum(x,y)
+Sum of sample values in the rectangle from (0,0) to (x,y). This allows obtaining
+sums of samples within a rectangle. See the functions without the sum postfix.
+
@item interpolation
Set one of interpolation methods:
@table @option
For functions, if @var{x} and @var{y} are outside the area, the value will be
automatically clipped to the closer edge.
+Please note that this filter can use multiple threads in which case each slice
+will have its own expression state. If you want to use only a single expression
+state because your expressions depend on previous state then you should limit
+the number of filter threads to 1.
+
@subsection Examples
@itemize
The device to upload to must be supplied when the filter is initialised. If
using ffmpeg, select the appropriate device with the @option{-filter_hw_device}
-option.
+option or with the @option{derive_device} option. The input and output devices
+must be of different types and compatible - the exact meaning of this is
+system-dependent, but typically it means that they must refer to the same
+underlying hardware context (for example, refer to the same graphics card).
+
+The following additional parameters are accepted:
+
+@table @option
+@item derive_device @var{type}
+Rather than using the device supplied at initialisation, instead derive a new
+device of type @var{type} from the device the input frames exist on.
+@end table
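+
+For example (hypothetical device setup), upload frames to a device of a
+different type, derived from the device the input frames exist on:
+@example
+-vf "hwupload=derive_device=vaapi"
+@end example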
@anchor{hwupload_cuda}
@section hwupload_cuda
Set vertical radius size. Default value is @code{0}.
Allowed range is integer from 0 to 127.
If it is 0, value will be picked from horizontal @code{radius} option.
+
+@item percentile
+Set the median percentile. Default value is @code{0.5}.
+The default value of @code{0.5} will always pick the median value, while
+@code{0} will pick minimum values, and @code{1} maximum values.
@end table
@subsection Commands
Set value which will be added to filtered result.
@end table
-@anchor{program_opencl}
-@section program_opencl
-
-Filter video using an OpenCL program.
-
-@table @option
-
-@item source
-OpenCL program source file.
-
-@item kernel
-Kernel name in program.
-
-@item inputs
-Number of inputs to the filter. Defaults to 1.
-
-@item size, s
-Size of output frames. Defaults to the same as the first input.
-
-@end table
-
-The program source file must contain a kernel function with the given name,
-which will be run once for each plane of the output. Each run on a plane
-gets enqueued as a separate 2D global NDRange with one work-item for each
-pixel to be generated. The global ID offset for each work-item is therefore
-the coordinates of a pixel in the destination image.
-
-The kernel function needs to take the following arguments:
-@itemize
-@item
-Destination image, @var{__write_only image2d_t}.
-
-This image will become the output; the kernel should write all of it.
-@item
-Frame index, @var{unsigned int}.
-
-This is a counter starting from zero and increasing by one for each frame.
-@item
-Source images, @var{__read_only image2d_t}.
-
-These are the most recent images on each input. The kernel may read from
-them to generate the output, but they can't be written to.
-@end itemize
-
-Example programs:
-
-@itemize
-@item
-Copy the input to the output (output must be the same size as the input).
-@verbatim
-__kernel void copy(__write_only image2d_t destination,
- unsigned int index,
- __read_only image2d_t source)
-{
- const sampler_t sampler = CLK_NORMALIZED_COORDS_FALSE;
-
- int2 location = (int2)(get_global_id(0), get_global_id(1));
-
- float4 value = read_imagef(source, sampler, location);
-
- write_imagef(destination, location, value);
-}
-@end verbatim
-
-@item
-Apply a simple transformation, rotating the input by an amount increasing
-with the index counter. Pixel values are linearly interpolated by the
-sampler, and the output need not have the same dimensions as the input.
-@verbatim
-__kernel void rotate_image(__write_only image2d_t dst,
- unsigned int index,
- __read_only image2d_t src)
-{
- const sampler_t sampler = (CLK_NORMALIZED_COORDS_FALSE |
- CLK_FILTER_LINEAR);
-
- float angle = (float)index / 100.0f;
-
- float2 dst_dim = convert_float2(get_image_dim(dst));
- float2 src_dim = convert_float2(get_image_dim(src));
-
- float2 dst_cen = dst_dim / 2.0f;
- float2 src_cen = src_dim / 2.0f;
-
- int2 dst_loc = (int2)(get_global_id(0), get_global_id(1));
-
- float2 dst_pos = convert_float2(dst_loc) - dst_cen;
- float2 src_pos = {
- cos(angle) * dst_pos.x - sin(angle) * dst_pos.y,
- sin(angle) * dst_pos.x + cos(angle) * dst_pos.y
- };
- src_pos = src_pos * src_dim / dst_dim;
-
- float2 src_loc = src_pos + src_cen;
-
- if (src_loc.x < 0.0f || src_loc.y < 0.0f ||
- src_loc.x > src_dim.x || src_loc.y > src_dim.y)
- write_imagef(dst, dst_loc, 0.5f);
- else
- write_imagef(dst, dst_loc, read_imagef(src, sampler, src_loc));
-}
-@end verbatim
-
-@item
-Blend two inputs together, with the amount of each input used varying
-with the index counter.
-@verbatim
-__kernel void blend_images(__write_only image2d_t dst,
- unsigned int index,
- __read_only image2d_t src1,
- __read_only image2d_t src2)
-{
- const sampler_t sampler = (CLK_NORMALIZED_COORDS_FALSE |
- CLK_FILTER_LINEAR);
-
- float blend = (cos((float)index / 50.0f) + 1.0f) / 2.0f;
-
- int2 dst_loc = (int2)(get_global_id(0), get_global_id(1));
- int2 src1_loc = dst_loc * get_image_dim(src1) / get_image_dim(dst);
- int2 src2_loc = dst_loc * get_image_dim(src2) / get_image_dim(dst);
-
- float4 val1 = read_imagef(src1, sampler, src1_loc);
- float4 val2 = read_imagef(src2, sampler, src2_loc);
-
- write_imagef(dst, dst_loc, val1 * blend + val2 * (1.0f - blend));
-}
-@end verbatim
-
-@end itemize
-
@section pseudocolor
Alter frame colors in video with pseudocolors.
@item format
Specify pixel format of output from this filter. Can be @code{color} or @code{gray}.
Default is @code{color}.
+
+@item fill
+Specify the color of the unmapped pixels. For the syntax of this option,
+check the @ref{color syntax,,"Color" section in the ffmpeg-utils
+manual,ffmpeg-utils}. Default color is @code{black}.
@end table
@section removegrain
@section swapuv
Swap U & V plane.
+@section tblend
+Blend successive video frames.
+
+See @ref{blend}.
+
@section telecine
Apply telecine process to the video.
@item barrel
@item fb
-Facebook's 360 format.
+@item barrelsplit
+Facebook's 360 formats.
@item sg
Stereographic format.
@item out_trans
Set if output video needs to be transposed. Boolean value, by default disabled.
+@item alpha_mask
+Build mask in alpha plane for all unmapped pixels by marking them fully transparent. Boolean value, by default disabled.
@end table
@subsection Examples
@end example
@end itemize
+@subsection Commands
+
+This filter supports a subset of the above options as @ref{commands}.
+
@section vaguedenoiser
Apply a wavelet based denoiser.
Default is @code{3}.
@end table
+@section xfade
+
+Apply cross fade from one input video stream to another input video stream.
+The cross fade is applied for the specified duration.
+
+The filter accepts the following options:
+
+@table @option
+@item transition
+Set one of available transition effects:
+
+@table @samp
+@item custom
+@item fade
+@item wipeleft
+@item wiperight
+@item wipeup
+@item wipedown
+@item slideleft
+@item slideright
+@item slideup
+@item slidedown
+@item circlecrop
+@item rectcrop
+@item distance
+@item fadeblack
+@item fadewhite
+@item radial
+@item smoothleft
+@item smoothright
+@item smoothup
+@item smoothdown
+@item circleopen
+@item circleclose
+@item vertopen
+@item vertclose
+@item horzopen
+@item horzclose
+@item dissolve
+@item pixelize
+@item diagtl
+@item diagtr
+@item diagbl
+@item diagbr
+@end table
+Default transition effect is fade.
+
+@item duration
+Set cross fade duration in seconds.
+Default duration is 1 second.
+
+@item offset
+Set cross fade start relative to first input stream in seconds.
+Default offset is 0.
+
+@item expr
+Set expression for custom transition effect.
+
+The expressions can use the following variables and functions:
+
+@table @option
+@item X
+@item Y
+The coordinates of the current sample.
+
+@item W
+@item H
+The width and height of the image.
+
+@item P
+Progress of transition effect.
+
+@item PLANE
+Currently processed plane.
+
+@item A
+Return the value of the first input at the current location and plane.
+
+@item B
+Return the value of the second input at the current location and plane.
+
+@item a0(x, y)
+@item a1(x, y)
+@item a2(x, y)
+@item a3(x, y)
+Return the value of the pixel at location (@var{x},@var{y}) of the
+first/second/third/fourth component of first input.
+
+@item b0(x, y)
+@item b1(x, y)
+@item b2(x, y)
+@item b3(x, y)
+Return the value of the pixel at location (@var{x},@var{y}) of the
+first/second/third/fourth component of second input.
+@end table
+@end table
+
+@subsection Examples
+
+@itemize
+@item
+Cross fade from one input video to another, with a fade transition lasting
+2 seconds and starting at an offset of 5 seconds:
+@example
+ffmpeg -i first.mp4 -i second.mp4 -filter_complex xfade=transition=fade:duration=2:offset=5 output.mp4
+@end example
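+
+@item
+Use a custom transition expression for a linear cross fade; this sketch
+assumes @var{P} runs from 1 to 0 over the transition:
+@example
+ffmpeg -i first.mp4 -i second.mp4 -filter_complex "xfade=transition=custom:duration=2:offset=5:expr='A*P+B*(1-P)'" output.mp4
+@end example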
+@end itemize
+
@section xmedian
Pick median pixels from several input videos.
@item planes
Set which planes to filter. Default value is @code{15}, by which all planes are processed.
+
+@item percentile
+Set the median percentile. Default value is @code{0.5}.
+The default value of @code{0.5} will always pick the median value, while
+@code{0} will pick minimum values, and @code{1} maximum values.
@end table
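+
+@subsection Examples
+
+@itemize
+@item
+Pick the per-pixel maximum of three inputs by setting @option{percentile}
+to 1 (this sketch assumes the filter's @option{inputs} option):
+@example
+ffmpeg -i a.mp4 -i b.mp4 -i c.mp4 -filter_complex xmedian=inputs=3:percentile=1 output.mp4
+@end example
+@end itemize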
@section xstack
@end example
@end itemize
-@section convolution_opencl
+@section colorkey_opencl
+RGB colorspace color keying.
-Apply convolution of 3x3, 5x5, 7x7 matrix.
+The filter accepts the following options:
+
+@table @option
+@item color
+The color which will be replaced with transparency.
+
+@item similarity
+Similarity percentage with the key color.
+
+0.01 matches only the exact key color, while 1.0 matches everything.
+
+@item blend
+Blend percentage.
+
+0.0 makes pixels either fully transparent, or not transparent at all.
+
+Higher values result in semi-transparent pixels, with a higher transparency
+the more similar the pixels color is to the key color.
+@end table
+
+@subsection Examples
+
+@itemize
+@item
+Make every semi-green pixel in the input transparent with some slight blending:
+@example
+-i INPUT -vf "hwupload, colorkey_opencl=green:0.3:0.1, hwdownload" OUTPUT
+@end example
+@end itemize
+
+@section convolution_opencl
+
+Apply convolution of 3x3, 5x5, 7x7 matrix.
The filter accepts the following options:
@end example
@end itemize
-@section dilation_opencl
-
-Apply dilation effect to the video.
-
-This filter replaces the pixel by the local(3x3) maximum.
-
-It accepts the following options:
-
-@table @option
-@item threshold0
-@item threshold1
-@item threshold2
-@item threshold3
-Limit the maximum change for each plane. Range is @code{[0, 65535]} and default value is @code{65535}.
-If @code{0}, plane will remain unchanged.
-
-@item coordinates
-Flag which specifies the pixel to refer to.
-Range is @code{[0, 255]} and default value is @code{255}, i.e. all eight pixels are used.
-
-Flags to local 3x3 coordinates region centered on @code{x}:
-
- 1 2 3
-
- 4 x 5
-
- 6 7 8
-@end table
-
-@subsection Example
-
-@itemize
-@item
-Apply dilation filter with threshold0 set to 30, threshold1 set 40, threshold2 set to 50 and coordinates set to 231, setting each pixel of the output to the local maximum between pixels: 1, 2, 3, 6, 7, 8 of the 3x3 region centered on it in the input. If the difference between input pixel and local maximum is more then threshold of the corresponding plane, output pixel will be set to input pixel + threshold of corresponding plane.
-@example
--i INPUT -vf "hwupload, dilation_opencl=30:40:50:coordinates=231, hwdownload" OUTPUT
-@end example
-@end itemize
-
@section erosion_opencl
Apply erosion effect to the video.
@end example
@end itemize
-@section colorkey_opencl
-RGB colorspace color keying.
-
-The filter accepts the following options:
-
-@table @option
-@item color
-The color which will be replaced with transparency.
-
-@item similarity
-Similarity percentage with the key color.
-
-0.01 matches only the exact key color, while 1.0 matches everything.
-
-@item blend
-Blend percentage.
-
-0.0 makes pixels either fully transparent, or not transparent at all.
-
-Higher values result in semi-transparent pixels, with a higher transparency
-the more similar the pixels color is to the key color.
-@end table
-
-@subsection Examples
-
-@itemize
-@item
-Make every semi-green pixel in the input transparent with some slight blending:
-@example
--i INPUT -vf "hwupload, colorkey_opencl=green:0.3:0.1, hwdownload" OUTPUT
-@end example
-@end itemize
-
@section deshake_opencl
Feature-point based video stabilization filter.
@end example
@end itemize
+@section dilation_opencl
+
+Apply dilation effect to the video.
+
+This filter replaces each pixel by the local (3x3) maximum.
+
+It accepts the following options:
+
+@table @option
+@item threshold0
+@item threshold1
+@item threshold2
+@item threshold3
+Limit the maximum change for each plane. Range is @code{[0, 65535]} and default value is @code{65535}.
+If @code{0}, plane will remain unchanged.
+
+@item coordinates
+Flag which specifies the pixel to refer to.
+Range is @code{[0, 255]} and default value is @code{255}, i.e. all eight pixels are used.
+
+Flags to local 3x3 coordinates region centered on @code{x}:
+
+ 1 2 3
+
+ 4 x 5
+
+ 6 7 8
+@end table
+
+@subsection Example
+
+@itemize
+@item
+Apply the dilation filter with threshold0 set to 30, threshold1 set to 40, threshold2 set to 50 and coordinates set to 231, setting each pixel of the output to the local maximum between pixels: 1, 2, 3, 6, 7, 8 of the 3x3 region centered on it in the input. If the difference between the input pixel and the local maximum is more than the threshold of the corresponding plane, the output pixel will be set to the input pixel plus the threshold of the corresponding plane.
+@example
+-i INPUT -vf "hwupload, dilation_opencl=30:40:50:coordinates=231, hwdownload" OUTPUT
+@end example
+@end itemize
+
@section nlmeans_opencl
Non-local Means denoise filter through OpenCL, this filter accepts same options as @ref{nlmeans}.
@end itemize
+@section pad_opencl
+
+Add padding to the input image, and place the original input at the
+provided @var{x}, @var{y} coordinates.
+
+It accepts the following options:
+
+@table @option
+@item width, w
+@item height, h
+Specify an expression for the size of the output image with the
+paddings added. If the value for @var{width} or @var{height} is 0, the
+corresponding input size is used for the output.
+
+The @var{width} expression can reference the value set by the
+@var{height} expression, and vice versa.
+
+The default value of @var{width} and @var{height} is 0.
+
+@item x
+@item y
+Specify the offsets to place the input image at within the padded area,
+with respect to the top/left border of the output image.
+
+The @var{x} expression can reference the value set by the @var{y}
+expression, and vice versa.
+
+The default value of @var{x} and @var{y} is 0.
+
+If @var{x} or @var{y} evaluate to a negative number, they'll be changed
+so the input image is centered on the padded area.
+
+@item color
+Specify the color of the padded area. For the syntax of this option,
+check the @ref{color syntax,,"Color" section in the ffmpeg-utils
+manual,ffmpeg-utils}.
+
+@item aspect
+Pad to an aspect ratio instead of a resolution.
+@end table
+
+The values for the @var{width}, @var{height}, @var{x}, and @var{y}
+options are expressions containing the following constants:
+
+@table @option
+@item in_w
+@item in_h
+The input video width and height.
+
+@item iw
+@item ih
+These are the same as @var{in_w} and @var{in_h}.
+
+@item out_w
+@item out_h
+The output width and height (the size of the padded area), as
+specified by the @var{width} and @var{height} expressions.
+
+@item ow
+@item oh
+These are the same as @var{out_w} and @var{out_h}.
+
+@item x
+@item y
+The x and y offsets as specified by the @var{x} and @var{y}
+expressions, or NAN if not yet specified.
+
+@item a
+same as @var{iw} / @var{ih}
+
+@item sar
+input sample aspect ratio
+
+@item dar
+input display aspect ratio, it is the same as (@var{iw} / @var{ih}) * @var{sar}
+@end table
+
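+@subsection Examples
+
+@itemize
+@item
+Add 20 pixels of blue padding on every side (illustrative values):
+@example
+-i INPUT -vf "hwupload, pad_opencl=w=iw+40:h=ih+40:x=20:y=20:color=blue, hwdownload" OUTPUT
+@end example
+@end itemize
+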
@section prewitt_opencl
Apply the Prewitt operator (@url{https://en.wikipedia.org/wiki/Prewitt_operator}) to input video stream.
@end example
@end itemize
+@anchor{program_opencl}
+@section program_opencl
+
+Filter video using an OpenCL program.
+
+@table @option
+
+@item source
+OpenCL program source file.
+
+@item kernel
+Kernel name in program.
+
+@item inputs
+Number of inputs to the filter. Defaults to 1.
+
+@item size, s
+Size of output frames. Defaults to the same as the first input.
+
+@end table
+
+The program source file must contain a kernel function with the given name,
+which will be run once for each plane of the output. Each run on a plane
+gets enqueued as a separate 2D global NDRange with one work-item for each
+pixel to be generated. The global ID offset for each work-item is therefore
+the coordinates of a pixel in the destination image.
+
+The kernel function needs to take the following arguments:
+@itemize
+@item
+Destination image, @var{__write_only image2d_t}.
+
+This image will become the output; the kernel should write all of it.
+@item
+Frame index, @var{unsigned int}.
+
+This is a counter starting from zero and increasing by one for each frame.
+@item
+Source images, @var{__read_only image2d_t}.
+
+These are the most recent images on each input. The kernel may read from
+them to generate the output, but they can't be written to.
+@end itemize
+
+Example programs:
+
+@itemize
+@item
+Copy the input to the output (output must be the same size as the input).
+@verbatim
+__kernel void copy(__write_only image2d_t destination,
+ unsigned int index,
+ __read_only image2d_t source)
+{
+ const sampler_t sampler = CLK_NORMALIZED_COORDS_FALSE;
+
+ int2 location = (int2)(get_global_id(0), get_global_id(1));
+
+ float4 value = read_imagef(source, sampler, location);
+
+ write_imagef(destination, location, value);
+}
+@end verbatim
+
+@item
+Apply a simple transformation, rotating the input by an amount increasing
+with the index counter. Pixel values are linearly interpolated by the
+sampler, and the output need not have the same dimensions as the input.
+@verbatim
+__kernel void rotate_image(__write_only image2d_t dst,
+ unsigned int index,
+ __read_only image2d_t src)
+{
+ const sampler_t sampler = (CLK_NORMALIZED_COORDS_FALSE |
+ CLK_FILTER_LINEAR);
+
+ float angle = (float)index / 100.0f;
+
+ float2 dst_dim = convert_float2(get_image_dim(dst));
+ float2 src_dim = convert_float2(get_image_dim(src));
+
+ float2 dst_cen = dst_dim / 2.0f;
+ float2 src_cen = src_dim / 2.0f;
+
+ int2 dst_loc = (int2)(get_global_id(0), get_global_id(1));
+
+ float2 dst_pos = convert_float2(dst_loc) - dst_cen;
+ float2 src_pos = {
+ cos(angle) * dst_pos.x - sin(angle) * dst_pos.y,
+ sin(angle) * dst_pos.x + cos(angle) * dst_pos.y
+ };
+ src_pos = src_pos * src_dim / dst_dim;
+
+ float2 src_loc = src_pos + src_cen;
+
+ if (src_loc.x < 0.0f || src_loc.y < 0.0f ||
+ src_loc.x > src_dim.x || src_loc.y > src_dim.y)
+ write_imagef(dst, dst_loc, 0.5f);
+ else
+ write_imagef(dst, dst_loc, read_imagef(src, sampler, src_loc));
+}
+@end verbatim
+
+@item
+Blend two inputs together, with the amount of each input used varying
+with the index counter.
+@verbatim
+__kernel void blend_images(__write_only image2d_t dst,
+ unsigned int index,
+ __read_only image2d_t src1,
+ __read_only image2d_t src2)
+{
+ const sampler_t sampler = (CLK_NORMALIZED_COORDS_FALSE |
+ CLK_FILTER_LINEAR);
+
+ float blend = (cos((float)index / 50.0f) + 1.0f) / 2.0f;
+
+ int2 dst_loc = (int2)(get_global_id(0), get_global_id(1));
+ int2 src1_loc = dst_loc * get_image_dim(src1) / get_image_dim(dst);
+ int2 src2_loc = dst_loc * get_image_dim(src2) / get_image_dim(dst);
+
+ float4 val1 = read_imagef(src1, sampler, src1_loc);
+ float4 val2 = read_imagef(src2, sampler, src2_loc);
+
+ write_imagef(dst, dst_loc, val1 * blend + val2 * (1.0f - blend));
+}
+@end verbatim
+
+@end itemize
+
@section roberts_opencl
Apply the Roberts cross operator (@url{https://en.wikipedia.org/wiki/Roberts_cross}) to input video stream.
@end example
@end itemize
+@section xfade_opencl
+
+Cross fade two videos with custom transition effect by using OpenCL.
+
+It accepts the following options:
+
+@table @option
+@item transition
+Set one of possible transition effects.
+
+@table @option
+@item custom
+Select the custom transition effect; the actual transition description
+will be picked from the @option{source} and @option{kernel} options.
+
+@item fade
+@item wipeleft
+@item wiperight
+@item wipeup
+@item wipedown
+@item slideleft
+@item slideright
+@item slideup
+@item slidedown
+
+Default transition is fade.
+@end table
+
+@item source
+OpenCL program source file for custom transition.
+
+@item kernel
+Set name of kernel to use for custom transition from program source file.
+
+@item duration
+Set duration of video transition.
+
+@item offset
+Set time of start of transition relative to first video.
+@end table
+
+The program source file must contain a kernel function with the given name,
+which will be run once for each plane of the output. Each run on a plane
+gets enqueued as a separate 2D global NDRange with one work-item for each
+pixel to be generated. The global ID offset for each work-item is therefore
+the coordinates of a pixel in the destination image.
+
+The kernel function needs to take the following arguments:
+@itemize
+@item
+Destination image, @var{__write_only image2d_t}.
+
+This image will become the output; the kernel should write all of it.
+
+@item
+First source image, @var{__read_only image2d_t}.
+Second source image, @var{__read_only image2d_t}.
+
+These are the most recent images on each input. The kernel may read from
+them to generate the output, but they can't be written to.
+
+@item
+Transition progress, @var{float}. This value is always between 0 and 1 inclusive.
+@end itemize
+
+Example programs:
+
+@itemize
+@item
+Apply dots curtain transition effect:
+@verbatim
+__kernel void blend_images(__write_only image2d_t dst,
+ __read_only image2d_t src1,
+ __read_only image2d_t src2,
+ float progress)
+{
+ const sampler_t sampler = (CLK_NORMALIZED_COORDS_FALSE |
+ CLK_FILTER_LINEAR);
+ int2 p = (int2)(get_global_id(0), get_global_id(1));
+ float2 rp = (float2)(get_global_id(0), get_global_id(1));
+ float2 dim = (float2)(get_image_dim(src1).x, get_image_dim(src1).y);
+ rp = rp / dim;
+
+ float2 dots = (float2)(20.0, 20.0);
+ float2 center = (float2)(0,0);
+ float2 unused;
+
+ float4 val1 = read_imagef(src1, sampler, p);
+ float4 val2 = read_imagef(src2, sampler, p);
+ bool next = distance(fract(rp * dots, &unused), (float2)(0.5, 0.5)) < (progress / distance(rp, center));
+
+ write_imagef(dst, p, next ? val1 : val2);
+}
+@end verbatim
+
+@end itemize
+
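+A hypothetical invocation, uploading both inputs before cross fading:
+@example
+ffmpeg -i first.mp4 -i second.mp4 -filter_complex "[0:v]hwupload[a];[1:v]hwupload[b];[a][b]xfade_opencl=transition=wipeleft:duration=2:offset=5,hwdownload" output.mp4
+@end example
+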
@c man end OPENCL VIDEO FILTERS
@chapter VAAPI Video Filters
To use vaapi filters, you need to setup the vaapi device correctly. For more information, please read @url{https://trac.ffmpeg.org/wiki/Hardware/VAAPI}
-@section tonemap_vappi
+@section tonemap_vaapi
Perform HDR(High Dynamic Range) to SDR(Standard Dynamic Range) conversion with tone-mapping.
It maps the dynamic range of HDR10 content to the SDR content.