Specify the weight of each input audio stream as a sequence.
Each weight is separated by a space. By default all inputs have the same weight.
-@item sum
-Do not scale inputs but instead do only summation of samples.
-Beware of heavy clipping if inputs are not normalized prior of filtering
-or output from @var{amix} normalized after filtering. By default is disabled.
+@item normalize
+Always scale inputs instead of only doing summation of samples.
+If this option is disabled, beware of heavy clipping unless the inputs are
+normalized before this filter or its output is normalized afterwards
+(see the example after this table). Enabled by default.
@end table
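+
+For instance, a sketch of mixing two inputs by plain summation, i.e. with
+rescaling disabled (the input and output file names are placeholders):
+@example
+ffmpeg -i a.wav -i b.wav -filter_complex amix=inputs=2:normalize=0 out.wav
+@end example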
@subsection Commands
Specify the width and height of the logo to clear. They must be
specified.
-@item band, t
-Specify the thickness of the fuzzy edge of the rectangle (added to
-@var{w} and @var{h}). The default value is 1. This option is
-deprecated, setting higher values should no longer be necessary and
-is not recommended.
-
@item show
When set to 1, a green rectangle is drawn on the screen to simplify
finding the right @var{x}, @var{y}, @var{w}, and @var{h} parameters.
@itemize
@item
Set a rectangle covering the area with top left corner coordinates 0,0
-and size 100x77, and a band of size 10:
+and size 100x77:
@example
-delogo=x=0:y=0:w=100:h=77:band=10
+delogo=x=0:y=0:w=100:h=77
@end example
@end itemize
@item tensorflow
TensorFlow backend. To enable this backend you
need to install the TensorFlow for C library (see
-@url{https://www.tensorflow.org/install/install_c}) and configure FFmpeg with
+@url{https://www.tensorflow.org/install/lang_c}) and configure FFmpeg with
@code{--enable-libtensorflow}
@end table
Default value is @samp{native}.
@end example
@end itemize
+@section dnn_detect
+
+Do object detection with deep neural networks.
+
+The filter accepts the following options:
+
+@table @option
+@item dnn_backend
+Specify which DNN backend to use for model loading and execution. This option
+currently accepts only openvino; a tensorflow backend will be added later.
+
+@item model
+Set path to model file specifying network architecture and its parameters.
+Note that different backends use different file formats.
+
+@item input
+Set the input name of the dnn network.
+
+@item output
+Set the output name of the dnn network.
+
+@item confidence
+Set the confidence threshold (default: 0.5).
+
+@item labels
+Set path to label file specifying the mapping between label id and name.
+Each label name is written on its own line; trailing spaces and empty lines are skipped.
+The first line is the name of label id 0 (usually it is 'background'),
+and the second line is the name of label id 1, etc.
+If the label file is not provided, the label id is used as the name.
+
+@item backend_configs
+Set the configs to be passed into the backend.
+
+@item async
+Use DNN asynchronous execution if set (default: set); fall back to synchronous
+execution if the backend does not support async.
+
+@end table
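+
+A minimal invocation sketch (the model and label file names below are
+placeholders, and the input/output tensor names depend on the model used):
+@example
+ffmpeg -i input.mp4 -vf dnn_detect=dnn_backend=openvino:model=detect.xml:input=data:output=detection_out:confidence=0.6:labels=detect.labels -f null -
+@end example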
+
@anchor{dnn_processing}
@section dnn_processing
@item tensorflow
TensorFlow backend. To enable this backend you
need to install the TensorFlow for C library (see
-@url{https://www.tensorflow.org/install/install_c}) and configure FFmpeg with
+@url{https://www.tensorflow.org/install/lang_c}) and configure FFmpeg with
@code{--enable-libtensorflow}
@item openvino
@item output
Set the output name of the dnn network.
+@item backend_configs
+Set the configs to be passed into the backend.
+
+For the tensorflow backend, you can set its configs with the @option{sess_config} option.
+Please use tools/python/tf_sess_config.py to get the configs of the TensorFlow backend for your system.
+
@item async
Use DNN asynchronous execution if set (default: set); fall back to synchronous
execution if the backend does not support async.
@end example
@item
-Handle the Y channel with espcn.pb (see @ref{sr} filter), which changes frame size, for format yuv420p (planar YUV formats supported):
+Handle the Y channel with espcn.pb (see @ref{sr} filter), which changes frame size, for format yuv420p (planar YUV formats supported).
+Please use tools/python/tf_sess_config.py to get the configs of the TensorFlow backend for your system:
@example
-./ffmpeg -i 480p.jpg -vf format=yuv420p,dnn_processing=dnn_backend=tensorflow:model=espcn.pb:input=x:output=y -y tmp.espcn.jpg
+./ffmpeg -i 480p.jpg -vf format=yuv420p,dnn_processing=dnn_backend=tensorflow:model=espcn.pb:input=x:output=y:backend_configs=sess_config=0x10022805320e09cdccccccccccec3f20012a01303801 -y tmp.espcn.jpg
@end example
@end itemize
@item pal8
Set pal8 output pixel format. This option does not work with codebook
-length greater than 256.
+length greater than 256. Default is disabled.
@end table
@section entropy
@item xmin, ymin, xmax, ymax
Specifies the rectangle in which to search.
+
+@item discard
+Discard frames where the object is not detected. Default is disabled.
@end table
@subsection Examples
The @code{hysteresis} filter also supports the @ref{framesync} options.
+@section identity
+
+Obtain the identity score between two input videos.
+
+This filter takes two input videos.
+
+Both input videos must have the same resolution and pixel format for
+this filter to work correctly. Also it assumes that both inputs
+have the same number of frames, which are compared one by one.
+
+The obtained per-component, average, min and max identity scores are printed through
+the logging system.
+
+The filter stores the calculated identity scores of each frame in frame metadata.
+
+In the example below, the input file @file{main.mpg} being processed is compared
+with the reference file @file{ref.mpg}.
+
+@example
+ffmpeg -i main.mpg -i ref.mpg -lavfi identity -f null -
+@end example
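+
+To inspect the per-frame scores stored in frame metadata, the output of this
+filter can be chained into the @code{metadata} filter (a sketch):
+@example
+ffmpeg -i main.mpg -i ref.mpg -lavfi identity,metadata=mode=print -f null -
+@end example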
+
@section idet
Detect video interlacing type.
A description of the accepted options follows.
@table @option
-@item nb_inputs
+@item inputs
The number of inputs. If unspecified, it defaults to 2.
@item weights
The syntax is the same as that of the option with the same name.
@end table
+@section monochrome
+Convert video to gray using a custom color filter.
+
+A description of the accepted options follows.
+
+@table @option
+@item cb
+Set the chroma blue spot. Allowed range is from -1 to 1.
+Default value is 0.
+
+@item cr
+Set the chroma red spot. Allowed range is from -1 to 1.
+Default value is 0.
+
+@item size
+Set the color filter size. Allowed range is from 0.1 to 10.
+Default value is 1.
+
+@item high
+Set the highlights strength. Allowed range is from 0 to 1.
+Default value is 0.
+@end table
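+
+For example, a possible invocation (the option values are purely illustrative):
+@example
+ffmpeg -i input.mp4 -vf monochrome=cb=-0.4:cr=0.4:size=2:high=0.2 output.mp4
+@end example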
+
+@subsection Commands
+
+This filter supports all the above options as @ref{commands}.
+
@section mpdecimate
Drop frames that do not differ greatly from the previous frame in
64*5, and default value for @option{frac} is 0.33.
@end table
+@section msad
+
+Obtain the MSAD (Mean Sum of Absolute Differences) between two input videos.
+
+This filter takes two input videos.
+
+Both input videos must have the same resolution and pixel format for
+this filter to work correctly. Also it assumes that both inputs
+have the same number of frames, which are compared one by one.
+
+The obtained per-component, average, min and max MSAD values are printed through
+the logging system.
+
+The filter stores the calculated MSAD of each frame in frame metadata.
+
+In the example below, the input file @file{main.mpg} being processed is compared
+with the reference file @file{ref.mpg}.
+
+@example
+ffmpeg -i main.mpg -i ref.mpg -lavfi msad -f null -
+@end example
@section negate
Set window Y position, relative offset on Y axis.
@end table
+@subsection Commands
+
+This filter supports the same @ref{commands} as options.
+
@section pp
Enable the specified chain of postprocessing subfilters using libpostproc. This
@item tensorflow
TensorFlow backend. To enable this backend you
need to install the TensorFlow for C library (see
-@url{https://www.tensorflow.org/install/install_c}) and configure FFmpeg with
+@url{https://www.tensorflow.org/install/lang_c}) and configure FFmpeg with
@code{--enable-libtensorflow}
@end table
This filter supports all the above options as @ref{commands}.
+@section vif
+
+Obtain the average VIF (Visual Information Fidelity) between two input videos.
+
+This filter takes two input videos.
+
+Both input videos must have the same resolution and pixel format for
+this filter to work correctly. Also it assumes that both inputs
+have the same number of frames, which are compared one by one.
+
+The obtained average VIF score is printed through the logging system.
+
+The filter stores the calculated VIF score of each frame.
+
+In the example below, the input file @file{main.mpg} being processed is compared
+with the reference file @file{ref.mpg}.
+
+@example
+ffmpeg -i main.mpg -i ref.mpg -lavfi vif -f null -
+@end example
+
@anchor{vignette}
@section vignette
Apply cross fade from one input video stream to another input video stream.
The cross fade is applied for the specified duration.
+Both inputs must be constant frame-rate and have the same resolution, pixel format,
+frame rate and timebase.
+
The filter accepts the following options:
@table @option
@item d
Set the duration expression in number of frames.
This sets for how many frames the effect will last for a
-single input image.
+single input image. Default is 90.
@item s
Set the output image size, default is 'hd720'.
@item npl
Set the nominal peak luminance.
+
+@item param_a
+Parameter A for scaling filters. Parameter "b" for bicubic, and the number of
+filter taps for lanczos.
+
+@item param_b
+Parameter B for scaling filters. Parameter "c" for bicubic.
@end table
The values of the @option{w} and @option{h} options are expressions