Muxers are configured elements in FFmpeg which allow writing
multimedia streams to a particular type of file.

When you configure your FFmpeg build, all the supported muxers
are enabled by default. You can list all available muxers using the
configure option @code{--list-muxers}.

You can disable all the muxers with the configure option
@code{--disable-muxers} and selectively enable / disable single muxers
with the options @code{--enable-muxer=@var{MUXER}} /
@code{--disable-muxer=@var{MUXER}}.

The option @code{-formats} of the ff* tools will display the list of
enabled muxers.

A description of some of the currently available muxers follows.
@anchor{crc}
@section crc

CRC (Cyclic Redundancy Check) testing format.

This muxer computes and prints the Adler-32 CRC of all the input audio
and video frames. By default audio frames are converted to signed
16-bit raw audio and video frames to raw video before computing the
CRC.

The output of the muxer consists of a single line of the form:
CRC=0x@var{CRC}, where @var{CRC} is a hexadecimal number 0-padded to
8 digits containing the CRC of all the decoded input frames.
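The checksum is the standard Adler-32 as implemented in zlib, so the
muxer's output line can be reproduced outside FFmpeg. A minimal sketch
using python3's @code{zlib} module (the input bytes here are stand-in
data, not actual decoded frames):

```shell
# Format an Adler-32 checksum the way the crc muxer prints it:
# "CRC=0x" followed by the checksum zero-padded to 8 hex digits.
# python3's zlib implements the same Adler-32 algorithm.
printf 'data' | python3 -c 'import sys, zlib
print("CRC=0x%08x" % zlib.adler32(sys.stdin.buffer.read()))'
# prints "CRC=0x0400019b"
```

Feeding in the same raw audio and video bytes the muxer hashes should
reproduce its CRC line.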
For example to compute the CRC of the input, and store it in the file
@file{out.crc}:
@example
ffmpeg -i INPUT -f crc out.crc
@end example

You can print the CRC to stdout with the command:
@example
ffmpeg -i INPUT -f crc -
@end example

You can select the output format of each frame with @command{ffmpeg} by
specifying the audio and video codec and format. For example to
compute the CRC of the input audio converted to PCM unsigned 8-bit
and the input video converted to MPEG-2 video, use the command:
@example
ffmpeg -i INPUT -c:a pcm_u8 -c:v mpeg2video -f crc -
@end example

See also the @ref{framecrc} muxer.
@anchor{framecrc}
@section framecrc

Per-packet CRC (Cyclic Redundancy Check) testing format.

This muxer computes and prints the Adler-32 CRC for each audio
and video packet. By default audio frames are converted to signed
16-bit raw audio and video frames to raw video before computing the
CRC.

The output of the muxer consists of a line for each audio and video
packet of the form:
@example
@var{stream_index}, @var{packet_dts}, @var{packet_pts}, @var{packet_duration}, @var{packet_size}, 0x@var{CRC}
@end example

@var{CRC} is a hexadecimal number 0-padded to 8 digits containing the
CRC of the packet.

For example to compute the CRC of the audio and video frames in
@file{INPUT}, converted to raw audio and video packets, and store it
in the file @file{out.crc}:
@example
ffmpeg -i INPUT -f framecrc out.crc
@end example

To print the information to stdout, use the command:
@example
ffmpeg -i INPUT -f framecrc -
@end example

With @command{ffmpeg}, you can select the output format to which the
audio and video frames are encoded before computing the CRC for each
packet by specifying the audio and video codec. For example, to
compute the CRC of each decoded input audio frame converted to PCM
unsigned 8-bit and of each decoded input video frame converted to
MPEG-2 video, use the command:
@example
ffmpeg -i INPUT -c:a pcm_u8 -c:v mpeg2video -f framecrc -
@end example

See also the @ref{crc} muxer.
@anchor{framemd5}
@section framemd5

Per-packet MD5 testing format.

This muxer computes and prints the MD5 hash for each audio
and video packet. By default audio frames are converted to signed
16-bit raw audio and video frames to raw video before computing the
hash.

The output of the muxer consists of a line for each audio and video
packet of the form:
@example
@var{stream_index}, @var{packet_dts}, @var{packet_pts}, @var{packet_duration}, @var{packet_size}, @var{MD5}
@end example

@var{MD5} is a hexadecimal number representing the computed MD5 hash
for the packet.
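The hash on each line is the plain MD5 of the packet's raw bytes, so it
can be cross-checked with standard tools. A sketch using
@command{md5sum} (the payload here is stand-in data, not an actual
packet):

```shell
# MD5 of a stand-in packet payload; the framemd5 muxer would print the
# same digest for a packet containing exactly these bytes.
printf 'hello' | md5sum | cut -d' ' -f1
# prints "5d41402abc4b2a76b9719d911017c592"
```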
For example to compute the MD5 of the audio and video frames in
@file{INPUT}, converted to raw audio and video packets, and store it
in the file @file{out.md5}:
@example
ffmpeg -i INPUT -f framemd5 out.md5
@end example

To print the information to stdout, use the command:
@example
ffmpeg -i INPUT -f framemd5 -
@end example

See also the @ref{md5} muxer.
@section ico

ICO file muxer.

Microsoft's icon file format (ICO) has some strict limitations that should be noted:

@itemize
@item
Size cannot exceed 256 pixels in any dimension

@item
Only BMP and PNG images can be stored

@item
If a BMP image is used, it must be one of the following pixel formats:
@example
BMP Bit Depth   FFmpeg Pixel Format
1bit            pal8
4bit            pal8
8bit            pal8
16bit           rgb555le
24bit           bgr24
32bit           bgra
@end example

@item
If a BMP image is used, it must use the BITMAPINFOHEADER DIB header

@item
If a PNG image is used, it must use the rgba pixel format
@end itemize
@anchor{image2}
@section image2

Image file muxer.

The image file muxer writes video frames to image files.

The output filenames are specified by a pattern, which can be used to
produce sequentially numbered series of files.
The pattern may contain the string "%d" or "%0@var{N}d", which
specifies the position of the characters representing a number in
the filenames. If the form "%0@var{N}d" is used, the string
representing the number in each filename is 0-padded to @var{N}
digits. The literal character '%' can be specified in the pattern with
the string "%%".

If the pattern contains "%d" or "%0@var{N}d", the first filename of
the file list specified will contain the number 1, all the following
numbers will be sequential.

The pattern may contain a suffix which is used to automatically
determine the format of the image files to write.

For example the pattern "img-%03d.bmp" will specify a sequence of
filenames of the form @file{img-001.bmp}, @file{img-002.bmp}, ...,
@file{img-010.bmp}, etc.
The pattern "img%%-%d.jpg" will specify a sequence of filenames of the
form @file{img%-1.jpg}, @file{img%-2.jpg}, ..., @file{img%-10.jpg},
etc.
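The numbering behaves like printf-style zero-padded integer conversion,
so the expansion of a pattern can be previewed directly in the shell;
for example for the pattern "img-%03d.bmp":

```shell
# Preview the filenames that the pattern "img-%03d.bmp" expands to:
# %03d zero-pads the sequence number to 3 digits, as in the muxer.
for i in 1 2 10; do
    printf 'img-%03d.bmp\n' "$i"
done
# prints img-001.bmp, img-002.bmp, img-010.bmp (one per line)
```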
The following example shows how to use @command{ffmpeg} for creating a
sequence of files @file{img-001.jpeg}, @file{img-002.jpeg}, ...,
taking one image every second from the input video:
@example
ffmpeg -i in.avi -vsync 1 -r 1 -f image2 'img-%03d.jpeg'
@end example

Note that with @command{ffmpeg}, if the format is not specified with the
@code{-f} option and the output filename specifies an image file
format, the image2 muxer is automatically selected, so the previous
command can be written as:
@example
ffmpeg -i in.avi -vsync 1 -r 1 'img-%03d.jpeg'
@end example

Note also that the pattern need not necessarily contain "%d" or
"%0@var{N}d"; for example to create a single image file
@file{img.jpeg} from the input video you can employ the command:
@example
ffmpeg -i in.avi -f image2 -frames:v 1 img.jpeg
@end example

The image muxer supports the .Y.U.V image file format. This format is
special in that each image frame consists of three files, one for
each of the YUV420P components. To read or write this image file format,
specify the name of the '.Y' file. The muxer will automatically open the
'.U' and '.V' files as required.
@anchor{md5}
@section md5

MD5 testing format.

This muxer computes and prints the MD5 hash of all the input audio
and video frames. By default audio frames are converted to signed
16-bit raw audio and video frames to raw video before computing the
hash.

The output of the muxer consists of a single line of the form:
MD5=@var{MD5}, where @var{MD5} is a hexadecimal number representing
the computed MD5 hash.

For example to compute the MD5 hash of the input converted to raw
audio and video, and store it in the file @file{out.md5}:
@example
ffmpeg -i INPUT -f md5 out.md5
@end example

You can print the MD5 to stdout with the command:
@example
ffmpeg -i INPUT -f md5 -
@end example

See also the @ref{framemd5} muxer.
@section MOV/MP4/ISMV

The mov/mp4/ismv muxer supports fragmentation. Normally, a MOV/MP4
file has all the metadata about all packets stored in one location
(written at the end of the file, it can be moved to the start for
better playback using the @command{qt-faststart} tool). A fragmented
file consists of a number of fragments, where packets and metadata
about these packets are stored together. Writing a fragmented
file has the advantage that the file is decodable even if the
writing is interrupted (while a normal MOV/MP4 is undecodable if
it is not properly finished), and it requires less memory when writing
very long files (since writing normal MOV/MP4 files stores info about
every single packet in memory until the file is closed). The downside
is that it is less compatible with other applications.

Fragmentation is enabled by setting one of the AVOptions that define
how to cut the file into fragments:

@table @option
@item -moov_size @var{bytes}
Reserves space for the moov atom at the beginning of the file instead of placing the
moov atom at the end. If the space reserved is insufficient, muxing will fail.
@item -movflags frag_keyframe
Start a new fragment at each video keyframe.
@item -frag_duration @var{duration}
Create fragments that are @var{duration} microseconds long.
@item -frag_size @var{size}
Create fragments that contain up to @var{size} bytes of payload data.
@item -movflags frag_custom
Allow the caller to manually choose when to cut fragments, by
calling @code{av_write_frame(ctx, NULL)} to write a fragment with
the packets written so far. (This is only useful with other
applications integrating libavformat, not from @command{ffmpeg}.)
@item -min_frag_duration @var{duration}
Don't create fragments that are shorter than @var{duration} microseconds long.
@end table

If more than one condition is specified, fragments are cut when
one of the specified conditions is fulfilled. The exception to this is
@code{-min_frag_duration}, which has to be fulfilled for any of the other
conditions to apply.

Additionally, the way the output file is written can be adjusted
through a few other options:

@table @option
@item -movflags empty_moov
Write an initial moov atom directly at the start of the file, without
describing any samples in it. Generally, an mdat/moov pair is written
at the start of the file, as a normal MOV/MP4 file, containing only
a short portion of the file. With this option set, there is no initial
mdat atom, and the moov atom only describes the tracks but has
a zero duration.

Files written with this option set do not work in QuickTime.
This option is implicitly set when writing ismv (Smooth Streaming) files.
@item -movflags separate_moof
Write a separate moof (movie fragment) atom for each track. Normally,
packets for all tracks are written in a moof atom (which is slightly
more efficient), but with this option set, the muxer writes one moof/mdat
pair for each track, making it easier to separate tracks.

This option is implicitly set when writing ismv (Smooth Streaming) files.
@end table

Smooth Streaming content can be pushed in real time to a publishing
point on IIS with this muxer. Example:
@example
ffmpeg -re @var{<normal input/transcoding options>} -movflags isml+frag_keyframe -f ismv http://server/publishingpoint.isml/Streams(Encoder1)
@end example
@section mpegts

MPEG transport stream muxer.

This muxer implements ISO 13818-1 and part of ETSI EN 300 468.

The muxer options are:

@table @option
@item -mpegts_original_network_id @var{number}
Set the original_network_id (default 0x0001). This is the unique identifier
of a network in DVB. Its main use is in the unique identification of a
service through the path Original_Network_ID, Transport_Stream_ID.
@item -mpegts_transport_stream_id @var{number}
Set the transport_stream_id (default 0x0001). This identifies a
transponder in DVB.
@item -mpegts_service_id @var{number}
Set the service_id (default 0x0001), also known as program in DVB.
@item -mpegts_pmt_start_pid @var{number}
Set the first PID for PMT (default 0x1000, max 0x1f00).
@item -mpegts_start_pid @var{number}
Set the first PID for data packets (default 0x0100, max 0x0f00).
@end table

The recognized metadata settings in the mpegts muxer are @code{service_provider}
and @code{service_name}. If they are not set the default for
@code{service_provider} is "FFmpeg" and the default for
@code{service_name} is "Service01".

@example
ffmpeg -i file.mpg -c copy \
     -mpegts_original_network_id 0x1122 \
     -mpegts_transport_stream_id 0x3344 \
     -mpegts_service_id 0x5566 \
     -mpegts_pmt_start_pid 0x1500 \
     -mpegts_start_pid 0x150 \
     -metadata service_provider="Some provider" \
     -metadata service_name="Some Channel" \
     -y out.ts
@end example
@section null

Null muxer.

This muxer does not generate any output file, it is mainly useful for
testing or benchmarking purposes.

For example to benchmark decoding with @command{ffmpeg} you can use the
command:
@example
ffmpeg -benchmark -i INPUT -f null out.null
@end example

Note that the above command does not read or write the @file{out.null}
file, but specifying the output file is required by the @command{ffmpeg}
syntax.

Alternatively you can write the command as:
@example
ffmpeg -benchmark -i INPUT -f null -
@end example
@section matroska

Matroska container muxer.

This muxer implements the matroska and webm container specs.

The recognized metadata settings in this muxer are:

@table @option

@item title=@var{title name}
Name provided to a single track

@item language=@var{language name}
Specifies the language of the track in the Matroska languages form

@item stereo_mode=@var{mode}
Stereo 3D video layout of two views in a single video track
@table @option
@item mono
video is not stereo
@item left_right
Both views are arranged side by side, Left-eye view is on the left
@item bottom_top
Both views are arranged in top-bottom orientation, Left-eye view is at bottom
@item top_bottom
Both views are arranged in top-bottom orientation, Left-eye view is on top
@item checkerboard_rl
Each view is arranged in a checkerboard interleaved pattern, Right-eye view being first
@item checkerboard_lr
Each view is arranged in a checkerboard interleaved pattern, Left-eye view being first
@item row_interleaved_rl
Each view is constituted by a row based interleaving, Right-eye view is first row
@item row_interleaved_lr
Each view is constituted by a row based interleaving, Left-eye view is first row
@item col_interleaved_rl
Both views are arranged in a column based interleaving manner, Right-eye view is first column
@item col_interleaved_lr
Both views are arranged in a column based interleaving manner, Left-eye view is first column
@item anaglyph_cyan_red
All frames are in anaglyph format viewable through red-cyan filters
@item right_left
Both views are arranged side by side, Right-eye view is on the left
@item anaglyph_green_magenta
All frames are in anaglyph format viewable through green-magenta filters
@item block_lr
Both eyes laced in one Block, Left-eye view is first
@item block_rl
Both eyes laced in one Block, Right-eye view is first
@end table
@end table

For example a 3D WebM clip can be created using the following command line:
@example
ffmpeg -i sample_left_right_clip.mpg -an -c:v libvpx -metadata stereo_mode=left_right -y stereo_clip.webm
@end example
@section segment, stream_segment, ssegment

Basic stream segmenter.

The segmenter muxer outputs streams to a number of separate files of nearly
fixed duration. Output filename pattern can be set in a fashion similar to
@ref{image2}.

@code{stream_segment} is a variant of the muxer used to write to
streaming output formats, i.e. which do not require global headers,
and is recommended for outputting e.g. to MPEG transport stream segments.
@code{ssegment} is a shorter alias for @code{stream_segment}.

Every segment starts with a video keyframe, if a video stream is present.
Note that if you want accurate splitting for a video file, you need to
make the input key frames correspond to the exact splitting times
expected by the segmenter, or the segment muxer will start the new
segment with the key frame found next after the specified start
time.

The segment muxer works best with a single constant frame rate video.

Optionally it can generate a list of the created segments, by setting
the option @var{segment_list}. The list type is specified by the
@var{segment_list_type} option.

The segment muxer supports the following options:

@table @option
@item segment_format @var{format}
Override the inner container format, by default it is guessed by the filename
extension.
@item segment_list @var{name}
Also generate a listfile named @var{name}. If not specified no
listfile is generated.
@item segment_list_size @var{size}
Overwrite the listfile once it reaches @var{size} entries. If 0
the listfile is never overwritten. Default value is 0.
@item segment_list_type @var{type}
Specify the format for the segment list file.

The following values are recognized:
@table @option
@item flat
Generate a flat list for the created segments, one segment per line.

@item csv, ext
Generate a list for the created segments, one segment per line,
each line matching the format (comma-separated values):
@example
@var{segment_filename},@var{segment_start_time},@var{segment_end_time}
@end example

@var{segment_filename} is the name of the output file generated by the
muxer according to the provided pattern. CSV escaping (according to
RFC4180) is applied if required.

@var{segment_start_time} and @var{segment_end_time} specify
the segment start and end time expressed in seconds.

A list file with the suffix @code{".csv"} or @code{".ext"} will
auto-select this format.

@code{ext} is deprecated in favor of @code{csv}.

@item m3u8
Generate an extended M3U8 file, version 4, compliant with
@url{http://tools.ietf.org/id/draft-pantos-http-live-streaming-08.txt}.

A list file with the suffix @code{".m3u8"} will auto-select this format.
@end table

If not specified the type is guessed from the list file name suffix.
@end table
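Since the @code{csv} list type is plain comma-separated values, it is
easy to post-process with standard tools. A sketch that prints each
segment's duration from a hypothetical @file{out.csv} (the two list
lines below are made-up sample data, not real muxer output):

```shell
# Compute each segment's duration from a csv-type segment list,
# whose lines have the form: filename,start_time,end_time
printf 'out000.nut,0.00,9.97\nout001.nut,9.97,20.02\n' |
awk -F, '{ printf "%s: %.2f\n", $1, $3 - $2 }'
# prints "out000.nut: 9.97" and "out001.nut: 10.05"
```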
@table @option
@item segment_time @var{time}
Set segment duration to @var{time}. Default value is "2".
@item segment_time_delta @var{delta}
Specify the accuracy time when selecting the start time for a
segment. Default value is "0".

When delta is specified a key-frame will start a new segment if its
PTS satisfies the relation:
@example
PTS >= start_time - time_delta
@end example

This option is useful when splitting video content, which is always
split at GOP boundaries, in case a key frame is found just before the
specified split time.

In particular it may be used in combination with the @command{ffmpeg} option
@var{force_key_frames}. The key frame times specified by
@var{force_key_frames} may not be honored accurately because of rounding
issues, with the consequence that a key frame time may end up set just
before the specified time. For constant frame rate videos a value of
1/(2*@var{frame_rate}) should address the worst case mismatch between
the specified time and the time set by @var{force_key_frames}.
@end table
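To illustrate the relation with made-up numbers: with a target split
time of 10 seconds and a @code{segment_time_delta} of 0.05, a key frame
at PTS 9.97 already starts the new segment, since 9.97 >= 10 - 0.05. A
sketch of the check:

```shell
# Evaluate the segment cut condition PTS >= start_time - time_delta
# for hypothetical values: PTS 9.97, target start time 10, delta 0.05.
awk 'BEGIN {
    pts = 9.97; start_time = 10; time_delta = 0.05
    if (pts >= start_time - time_delta)
        print "cut: start new segment"
    else
        print "keep: stay in current segment"
}'
# prints "cut: start new segment"
```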
@table @option
@item segment_times @var{times}
Specify a list of split points. @var{times} contains a list of
comma-separated duration specifications, in increasing order.
@item segment_wrap @var{limit}
Wrap around segment index once it reaches @var{limit}.
@end table

Some examples follow.

@itemize
@item
To remux the content of file @file{in.mkv} to a list of segments
@file{out000.nut}, @file{out001.nut}, etc., and write the list of
generated segments to @file{out.list}:
@example
ffmpeg -i in.mkv -codec copy -map 0 -f segment -segment_list out.list out%03d.nut
@end example

@item
As the example above, but segment the input file according to the split
points specified by the @var{segment_times} option:
@example
ffmpeg -i in.mkv -codec copy -map 0 -f segment -segment_list out.csv -segment_times 1,2,3,5,8,13,21 out%03d.nut
@end example

@item
As the example above, but use the @command{ffmpeg} @var{force_key_frames}
option to force key frames in the input at the specified locations, together
with the segment option @var{segment_time_delta} to account for
possible roundings operated when setting key frame times.
@example
ffmpeg -i in.mkv -force_key_frames 1,2,3,5,8,13,21 -vcodec mpeg4 -acodec pcm_s16le -map 0 \
-f segment -segment_list out.csv -segment_times 1,2,3,5,8,13,21 -segment_time_delta 0.05 out%03d.nut
@end example

In order to force key frames on the input file, transcoding is
required.

@item
To convert the @file{in.mkv} to TS segments using the @code{libx264}
and @code{libfaac} encoders:
@example
ffmpeg -i in.mkv -map 0 -codec:v libx264 -codec:a libfaac -f ssegment -segment_list out.list out%03d.ts
@end example
@end itemize
@section mp3

The MP3 muxer writes a raw MP3 stream with an ID3v2 header at the beginning and
optionally an ID3v1 tag at the end. ID3v2.3 and ID3v2.4 are supported, the
@code{id3v2_version} option controls which one is used. The legacy ID3v1 tag is
not written by default, but may be enabled with the @code{write_id3v1} option.

For seekable output the muxer also writes a Xing frame at the beginning, which
contains the number of frames in the file. It is useful for computing duration
of VBR files.

The muxer supports writing ID3v2 attached pictures (APIC frames). The pictures
are supplied to the muxer in form of a video stream with a single packet. There
can be any number of those streams, each will correspond to a single APIC frame.
The stream metadata tags @var{title} and @var{comment} map to APIC
@var{description} and @var{picture type} respectively. See
@url{http://id3.org/id3v2.4.0-frames} for allowed picture types.

Note that the APIC frames must be written at the beginning, so the muxer will
buffer the audio frames until it gets all the pictures. It is therefore advised
to provide the pictures as soon as possible to avoid excessive buffering.

Examples:

Write an mp3 with an ID3v2.3 header and an ID3v1 footer:
@example
ffmpeg -i INPUT -id3v2_version 3 -write_id3v1 1 out.mp3
@end example

Attach a picture to an mp3:
@example
ffmpeg -i input.mp3 -i cover.png -c copy -metadata:s:v title="Album cover" \
-metadata:s:v comment="Cover (Front)" out.mp3
@end example