Muxers are configured elements in FFmpeg which allow writing
multimedia streams to a particular type of file.

When you configure your FFmpeg build, all the supported muxers
are enabled by default. You can list all available muxers using the
configure option @code{--list-muxers}.

You can disable all the muxers with the configure option
@code{--disable-muxers} and selectively enable / disable single muxers
with the options @code{--enable-muxer=@var{MUXER}} /
@code{--disable-muxer=@var{MUXER}}.

The option @code{-formats} of the ff* tools will display the list of
enabled muxers.

A description of some of the currently available muxers follows.
@anchor{crc}
@section crc

CRC (Cyclic Redundancy Check) testing format.

This muxer computes and prints the Adler-32 CRC of all the input audio
and video frames. By default audio frames are converted to signed
16-bit raw audio and video frames to raw video before computing the
CRC.

The output of the muxer consists of a single line of the form:
CRC=0x@var{CRC}, where @var{CRC} is a hexadecimal number 0-padded to
8 digits containing the CRC for all the decoded input frames.
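Downstream tooling often compares this line between two runs. As a sketch of how it can be consumed (in Python, our choice of language; the @code{parse_crc} helper is illustrative, not part of FFmpeg):

```python
import re

def parse_crc(line: str) -> int:
    """Parse the single 'CRC=0x........' line written by the crc muxer."""
    m = re.fullmatch(r"CRC=0x([0-9a-fA-F]{8})", line.strip())
    if m is None:
        raise ValueError(f"not a crc muxer output line: {line!r}")
    return int(m.group(1), 16)

# Two runs produced identical decoded frames iff their CRC lines agree.
same = parse_crc("CRC=0xd9a95aab") == parse_crc("CRC=0xd9a95aab")
```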
For example to compute the CRC of the input, and store it in the file
@file{out.crc}:
@example
ffmpeg -i INPUT -f crc out.crc
@end example

You can print the CRC to stdout with the command:
@example
ffmpeg -i INPUT -f crc -
@end example
You can select the output format of each frame with @command{ffmpeg} by
specifying the audio and video codec and format. For example to
compute the CRC of the input audio converted to PCM unsigned 8-bit
and the input video converted to MPEG-2 video, use the command:
@example
ffmpeg -i INPUT -c:a pcm_u8 -c:v mpeg2video -f crc -
@end example

See also the @ref{framecrc} muxer.
@anchor{framecrc}
@section framecrc

Per-packet CRC (Cyclic Redundancy Check) testing format.

This muxer computes and prints the Adler-32 CRC for each audio
and video packet. By default audio frames are converted to signed
16-bit raw audio and video frames to raw video before computing the
CRC.

The output of the muxer consists of a line for each audio and video
packet of the form:
@example
@var{stream_index}, @var{packet_dts}, @var{packet_pts}, @var{packet_duration}, @var{packet_size}, 0x@var{CRC}
@end example

@var{CRC} is a hexadecimal number 0-padded to 8 digits containing the
CRC of the packet.
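For scripting around this output, one line can be split into its fields. A minimal sketch in Python (the type and field names are ours, chosen to mirror the @var{...} placeholders above):

```python
from typing import NamedTuple

class PacketChecksum(NamedTuple):
    stream_index: int
    dts: int
    pts: int
    duration: int
    size: int
    crc: int

def parse_framecrc_line(line: str) -> PacketChecksum:
    """Split one 'index, dts, pts, duration, size, 0xCRC' framecrc line."""
    idx, dts, pts, dur, size, crc = (f.strip() for f in line.split(","))
    return PacketChecksum(int(idx), int(dts), int(pts), int(dur), int(size),
                          int(crc, 16))
```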
For example to compute the CRC of the audio and video frames in
@file{INPUT}, converted to raw audio and video packets, and store it
in the file @file{out.crc}:
@example
ffmpeg -i INPUT -f framecrc out.crc
@end example

To print the information to stdout, use the command:
@example
ffmpeg -i INPUT -f framecrc -
@end example
With @command{ffmpeg}, you can select the output format to which the
audio and video frames are encoded before computing the CRC for each
packet by specifying the audio and video codec. For example, to
compute the CRC of each decoded input audio frame converted to PCM
unsigned 8-bit and of each decoded input video frame converted to
MPEG-2 video, use the command:
@example
ffmpeg -i INPUT -c:a pcm_u8 -c:v mpeg2video -f framecrc -
@end example

See also the @ref{crc} muxer.
@anchor{framemd5}
@section framemd5

Per-packet MD5 testing format.

This muxer computes and prints the MD5 hash for each audio
and video packet. By default audio frames are converted to signed
16-bit raw audio and video frames to raw video before computing the
hash.

The output of the muxer consists of a line for each audio and video
packet of the form:
@example
@var{stream_index}, @var{packet_dts}, @var{packet_pts}, @var{packet_duration}, @var{packet_size}, @var{MD5}
@end example

@var{MD5} is a hexadecimal number representing the computed MD5 hash
for the packet.
For example to compute the MD5 of the audio and video frames in
@file{INPUT}, converted to raw audio and video packets, and store it
in the file @file{out.md5}:
@example
ffmpeg -i INPUT -f framemd5 out.md5
@end example

To print the information to stdout, use the command:
@example
ffmpeg -i INPUT -f framemd5 -
@end example

See also the @ref{md5} muxer.
@section ico

ICO file muxer.

Microsoft's icon file format (ICO) has some strict limitations that should be noted:

@itemize
@item
Size cannot exceed 256 pixels in any dimension

@item
Only BMP and PNG images can be stored

@item
If a BMP image is used, it must be one of the following pixel formats:
@example
BMP Bit Depth      FFmpeg Pixel Format
1bit               pal8
4bit               pal8
8bit               pal8
16bit              rgb555le
24bit              bgr24
32bit              bgra
@end example

@item
If a BMP image is used, it must use the BITMAPINFOHEADER DIB header

@item
If a PNG image is used, it must use the rgba pixel format
@end itemize
@anchor{image2}
@section image2

Image file muxer.

The image file muxer writes video frames to image files.

The output filenames are specified by a pattern, which can be used to
produce sequentially numbered series of files.
The pattern may contain the string "%d" or "%0@var{N}d", which
specifies the position of the characters representing a numbering in
the filenames. If the form "%0@var{N}d" is used, the string
representing the number in each filename is 0-padded to @var{N}
digits. The literal character '%' can be specified in the pattern with
the string "%%".

If the pattern contains "%d" or "%0@var{N}d", the first filename of
the file list specified will contain the number 1, all the following
numbers will be sequential.
The pattern may contain a suffix which is used to automatically
determine the format of the image files to write.

For example the pattern "img-%03d.bmp" will specify a sequence of
filenames of the form @file{img-001.bmp}, @file{img-002.bmp}, ...,
@file{img-010.bmp}, etc.
The pattern "img%%-%d.jpg" will specify a sequence of filenames of the
form @file{img%-1.jpg}, @file{img%-2.jpg}, ..., @file{img%-10.jpg},
etc.
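Incidentally, Python's printf-style formatting follows the same rules for "%d", "%0@var{N}d" and the "%%" escape, so the expansion described above can be sketched as follows (the @code{nth_filename} helper is ours, for illustration only):

```python
def nth_filename(pattern: str, n: int) -> str:
    """Expand an image2-style pattern for the n-th output file.

    image2 numbering starts at 1; Python's %-formatting happens to
    implement the same "%d" / "%0Nd" / "%%" rules described above.
    """
    return pattern % n

first = nth_filename("img-%03d.bmp", 1)   # "img-001.bmp"
tenth = nth_filename("img%%-%d.jpg", 10)  # "img%-10.jpg"
```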
The following example shows how to use @command{ffmpeg} for creating a
sequence of files @file{img-001.jpeg}, @file{img-002.jpeg}, ...,
taking one image every second from the input video:
@example
ffmpeg -i in.avi -vsync 1 -r 1 -f image2 'img-%03d.jpeg'
@end example
Note that with @command{ffmpeg}, if the format is not specified with the
@code{-f} option and the output filename specifies an image file
format, the image2 muxer is automatically selected, so the previous
command can be written as:
@example
ffmpeg -i in.avi -vsync 1 -r 1 'img-%03d.jpeg'
@end example
Note also that the pattern need not contain "%d" or
"%0@var{N}d"; for example, to create a single image file
@file{img.jpeg} from the input video you can employ the command:
@example
ffmpeg -i in.avi -f image2 -frames:v 1 img.jpeg
@end example
The image muxer supports the .Y.U.V image file format. This format is
special in that each image frame consists of three files, one for
each of the YUV420P components. To read or write this image file format,
specify the name of the '.Y' file. The muxer will automatically open the
'.U' and '.V' files as required.
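The naming convention can be sketched as follows (Python, our choice of language; the helper is illustrative and does no I/O, whereas the muxer itself opens the files):

```python
def yuv_component_files(y_name: str) -> tuple:
    """Given the '.Y' filename, derive the three per-component filenames."""
    if not y_name.endswith(".Y"):
        raise ValueError("expected a filename ending in '.Y'")
    base = y_name[:-len(".Y")]  # strip the trailing ".Y"
    return (y_name, base + ".U", base + ".V")
```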
@anchor{md5}
@section md5

MD5 testing format.

This muxer computes and prints the MD5 hash of all the input audio
and video frames. By default audio frames are converted to signed
16-bit raw audio and video frames to raw video before computing the
hash.

The output of the muxer consists of a single line of the form:
MD5=@var{MD5}, where @var{MD5} is a hexadecimal number representing
the computed MD5 hash.
For example to compute the MD5 hash of the input converted to raw
audio and video, and store it in the file @file{out.md5}:
@example
ffmpeg -i INPUT -f md5 out.md5
@end example

You can print the MD5 to stdout with the command:
@example
ffmpeg -i INPUT -f md5 -
@end example

See also the @ref{framemd5} muxer.
@section MOV/MP4/ISMV

The mov/mp4/ismv muxer supports fragmentation. Normally, a MOV/MP4
file has all the metadata about all packets stored in one location
(written at the end of the file, it can be moved to the start for
better playback by adding @var{faststart} to the @var{movflags}, or
using the @command{qt-faststart} tool). A fragmented
file consists of a number of fragments, where packets and metadata
about these packets are stored together. Writing a fragmented
file has the advantage that the file is decodable even if the
writing is interrupted (while a normal MOV/MP4 is undecodable if
it is not properly finished), and it requires less memory when writing
very long files (since writing normal MOV/MP4 files stores info about
every single packet in memory until the file is closed). The downside
is that it is less compatible with other applications.
Fragmentation is enabled by setting one of the AVOptions that define
how to cut the file into fragments:

@table @option
@item -moov_size @var{bytes}
Reserves space for the moov atom at the beginning of the file instead of placing the
moov atom at the end. If the space reserved is insufficient, muxing will fail.
@item -movflags frag_keyframe
Start a new fragment at each video keyframe.
@item -frag_duration @var{duration}
Create fragments that are @var{duration} microseconds long.
@item -frag_size @var{size}
Create fragments that contain up to @var{size} bytes of payload data.
@item -movflags frag_custom
Allow the caller to manually choose when to cut fragments, by
calling @code{av_write_frame(ctx, NULL)} to write a fragment with
the packets written so far. (This is only useful with other
applications integrating libavformat, not from @command{ffmpeg}.)
@item -min_frag_duration @var{duration}
Don't create fragments that are shorter than @var{duration} microseconds long.
@end table

If more than one condition is specified, fragments are cut when
one of the specified conditions is fulfilled. The exception to this is
@code{-min_frag_duration}, which has to be fulfilled for any of the other
conditions to apply.
Additionally, the way the output file is written can be adjusted
through a few other options:

@table @option
@item -movflags empty_moov
Write an initial moov atom directly at the start of the file, without
describing any samples in it. Generally, an mdat/moov pair is written
at the start of the file, as a normal MOV/MP4 file, containing only
a short portion of the file. With this option set, there is no initial
mdat atom, and the moov atom only describes the tracks but has
a zero duration.

Files written with this option set do not work in QuickTime.
This option is implicitly set when writing ismv (Smooth Streaming) files.
@item -movflags separate_moof
Write a separate moof (movie fragment) atom for each track. Normally,
packets for all tracks are written in a moof atom (which is slightly
more efficient), but with this option set, the muxer writes one moof/mdat
pair for each track, making it easier to separate tracks.

This option is implicitly set when writing ismv (Smooth Streaming) files.
@item -movflags faststart
Run a second pass moving the moov atom on top of the file. This
operation can take a while, and will not work in various situations such
as fragmented output, thus it is not enabled by default.
@end table
Smooth Streaming content can be pushed in real time to a publishing
point on IIS with this muxer. Example:
@example
ffmpeg -re @var{<normal input/transcoding options>} -movflags isml+frag_keyframe -f ismv http://server/publishingpoint.isml/Streams(Encoder1)
@end example
@section mpegts

MPEG transport stream muxer.

This muxer implements ISO 13818-1 and part of ETSI EN 300 468.

The muxer options are:

@table @option
@item -mpegts_original_network_id @var{number}
Set the original_network_id (default 0x0001). This is the unique identifier
of a network in DVB. Its main use is in the unique identification of a
service through the path Original_Network_ID, Transport_Stream_ID.
@item -mpegts_transport_stream_id @var{number}
Set the transport_stream_id (default 0x0001). This identifies a
transponder in DVB.
@item -mpegts_service_id @var{number}
Set the service_id (default 0x0001), also known as program in DVB.
@item -mpegts_pmt_start_pid @var{number}
Set the first PID for PMT (default 0x1000, max 0x1f00).
@item -mpegts_start_pid @var{number}
Set the first PID for data packets (default 0x0100, max 0x0f00).
@end table
The recognized metadata settings in the mpegts muxer are @code{service_provider}
and @code{service_name}. If they are not set the default for
@code{service_provider} is "FFmpeg" and the default for
@code{service_name} is "Service01".

@example
ffmpeg -i file.mpg -c copy \
     -mpegts_original_network_id 0x1122 \
     -mpegts_transport_stream_id 0x3344 \
     -mpegts_service_id 0x5566 \
     -mpegts_pmt_start_pid 0x1500 \
     -mpegts_start_pid 0x150 \
     -metadata service_provider="Some provider" \
     -metadata service_name="Some Channel" \
     -y out.ts
@end example
@section null

Null muxer.

This muxer does not generate any output file; it is mainly useful for
testing or benchmarking purposes.

For example to benchmark decoding with @command{ffmpeg} you can use the
command:
@example
ffmpeg -benchmark -i INPUT -f null out.null
@end example

Note that the above command does not read or write the @file{out.null}
file, but specifying the output file is required by the @command{ffmpeg}
syntax.

Alternatively you can write the command as:
@example
ffmpeg -benchmark -i INPUT -f null -
@end example
@section matroska

Matroska container muxer.

This muxer implements the matroska and webm container specs.

The recognized metadata settings in this muxer are:

@table @option

@item title=@var{title name}
Name provided to a single track.

@item language=@var{language name}
Specifies the language of the track in the Matroska languages form.

@item stereo_mode=@var{mode}
Stereo 3D video layout of two views in a single video track.
@table @option
@item mono
video is not stereo
@item left_right
Both views are arranged side by side, Left-eye view is on the left
@item bottom_top
Both views are arranged in top-bottom orientation, Left-eye view is at bottom
@item top_bottom
Both views are arranged in top-bottom orientation, Left-eye view is on top
@item checkerboard_rl
Each view is arranged in a checkerboard interleaved pattern, Left-eye view being first
@item checkerboard_lr
Each view is arranged in a checkerboard interleaved pattern, Right-eye view being first
@item row_interleaved_rl
Each view is constituted by a row based interleaving, Right-eye view is first row
@item row_interleaved_lr
Each view is constituted by a row based interleaving, Left-eye view is first row
@item col_interleaved_rl
Both views are arranged in a column based interleaving manner, Right-eye view is first column
@item col_interleaved_lr
Both views are arranged in a column based interleaving manner, Left-eye view is first column
@item anaglyph_cyan_red
All frames are in anaglyph format viewable through red-cyan filters
@item right_left
Both views are arranged side by side, Right-eye view is on the left
@item anaglyph_green_magenta
All frames are in anaglyph format viewable through green-magenta filters
@item block_lr
Both eyes laced in one Block, Left-eye view is first
@item block_rl
Both eyes laced in one Block, Right-eye view is first
@end table
@end table
For example a 3D WebM clip can be created using the following command line:
@example
ffmpeg -i sample_left_right_clip.mpg -an -c:v libvpx -metadata stereo_mode=left_right -y stereo_clip.webm
@end example
@section segment, stream_segment, ssegment

Basic stream segmenter.

The segmenter muxer outputs streams to a number of separate files of nearly
fixed duration. Output filename pattern can be set in a fashion similar to
@ref{image2}.

@code{stream_segment} is a variant of the muxer used to write to
streaming output formats, i.e. which do not require global headers,
and is recommended for outputting e.g. to MPEG transport stream segments.
@code{ssegment} is a shorter alias for @code{stream_segment}.

Every segment starts with a video keyframe, if a video stream is present.
Note that if you want accurate splitting for a video file, you need to
make the input key frames correspond to the exact splitting times
expected by the segmenter, or the segment muxer will start the new
segment with the key frame found next after the specified start
time.

The segment muxer works best with a single constant frame rate video.

Optionally it can generate a list of the created segments, by setting
the option @var{segment_list}. The list type is specified by the
@var{segment_list_type} option.
The segment muxer supports the following options:

@table @option
@item segment_format @var{format}
Override the inner container format, by default it is guessed by the filename
extension.
@item segment_list @var{name}
Generate also a listfile named @var{name}. If not specified no
listfile is generated.
@item segment_list_flags @var{flags}
Set flags affecting the segment list generation.

It currently supports the following flags:
@table @option
@item cache
Allow caching (only affects M3U8 list files).

@item live
Allow live-friendly file generation.

This currently only affects M3U8 lists. In particular, write a fake
EXT-X-TARGETDURATION duration field at the top of the file, based on
the specified @var{segment_time}.
@end table

Default value is @code{cache}.

@item segment_list_size @var{size}
Overwrite the listfile once it reaches @var{size} entries. If 0
the listfile is never overwritten. Default value is 0.
@item segment_list_type @var{type}
Specify the format for the segment list file.

The following values are recognized:
@table @option
@item flat
Generate a flat list for the created segments, one segment per line.

@item csv, ext
Generate a list for the created segments, one segment per line,
each line matching the format (comma-separated values):
@example
@var{segment_filename},@var{segment_start_time},@var{segment_end_time}
@end example

@var{segment_filename} is the name of the output file generated by the
muxer according to the provided pattern. CSV escaping (according to
RFC 4180) is applied if required.

@var{segment_start_time} and @var{segment_end_time} specify
the segment start and end time expressed in seconds.

A list file with the suffix @code{".csv"} or @code{".ext"} will
auto-select this format.

@code{ext} is deprecated in favor of @code{csv}.

@item m3u8
Generate an extended M3U8 file, version 4, compliant with
@url{http://tools.ietf.org/id/draft-pantos-http-live-streaming-08.txt}.

A list file with the suffix @code{".m3u8"} will auto-select this format.
@end table

If not specified the type is guessed from the list file name suffix.
@item segment_time @var{time}
Set segment duration to @var{time}. Default value is "2".
@item segment_time_delta @var{delta}
Specify the accuracy time when selecting the start time for a
segment. Default value is "0".

When delta is specified a key-frame will start a new segment if its
PTS satisfies the relation:
@example
PTS >= start_time - time_delta
@end example

This option is useful when splitting video content, which is always
split at GOP boundaries, in case a key frame is found just before the
specified split time.

In particular it may be used in combination with the @command{ffmpeg} option
@var{force_key_frames}. The key frame times specified by
@var{force_key_frames} may not be set accurately because of rounding
issues, with the consequence that a key frame time may end up set just
before the specified time. For constant frame rate videos, a value of
1/(2*@var{frame_rate}) should address the worst case mismatch between
the specified time and the time set by @var{force_key_frames}.
@item segment_times @var{times}
Specify a list of split points. @var{times} contains a list of comma
separated duration specifications, in increasing order.
@item segment_wrap @var{limit}
Wrap around segment index once it reaches @var{limit}.
@end table
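Since the csv-type list is RFC 4180 compliant, it can be read back with any CSV parser. A sketch in Python (the sample filenames and timestamps are invented for illustration):

```python
import csv
import io

def parse_segment_list(text: str) -> list:
    """Parse a csv-type segment list: filename, start time, end time per line."""
    return [(name, float(start), float(end))
            for name, start, end in csv.reader(io.StringIO(text))]

sample = "out000.nut,0.000000,2.000000\nout001.nut,2.000000,4.083333\n"
segments = parse_segment_list(sample)
```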
Some examples follow.
To remux the content of file @file{in.mkv} to a list of segments
@file{out-000.nut}, @file{out-001.nut}, etc., and write the list of
generated segments to @file{out.list}:
@example
ffmpeg -i in.mkv -codec copy -map 0 -f segment -segment_list out.list out%03d.nut
@end example
As the example above, but segment the input file according to the split
points specified by the @var{segment_times} option:
@example
ffmpeg -i in.mkv -codec copy -map 0 -f segment -segment_list out.csv -segment_times 1,2,3,5,8,13,21 out%03d.nut
@end example
As the example above, but use the @command{ffmpeg} @var{force_key_frames}
option to force key frames in the input at the specified location, together
with the segment option @var{segment_time_delta} to account for
possible rounding operated when setting key frame times:
@example
ffmpeg -i in.mkv -force_key_frames 1,2,3,5,8,13,21 -vcodec mpeg4 -acodec pcm_s16le -map 0 \
-f segment -segment_list out.csv -segment_times 1,2,3,5,8,13,21 -segment_time_delta 0.05 out%03d.nut
@end example

In order to force key frames on the input file, transcoding is
required.
To convert @file{in.mkv} to TS segments using the @code{libx264}
and @code{libfaac} encoders:
@example
ffmpeg -i in.mkv -map 0 -codec:v libx264 -codec:a libfaac -f ssegment -segment_list out.list out%03d.ts
@end example
Segment the input file, and create an M3U8 live playlist (can be used
as live HLS source):
@example
ffmpeg -re -i in.mkv -codec copy -map 0 -f segment -segment_list playlist.m3u8 \
-segment_list_flags +live -segment_time 10 out%03d.mkv
@end example
@section mp3

The MP3 muxer writes a raw MP3 stream with an ID3v2 header at the beginning and
optionally an ID3v1 tag at the end. ID3v2.3 and ID3v2.4 are supported; the
@code{id3v2_version} option controls which one is used. The legacy ID3v1 tag is
not written by default, but may be enabled with the @code{write_id3v1} option.

For seekable output the muxer also writes a Xing frame at the beginning, which
contains the number of frames in the file. It is useful for computing the duration
of VBR files.

The muxer supports writing ID3v2 attached pictures (APIC frames). The pictures
are supplied to the muxer in the form of a video stream with a single packet. There
can be any number of those streams; each will correspond to a single APIC frame.
The stream metadata tags @var{title} and @var{comment} map to APIC
@var{description} and @var{picture type} respectively. See
@url{http://id3.org/id3v2.4.0-frames} for allowed picture types.

Note that the APIC frames must be written at the beginning, so the muxer will
buffer the audio frames until it gets all the pictures. It is therefore advised
to provide the pictures as soon as possible to avoid excessive buffering.
Examples:

Write an mp3 with an ID3v2.3 header and an ID3v1 footer:
@example
ffmpeg -i INPUT -id3v2_version 3 -write_id3v1 1 out.mp3
@end example

Attach a picture to an mp3:
@example
ffmpeg -i input.mp3 -i cover.png -c copy -metadata:s:v title="Album cover" \
-metadata:s:v comment="Cover (Front)" out.mp3
@end example