Protocols are configured elements in FFmpeg which enable access to
resources that require the use of a particular protocol.

When you configure your FFmpeg build, all the supported protocols are
enabled by default. You can list all available ones using the
configure option "--list-protocols".

You can disable all the protocols using the configure option
"--disable-protocols", and selectively enable a protocol using the
option "--enable-protocol=@var{PROTOCOL}", or you can disable a
particular protocol using the option
"--disable-protocol=@var{PROTOCOL}".

The option "-protocols" of the ff* tools will display the list of
supported protocols.

A description of the currently available protocols follows.
@section concat

Physical concatenation protocol.

Read from and seek through many resources in sequence as if they were
a single resource.

A URL accepted by this protocol has the syntax:
@example
concat:@var{URL1}|@var{URL2}|...|@var{URLN}
@end example

where @var{URL1}, @var{URL2}, ..., @var{URLN} are the URLs of the
resources to be concatenated, each one possibly specifying a distinct
protocol.

For example, to read a sequence of files @file{split1.mpeg},
@file{split2.mpeg}, @file{split3.mpeg} with @file{ffplay} use the
command:
@example
ffplay concat:split1.mpeg\|split2.mpeg\|split3.mpeg
@end example

Note that you may need to escape the character "|", which is special
to many shells.
@section file

File access protocol.

Read from or write to a file.

For example, to read from a file @file{input.mpeg} with @file{ffmpeg}
use the command:
@example
ffmpeg -i file:input.mpeg output.mpeg
@end example

The ff* tools default to the file protocol, that is a resource
specified with the name "FILE.mpeg" is interpreted as the URL
"file:FILE.mpeg".
@section http

HTTP (Hyper Text Transfer Protocol).
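For example, to read a remote resource over HTTP with @file{ffplay}
(the server name and path below are only placeholders):
@example
ffplay http://server/path/to/file.mpeg
@end example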
@section mmst

MMS (Microsoft Media Server) protocol over TCP.
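For example, to play a stream with @file{ffplay} (the server and path
are only placeholders):
@example
ffplay mmst://server/path
@end example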
@section mmsh

MMS (Microsoft Media Server) protocol over HTTP.

The required syntax is:
@example
mmsh://@var{server}[:@var{port}][/@var{app}][/@var{playpath}]
@end example
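For example, to play a stream with @file{ffplay} (the server, app and
playpath are only placeholders):
@example
ffplay mmsh://server/app/playpath
@end example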
@section md5

MD5 output protocol.

Computes the MD5 hash of the data to be written, and on close writes
this to the designated output or stdout if none is specified. It can
be used to test muxers without writing an actual file.

Some examples follow:
@example
# Write the MD5 hash of the encoded AVI file to the file output.avi.md5.
ffmpeg -i input.flv -f avi -y md5:output.avi.md5

# Write the MD5 hash of the encoded AVI file to stdout.
ffmpeg -i input.flv -f avi -y md5:
@end example

Note that some formats (typically MOV) require the output protocol to
be seekable, so they will fail with the MD5 output protocol.
@section pipe

UNIX pipe access protocol.

Read and write from UNIX pipes.

The accepted syntax is:
@example
pipe:[@var{number}]
@end example

@var{number} is the number corresponding to the file descriptor of the
pipe (e.g. 0 for stdin, 1 for stdout, 2 for stderr). If @var{number}
is not specified, by default the stdout file descriptor will be used
for writing, stdin for reading.

For example to read from stdin with @file{ffmpeg}:
@example
cat test.wav | ffmpeg -i pipe:0
# ...this is the same as...
cat test.wav | ffmpeg -i pipe:
@end example

For writing to stdout with @file{ffmpeg}:
@example
ffmpeg -i test.wav -f avi pipe:1 | cat > test.avi
# ...this is the same as...
ffmpeg -i test.wav -f avi pipe: | cat > test.avi
@end example

Note that some formats (typically MOV) require the output protocol to
be seekable, so they will fail with the pipe output protocol.
@section rtmp

Real-Time Messaging Protocol.

The Real-Time Messaging Protocol (RTMP) is used for streaming
multimedia content across a TCP/IP network.

The required syntax is:
@example
rtmp://@var{server}[:@var{port}][/@var{app}][/@var{playpath}]
@end example

The accepted parameters are:
@table @option

@item server
The address of the RTMP server.

@item port
The number of the TCP port to use (by default 1935).

@item app
The name of the application to access. It usually corresponds to
the path where the application is installed on the RTMP server
(e.g. @file{/ondemand/}, @file{/flash/live/}, etc.).

@item playpath
The path or name of the resource to play with reference to the
application specified in @var{app}; it may be prefixed by "mp4:".

@end table

For example, to read with @file{ffplay} a multimedia resource named
"sample" from the application "vod" from an RTMP server "myserver":
@example
ffplay rtmp://myserver/vod/sample
@end example
@section rtmp, rtmpe, rtmps, rtmpt, rtmpte

Real-Time Messaging Protocol and its variants supported through
librtmp.

Requires the presence of the librtmp headers and library during
configuration. You need to explicitly configure the build with
"--enable-librtmp". If enabled this will replace the native RTMP
protocol.

This protocol provides most client functions and a few server
functions needed to support RTMP, RTMP tunneled in HTTP (RTMPT),
encrypted RTMP (RTMPE), RTMP over SSL/TLS (RTMPS) and tunneled
variants of these encrypted types (RTMPTE, RTMPTS).

The required syntax is:
@example
@var{rtmp_proto}://@var{server}[:@var{port}][/@var{app}][/@var{playpath}] @var{options}
@end example

where @var{rtmp_proto} is one of the strings "rtmp", "rtmpt", "rtmpe",
"rtmps", "rtmpte", "rtmpts" corresponding to each RTMP variant, and
@var{server}, @var{port}, @var{app} and @var{playpath} have the same
meaning as specified for the RTMP native protocol.
@var{options} contains a list of space-separated options of the form
@var{key}=@var{val}.

See the librtmp manual page (man 3 librtmp) for more information.

For example, to stream a file in real-time to an RTMP server using
@file{ffmpeg}:
@example
ffmpeg -re -i myfile -f flv rtmp://myserver/live/mystream
@end example

To play the same stream using @file{ffplay}:
@example
ffplay "rtmp://myserver/live/mystream live=1"
@end example
@section rtsp

RTSP is not technically a protocol handler in libavformat, it is a demuxer
and muxer. The demuxer supports both normal RTSP (with data transferred
over RTP; this is used by e.g. Apple and Microsoft) and Real-RTSP (with
data transferred over RDT).

The muxer can be used to send a stream using RTSP ANNOUNCE to a server
supporting it (currently Darwin Streaming Server and Mischa Spiegelmock's
RTSP server, @url{http://github.com/revmischa/rtsp-server}).

The required syntax for a RTSP url is:
@example
rtsp://@var{hostname}[:@var{port}]/@var{path}[?@var{options}]
@end example

@var{options} is a @code{&}-separated list. The following options
are supported:

@table @option

@item udp
Use UDP as lower transport protocol.

@item tcp
Use TCP (interleaving within the RTSP control channel) as lower
transport protocol.

@item multicast
Use UDP multicast as lower transport protocol.

@item http
Use HTTP tunneling as lower transport protocol, which is useful for
passing proxies.

@item filter_src
Accept packets only from negotiated peer address and port.
@end table

Multiple lower transport protocols may be specified, in that case they are
tried one at a time (if the setup of one fails, the next one is tried).
For the muxer, only the @code{tcp} and @code{udp} options are supported.

When receiving data over UDP, the demuxer tries to reorder received packets
(since they may arrive out of order, or packets may get lost totally). In
order for this to be enabled, a maximum delay must be specified in the
@code{max_delay} field of AVFormatContext.

When watching multi-bitrate Real-RTSP streams with @file{ffplay}, the
streams to display can be chosen with @code{-vst} @var{n} and
@code{-ast} @var{n} for video and audio respectively, and can be switched
on the fly by pressing @code{v} and @code{a}.

Example command lines:

To watch a stream over UDP, with a max reordering delay of 0.5 seconds:
@example
ffplay -max_delay 500000 rtsp://server/video.mp4?udp
@end example

To watch a stream tunneled over HTTP:
@example
ffplay rtsp://server/video.mp4?http
@end example

To send a stream in realtime to a RTSP server, for others to watch:
@example
ffmpeg -re -i @var{input} -f rtsp -muxdelay 0.1 rtsp://server/live.sdp
@end example
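The @code{tcp} lower transport from the option list above can be
requested in the same way; for example (the server name is only a
placeholder):
@example
ffplay rtsp://server/video.mp4?tcp
@end example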
@section sap

Session Announcement Protocol (RFC 2974). This is not technically a
protocol handler in libavformat, it is a muxer and demuxer.
It is used for signalling of RTP streams, by announcing the SDP for the
streams regularly on a separate port.

@subsection Muxer

The syntax for a SAP url given to the muxer is:
@example
sap://@var{destination}[:@var{port}][?@var{options}]
@end example

The RTP packets are sent to @var{destination} on port @var{port},
or to port 5004 if no port is specified.
@var{options} is a @code{&}-separated list. The following options
are supported:

@table @option

@item announce_addr=@var{address}
Specify the destination IP address for sending the announcements to.
If omitted, the announcements are sent to the commonly used SAP
announcement multicast address 224.2.127.254 (sap.mcast.net), or
ff0e::2:7ffe if @var{destination} is an IPv6 address.

@item announce_port=@var{port}
Specify the port to send the announcements on, defaults to
9875 if not specified.

@item ttl=@var{ttl}
Specify the time to live value for the announcements and RTP packets,
defaults to 255.

@item same_port=@var{0|1}
If set to 1, send all RTP streams on the same port pair. If zero (the
default), all streams are sent on unique ports, with each stream on a
port 2 numbers higher than the previous.
VLC/Live555 requires this to be set to 1, to be able to receive the stream.
The RTP stack in libavformat for receiving requires all streams to be sent
on unique ports.
@end table

Example command lines follow.

To broadcast a stream on the local subnet, for watching in VLC:
@example
ffmpeg -re -i @var{input} -f sap sap://224.0.0.255?same_port=1
@end example

Similarly, for watching in @file{ffplay}:
@example
ffmpeg -re -i @var{input} -f sap sap://224.0.0.255
@end example

And for watching in @file{ffplay}, over IPv6:
@example
ffmpeg -re -i @var{input} -f sap sap://[ff0e::1:2:3:4]
@end example

@subsection Demuxer

The syntax for a SAP url given to the demuxer is:
@example
sap://[@var{address}][:@var{port}]
@end example

@var{address} is the multicast address to listen for announcements on,
if omitted, the default 224.2.127.254 (sap.mcast.net) is used. @var{port}
is the port that is listened on, 9875 if omitted.

The demuxer listens for announcements on the given address and port.
Once an announcement is received, it tries to receive that particular stream.

Example command lines follow.

To play back the first stream announced on the normal SAP multicast address:
@example
ffplay sap://
@end example

To play back the first stream announced on the default IPv6 SAP multicast address:
@example
ffplay sap://[ff0e::2:7ffe]
@end example
@section tcp

Transmission Control Protocol.
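For example, to read a stream from a TCP endpoint with @file{ffplay}
(the host name and port are only placeholders):
@example
ffplay tcp://server:1234
@end example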
@section udp

User Datagram Protocol.

The required syntax for a UDP url is:
@example
udp://@var{hostname}:@var{port}[?@var{options}]
@end example

@var{options} contains a list of @code{&}-separated options of the form
@var{key}=@var{val}.
The list of supported options follows.

@table @option

@item buffer_size=@var{size}
Set the UDP buffer size in bytes.

@item localport=@var{port}
Override the local UDP port to bind with.

@item pkt_size=@var{size}
Set the size in bytes of UDP packets.

@item reuse=@var{1|0}
Explicitly allow or disallow reusing UDP sockets.

@item ttl=@var{ttl}
Set the time to live value (for multicast only).

@item connect=@var{1|0}
Initialize the UDP socket with @code{connect()}. In this case, the
destination address can't be changed with udp_set_remote_url later.
If the destination address isn't known at the start, this option can
be specified in udp_set_remote_url, too.
This allows finding out the source address for the packets with getsockname,
and makes writes return with AVERROR(ECONNREFUSED) if "destination
unreachable" is received.
For receiving, this gives the benefit of only receiving packets from
the specified peer address/port.
@end table

Some usage examples of the udp protocol with @file{ffmpeg} follow.

To stream over UDP to a remote endpoint:
@example
ffmpeg -i @var{input} -f @var{format} udp://@var{hostname}:@var{port}
@end example

To stream in mpegts format over UDP using 188 sized UDP packets, using a large input buffer:
@example
ffmpeg -i @var{input} -f mpegts udp://@var{hostname}:@var{port}?pkt_size=188&buffer_size=65535
@end example

To receive over UDP from a remote endpoint:
@example
ffmpeg -i udp://[@var{multicast-address}]:@var{port}
@end example
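The @code{connect} option described above can be combined with the
streaming example; for instance, to stream over a connected UDP socket:
@example
ffmpeg -i @var{input} -f @var{format} udp://@var{hostname}:@var{port}?connect=1
@end example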