diff --git a/doc/protocols.texi b/doc/protocols.texi
index 88aedb930ad..e2d06a06753 100644
--- a/doc/protocols.texi
+++ b/doc/protocols.texi
@@ -1,10 +1,10 @@
 @chapter Protocols
 @c man begin PROTOCOLS

-Protocols are configured elements in FFmpeg which allow to access
+Protocols are configured elements in Libav which allow access to
 resources which require the use of a particular protocol.

-When you configure your FFmpeg build, all the supported protocols are
+When you configure your Libav build, all the supported protocols are
 enabled by default. You can list all available ones using the
 configure option "--list-protocols".

@@ -14,9 +14,17 @@ option "--enable-protocol=@var{PROTOCOL}", or you can disable a
 particular protocol using the option
 "--disable-protocol=@var{PROTOCOL}".

-The option "-protocols" of the ff* tools will display the list of
+The option "-protocols" of the av* tools will display the list of
 supported protocols.

+All protocols accept the following options:
+
+@table @option
+@item rw_timeout
+Maximum time to wait for (network) read/write operations to complete,
+in microseconds.
+@end table
+
 A description of the currently available protocols follows.

 @section concat
@@ -36,10 +44,10 @@ resource to be concatenated, each one possibly specifying a distinct
 protocol.

 For example to read a sequence of files @file{split1.mpeg},
-@file{split2.mpeg}, @file{split3.mpeg} with @file{ffplay} use the
+@file{split2.mpeg}, @file{split3.mpeg} with @command{avplay} use the
 command:
 @example
-ffplay concat:split1.mpeg\|split2.mpeg\|split3.mpeg
+avplay concat:split1.mpeg\|split2.mpeg\|split3.mpeg
 @end example

 Note that you may need to escape the character "|" which is special for
@@ -51,24 +59,144 @@ File access protocol.

 Allow to read from or write to a file.

-For example to read from a file @file{input.mpeg} with @file{ffmpeg}
+For example to read from a file @file{input.mpeg} with @command{avconv}
 use the command:
 @example
-ffmpeg -i file:input.mpeg output.mpeg
+avconv -i file:input.mpeg output.mpeg
 @end example

-The ff* tools default to the file protocol, that is a resource
+The av* tools default to the file protocol, that is, a resource
 specified with the name "FILE.mpeg" is interpreted as the URL
 "file:FILE.mpeg".

+This protocol accepts the following options:
+
+@table @option
+@item follow
+If set to 1, the protocol will retry reading at the end of the file, allowing
+reading files that are still being written. In order for this to terminate,
+you either need to use the rw_timeout option, or use the interrupt callback
+(for API users).
+
+@end table
+
 @section gopher

 Gopher protocol.

+@section hls
+
+Read an Apple HTTP Live Streaming compliant segmented stream as
+a uniform one. The M3U8 playlists describing the segments can be
+remote HTTP resources or local files, accessed using the standard
+file protocol.
+The nested protocol is declared by specifying
+"+@var{proto}" after the hls URI scheme name, where @var{proto}
+is either "file" or "http".
+
+@example
+hls+http://host/path/to/remote/resource.m3u8
+hls+file://path/to/local/resource.m3u8
+@end example
+
+Using this protocol is discouraged - the hls demuxer should work
+just as well (if not, please report the issues) and is more complete.
+To use the hls demuxer instead, simply use the direct URLs to the +m3u8 files. + @section http HTTP (Hyper Text Transfer Protocol). +This protocol accepts the following options: + +@table @option +@item chunked_post +If set to 1 use chunked Transfer-Encoding for posts, default is 1. + +@item content_type +Set a specific content type for the POST messages. + +@item headers +Set custom HTTP headers, can override built in default headers. The +value must be a string encoding the headers. + +@item multiple_requests +Use persistent connections if set to 1, default is 0. + +@item post_data +Set custom HTTP post data. + +@item user_agent +Override the User-Agent header. If not specified a string of the form +"Lavf/" will be used. + +@item mime_type +Export the MIME type. + +@item icy +If set to 1 request ICY (SHOUTcast) metadata from the server. If the server +supports this, the metadata has to be retrieved by the application by reading +the @option{icy_metadata_headers} and @option{icy_metadata_packet} options. +The default is 1. + +@item icy_metadata_headers +If the server supports ICY metadata, this contains the ICY-specific HTTP reply +headers, separated by newline characters. + +@item icy_metadata_packet +If the server supports ICY metadata, and @option{icy} was set to 1, this +contains the last non-empty metadata packet sent by the server. It should be +polled in regular intervals by applications interested in mid-stream metadata +updates. + +@item offset +Set initial byte offset. + +@item end_offset +Try to limit the request to bytes preceding this offset. +@end table + +@section Icecast + +Icecast (stream to Icecast servers) + +This protocol accepts the following options: + +@table @option +@item ice_genre +Set the stream genre. + +@item ice_name +Set the stream name. + +@item ice_description +Set the stream description. + +@item ice_url +Set the stream website URL. + +@item ice_public +Set if the stream should be public or not. +The default is 0 (not public). + +@item user_agent +Override the User-Agent header. If not specified a string of the form +"Lavf/" will be used. + +@item password +Set the Icecast mountpoint password. + +@item content_type +Set the stream content type. This must be set if it is different from +audio/mpeg. + +@item legacy_icecast +This enables support for Icecast versions < 2.4.0, that do not support the +HTTP PUT method but the SOURCE method. + +@end table + @section mmst MMS (Microsoft Media Server) protocol over TCP. @@ -93,10 +221,10 @@ be used to test muxers without writing an actual file. Some examples follow. @example # Write the MD5 hash of the encoded AVI file to the file output.avi.md5. -ffmpeg -i input.flv -f avi -y md5:output.avi.md5 +avconv -i input.flv -f avi -y md5:output.avi.md5 # Write the MD5 hash of the encoded AVI file to stdout. -ffmpeg -i input.flv -f avi -y md5: +avconv -i input.flv -f avi -y md5: @end example Note that some formats (typically MOV) require the output protocol to @@ -118,18 +246,18 @@ pipe (e.g. 0 for stdin, 1 for stdout, 2 for stderr). If @var{number} is not specified, by default the stdout file descriptor will be used for writing, stdin for reading. -For example to read from stdin with @file{ffmpeg}: +For example to read from stdin with @command{avconv}: @example -cat test.wav | ffmpeg -i pipe:0 +cat test.wav | avconv -i pipe:0 # ...this is the same as... 
-cat test.wav | ffmpeg -i pipe: +cat test.wav | avconv -i pipe: @end example -For writing to stdout with @file{ffmpeg}: +For writing to stdout with @command{avconv}: @example -ffmpeg -i test.wav -f avi pipe:1 | cat > test.avi +avconv -i test.wav -f avi pipe:1 | cat > test.avi # ...this is the same as... -ffmpeg -i test.wav -f avi pipe: | cat > test.avi +avconv -i test.wav -f avi pipe: | cat > test.avi @end example Note that some formats (typically MOV), require the output protocol to @@ -139,17 +267,23 @@ be seekable, so they will fail with the pipe output protocol. Real-Time Messaging Protocol. -The Real-Time Messaging Protocol (RTMP) is used for streaming multime‐ -dia content across a TCP/IP network. +The Real-Time Messaging Protocol (RTMP) is used for streaming multimedia +content across a TCP/IP network. The required syntax is: @example -rtmp://@var{server}[:@var{port}][/@var{app}][/@var{playpath}] +rtmp://[@var{username}:@var{password}@@]@var{server}[:@var{port}][/@var{app}][/@var{instance}][/@var{playpath}] @end example The accepted parameters are: @table @option +@item username +An optional username (mostly for publishing). + +@item password +An optional password (mostly for publishing). + @item server The address of the RTMP server. @@ -159,27 +293,151 @@ The number of the TCP port to use (by default is 1935). @item app It is the name of the application to access. It usually corresponds to the path where the application is installed on the RTMP server -(e.g. @file{/ondemand/}, @file{/flash/live/}, etc.). +(e.g. @file{/ondemand/}, @file{/flash/live/}, etc.). You can override +the value parsed from the URI through the @code{rtmp_app} option, too. @item playpath It is the path or name of the resource to play with reference to the -application specified in @var{app}, may be prefixed by "mp4:". +application specified in @var{app}, may be prefixed by "mp4:". You +can override the value parsed from the URI through the @code{rtmp_playpath} +option, too. + +@item listen +Act as a server, listening for an incoming connection. + +@item timeout +Maximum time to wait for the incoming connection. Implies listen. +@end table + +Additionally, the following parameters can be set via command line options +(or in code via @code{AVOption}s): +@table @option + +@item rtmp_app +Name of application to connect on the RTMP server. This option +overrides the parameter specified in the URI. + +@item rtmp_buffer +Set the client buffer time in milliseconds. The default is 3000. + +@item rtmp_conn +Extra arbitrary AMF connection parameters, parsed from a string, +e.g. like @code{B:1 S:authMe O:1 NN:code:1.23 NS:flag:ok O:0}. +Each value is prefixed by a single character denoting the type, +B for Boolean, N for number, S for string, O for object, or Z for null, +followed by a colon. For Booleans the data must be either 0 or 1 for +FALSE or TRUE, respectively. Likewise for Objects the data must be 0 or +1 to end or begin an object, respectively. Data items in subobjects may +be named, by prefixing the type with 'N' and specifying the name before +the value (i.e. @code{NB:myFlag:1}). This option may be used multiple +times to construct arbitrary AMF sequences. + +@item rtmp_flashver +Version of the Flash plugin used to run the SWF player. The default +is LNX 9,0,124,2. (When publishing, the default is FMLE/3.0 (compatible; +).) + +@item rtmp_flush_interval +Number of packets flushed in the same request (RTMPT only). The default +is 10. + +@item rtmp_live +Specify that the media is a live stream. 
No resuming or seeking in +live streams is possible. The default value is @code{any}, which means the +subscriber first tries to play the live stream specified in the +playpath. If a live stream of that name is not found, it plays the +recorded stream. The other possible values are @code{live} and +@code{recorded}. + +@item rtmp_pageurl +URL of the web page in which the media was embedded. By default no +value will be sent. + +@item rtmp_playpath +Stream identifier to play or to publish. This option overrides the +parameter specified in the URI. + +@item rtmp_subscribe +Name of live stream to subscribe to. By default no value will be sent. +It is only sent if the option is specified or if rtmp_live +is set to live. + +@item rtmp_swfhash +SHA256 hash of the decompressed SWF file (32 bytes). + +@item rtmp_swfsize +Size of the decompressed SWF file, required for SWFVerification. + +@item rtmp_swfurl +URL of the SWF player for the media. By default no value will be sent. + +@item rtmp_swfverify +URL to player swf file, compute hash/size automatically. + +@item rtmp_tcurl +URL of the target stream. Defaults to proto://host[:port]/app. @end table -For example to read with @file{ffplay} a multimedia resource named +For example to read with @command{avplay} a multimedia resource named "sample" from the application "vod" from an RTMP server "myserver": @example -ffplay rtmp://myserver/vod/sample +avplay rtmp://myserver/vod/sample +@end example + +To publish to a password protected server, passing the playpath and +app names separately: +@example +avconv -re -i -f flv -rtmp_playpath some/long/path -rtmp_app long/app/name rtmp://username:password@@myserver/ @end example -@section rtmp, rtmpe, rtmps, rtmpt, rtmpte +@section rtmpe + +Encrypted Real-Time Messaging Protocol. + +The Encrypted Real-Time Messaging Protocol (RTMPE) is used for +streaming multimedia content within standard cryptographic primitives, +consisting of Diffie-Hellman key exchange and HMACSHA256, generating +a pair of RC4 keys. + +@section rtmps + +Real-Time Messaging Protocol over a secure SSL connection. + +The Real-Time Messaging Protocol (RTMPS) is used for streaming +multimedia content across an encrypted connection. + +@section rtmpt + +Real-Time Messaging Protocol tunneled through HTTP. + +The Real-Time Messaging Protocol tunneled through HTTP (RTMPT) is used +for streaming multimedia content within HTTP requests to traverse +firewalls. + +@section rtmpte + +Encrypted Real-Time Messaging Protocol tunneled through HTTP. + +The Encrypted Real-Time Messaging Protocol tunneled through HTTP (RTMPTE) +is used for streaming multimedia content within HTTP requests to traverse +firewalls. + +@section rtmpts + +Real-Time Messaging Protocol tunneled through HTTPS. + +The Real-Time Messaging Protocol tunneled through HTTPS (RTMPTS) is used +for streaming multimedia content within HTTPS requests to traverse +firewalls. + +@section librtmp rtmp, rtmpe, rtmps, rtmpt, rtmpte Real-Time Messaging Protocol and its variants supported through librtmp. Requires the presence of the librtmp headers and library during -configuration. You need to explicitely configure the build with +configuration. You need to explicitly configure the build with "--enable-librtmp". If enabled this will replace the native RTMP protocol. @@ -203,14 +461,14 @@ meaning as specified for the RTMP native protocol. See the librtmp manual page (man 3 librtmp) for more information. 
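As a reminder of the build requirement mentioned above, a typical configure
invocation enabling the librtmp backend is sketched below (any additional
configure flags your setup needs are assumed to be appended as usual, and the
librtmp headers and library must already be installed where the compiler can
find them):

@example
./configure --enable-librtmp
@end example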
For example, to stream a file in real-time to an RTMP server using -@file{ffmpeg}: +@command{avconv}: @example -ffmpeg -re -i myfile -f flv rtmp://myserver/live/mystream +avconv -re -i myfile -f flv rtmp://myserver/live/mystream @end example -To play the same stream using @file{ffplay}: +To play the same stream using @command{avplay}: @example -ffplay "rtmp://myserver/live/mystream live=1" +avplay "rtmp://myserver/live/mystream live=1" @end example @section rtp @@ -226,16 +484,19 @@ data transferred over RDT). The muxer can be used to send a stream using RTSP ANNOUNCE to a server supporting it (currently Darwin Streaming Server and Mischa Spiegelmock's -RTSP server, @url{http://github.com/revmischa/rtsp-server}). +@uref{http://github.com/revmischa/rtsp-server, RTSP server}). The required syntax for a RTSP url is: @example -rtsp://@var{hostname}[:@var{port}]/@var{path}[?@var{options}] +rtsp://@var{hostname}[:@var{port}]/@var{path} @end example -@var{options} is a @code{&}-separated list. The following options +The following options (set on the @command{avconv}/@command{avplay} command +line, or set in code via @code{AVOption}s or in @code{avformat_open_input}), are supported: +Flags for @code{rtsp_transport}: + @table @option @item udp @@ -245,7 +506,7 @@ Use UDP as lower transport protocol. Use TCP (interleaving within the RTSP control channel) as lower transport protocol. -@item multicast +@item udp_multicast Use UDP multicast as lower transport protocol. @item http @@ -257,12 +518,21 @@ Multiple lower transport protocols may be specified, in that case they are tried one at a time (if the setup of one fails, the next one is tried). For the muxer, only the @code{tcp} and @code{udp} options are supported. +Flags for @code{rtsp_flags}: + +@table @option +@item filter_src +Accept packets only from negotiated peer address and port. +@item listen +Act as a server, listening for an incoming connection. +@end table + When receiving data over UDP, the demuxer tries to reorder received packets -(since they may arrive out of order, or packets may get lost totally). In -order for this to be enabled, a maximum delay must be specified in the -@code{max_delay} field of AVFormatContext. +(since they may arrive out of order, or packets may get lost totally). This +can be disabled by setting the maximum demuxing delay to zero (via +the @code{max_delay} field of AVFormatContext). -When watching multi-bitrate Real-RTSP streams with @file{ffplay}, the +When watching multi-bitrate Real-RTSP streams with @command{avplay}, the streams to display can be chosen with @code{-vst} @var{n} and @code{-ast} @var{n} for video and audio respectively, and can be switched on the fly by pressing @code{v} and @code{a}. 
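For instance, a hypothetical invocation selecting the second video and audio
variants of such a multi-bitrate stream could look like the following sketch
(the server address, stream path and stream indices are placeholders):

@example
avplay -vst 1 -ast 1 rtsp://@var{server}/@var{path}
@end example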
@@ -272,24 +542,325 @@ Example command lines: To watch a stream over UDP, with a max reordering delay of 0.5 seconds: @example -ffplay -max_delay 500000 rtsp://server/video.mp4?udp +avplay -max_delay 500000 -rtsp_transport udp rtsp://server/video.mp4 @end example To watch a stream tunneled over HTTP: @example -ffplay rtsp://server/video.mp4?http +avplay -rtsp_transport http rtsp://server/video.mp4 @end example To send a stream in realtime to a RTSP server, for others to watch: @example -ffmpeg -re -i @var{input} -f rtsp -muxdelay 0.1 rtsp://server/live.sdp +avconv -re -i @var{input} -f rtsp -muxdelay 0.1 rtsp://server/live.sdp +@end example + +To receive a stream in realtime: + +@example +avconv -rtsp_flags listen -i rtsp://ownaddress/live.sdp @var{output} @end example +@section sap + +Session Announcement Protocol (RFC 2974). This is not technically a +protocol handler in libavformat, it is a muxer and demuxer. +It is used for signalling of RTP streams, by announcing the SDP for the +streams regularly on a separate port. + +@subsection Muxer + +The syntax for a SAP url given to the muxer is: +@example +sap://@var{destination}[:@var{port}][?@var{options}] +@end example + +The RTP packets are sent to @var{destination} on port @var{port}, +or to port 5004 if no port is specified. +@var{options} is a @code{&}-separated list. The following options +are supported: + +@table @option + +@item announce_addr=@var{address} +Specify the destination IP address for sending the announcements to. +If omitted, the announcements are sent to the commonly used SAP +announcement multicast address 224.2.127.254 (sap.mcast.net), or +ff0e::2:7ffe if @var{destination} is an IPv6 address. + +@item announce_port=@var{port} +Specify the port to send the announcements on, defaults to +9875 if not specified. + +@item ttl=@var{ttl} +Specify the time to live value for the announcements and RTP packets, +defaults to 255. + +@item same_port=@var{0|1} +If set to 1, send all RTP streams on the same port pair. If zero (the +default), all streams are sent on unique ports, with each stream on a +port 2 numbers higher than the previous. +VLC/Live555 requires this to be set to 1, to be able to receive the stream. +The RTP stack in libavformat for receiving requires all streams to be sent +on unique ports. +@end table + +Example command lines follow. + +To broadcast a stream on the local subnet, for watching in VLC: + +@example +avconv -re -i @var{input} -f sap sap://224.0.0.255?same_port=1 +@end example + +Similarly, for watching in avplay: + +@example +avconv -re -i @var{input} -f sap sap://224.0.0.255 +@end example + +And for watching in avplay, over IPv6: + +@example +avconv -re -i @var{input} -f sap sap://[ff0e::1:2:3:4] +@end example + +@subsection Demuxer + +The syntax for a SAP url given to the demuxer is: +@example +sap://[@var{address}][:@var{port}] +@end example + +@var{address} is the multicast address to listen for announcements on, +if omitted, the default 224.2.127.254 (sap.mcast.net) is used. @var{port} +is the port that is listened on, 9875 if omitted. + +The demuxers listens for announcements on the given address and port. +Once an announcement is received, it tries to receive that particular stream. + +Example command lines follow. 
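To record the first announced stream to a local file instead of playing it,
a minimal sketch (stream copy is assumed to be acceptable and @var{output} is
any output file name):

@example
avconv -i sap:// -c copy @var{output}
@end example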
+ +To play back the first stream announced on the normal SAP multicast address: + +@example +avplay sap:// +@end example + +To play back the first stream announced on one the default IPv6 SAP multicast address: + +@example +avplay sap://[ff0e::2:7ffe] +@end example + +@section srt + +Haivision Secure Reliable Transport Protocol via libsrt. + +The supported syntax for a SRT URL is: +@example +srt://@var{hostname}:@var{port}[?@var{options}] +@end example + +@var{options} contains a list of &-separated options of the form +@var{key}=@var{val}. + +or + +@example +@var{options} srt://@var{hostname}:@var{port} +@end example + +@var{options} contains a list of '-@var{key} @var{val}' +options. + +This protocol accepts the following options. + +@table @option +@item connect_timeout +Connection timeout; SRT cannot connect for RTT > 1500 msec +(2 handshake exchanges) with the default connect timeout of +3 seconds. This option applies to the caller and rendezvous +connection modes. The connect timeout is 10 times the value +set for the rendezvous mode (which can be used as a +workaround for this connection problem with earlier versions). + +@item ffs=@var{bytes} +Flight Flag Size (Window Size), in bytes. FFS is actually an +internal parameter and you should set it to not less than +@option{recv_buffer_size} and @option{mss}. The default value +is relatively large, therefore unless you set a very large receiver buffer, +you do not need to change this option. Default value is 25600. + +@item inputbw=@var{bytes/seconds} +Sender nominal input rate, in bytes per seconds. Used along with +@option{oheadbw}, when @option{maxbw} is set to relative (0), to +calculate maximum sending rate when recovery packets are sent +along with the main media stream: +@option{inputbw} * (100 + @option{oheadbw}) / 100 +if @option{inputbw} is not set while @option{maxbw} is set to +relative (0), the actual input rate is evaluated inside +the library. Default value is 0. + +@item iptos=@var{tos} +IP Type of Service. Applies to sender only. Default value is 0xB8. + +@item ipttl=@var{ttl} +IP Time To Live. Applies to sender only. Default value is 64. + +@item listen_timeout +Set socket listen timeout. + +@item maxbw=@var{bytes/seconds} +Maximum sending bandwidth, in bytes per seconds. +-1 infinite (CSRTCC limit is 30mbps) +0 relative to input rate (see @option{inputbw}) +>0 absolute limit value +Default value is 0 (relative) + +@item mode=@var{caller|listener|rendezvous} +Connection mode. +@option{caller} opens client connection. +@option{listener} starts server to listen for incoming connections. +@option{rendezvous} use Rendez-Vous connection mode. +Default value is caller. + +@item mss=@var{bytes} +Maximum Segment Size, in bytes. Used for buffer allocation +and rate calculation using a packet counter assuming fully +filled packets. The smallest MSS between the peers is +used. This is 1500 by default in the overall internet. +This is the maximum size of the UDP packet and can be +only decreased, unless you have some unusual dedicated +network settings. Default value is 1500. + +@item nakreport=@var{1|0} +If set to 1, Receiver will send `UMSG_LOSSREPORT` messages +periodically until a lost packet is retransmitted or +intentionally dropped. Default value is 1. + +@item oheadbw=@var{percents} +Recovery bandwidth overhead above input rate, in percents. +See @option{inputbw}. Default value is 25%. + +@item passphrase=@var{string} +HaiCrypt Encryption/Decryption Passphrase string, length +from 10 to 79 characters. 
The passphrase is the shared +secret between the sender and the receiver. It is used +to generate the Key Encrypting Key using PBKDF2 +(Password-Based Key Derivation Function). It is used +only if @option{pbkeylen} is non-zero. It is used on +the receiver only if the received data is encrypted. +The configured passphrase cannot be recovered (write-only). + +@item pbkeylen=@var{bytes} +Sender encryption key length, in bytes. +Only can be set to 0, 16, 24 and 32. +Enable sender encryption if not 0. +Not required on receiver (set to 0), +key size obtained from sender in HaiCrypt handshake. +Default value is 0. + +@item recv_buffer_size=@var{bytes} +Set receive buffer size, expressed in bytes. + +@item send_buffer_size=@var{bytes} +Set send buffer size, expressed in bytes. + +@item rw_timeout +Set raise error timeout for read/write optations. + +This option is only relevant in read mode: +if no data arrived in more than this time +interval, raise error. + +@item tlpktdrop=@var{1|0} +Too-late Packet Drop. When enabled on receiver, it skips +missing packets that have not been delivered in time and +delivers the following packets to the application when +their time-to-play has come. It also sends a fake ACK to +the sender. When enabled on sender and enabled on the +receiving peer, the sender drops the older packets that +have no chance of being delivered in time. It was +automatically enabled in the sender if the receiver +supports it. + +@item tsbpddelay +Timestamp-based Packet Delivery Delay. +Used to absorb burst of missed packet retransmission. + +@end table + +For more information see: @url{https://github.com/Haivision/srt}. + @section tcp -Trasmission Control Protocol. +Transmission Control Protocol. + +The required syntax for a TCP url is: +@example +tcp://@var{hostname}:@var{port}[?@var{options}] +@end example + +@table @option + +@item listen +Listen for an incoming connection + +@example +avconv -i @var{input} -f @var{format} tcp://@var{hostname}:@var{port}?listen +avplay tcp://@var{hostname}:@var{port} +@end example + +@end table + +@section tls + +Transport Layer Security (TLS) / Secure Sockets Layer (SSL) + +The required syntax for a TLS url is: +@example +tls://@var{hostname}:@var{port} +@end example + +The following parameters can be set via command line options +(or in code via @code{AVOption}s): + +@table @option + +@item ca_file +A file containing certificate authority (CA) root certificates to treat +as trusted. If the linked TLS library contains a default this might not +need to be specified for verification to work, but not all libraries and +setups have defaults built in. + +@item tls_verify=@var{1|0} +If enabled, try to verify the peer that we are communicating with. +Note, if using OpenSSL, this currently only makes sure that the +peer certificate is signed by one of the root certificates in the CA +database, but it does not validate that the certificate actually +matches the host name we are trying to connect to. (With GnuTLS, +the host name is validated as well.) + +This is disabled by default since it requires a CA database to be +provided by the caller in many cases. + +@item cert_file +A file containing a certificate to use in the handshake with the peer. +(When operating as server, in listen mode, this is more often required +by the peer, while client certificates only are mandated in certain +setups.) + +@item key_file +A file containing the private key for the certificate. 
+ +@item listen=@var{1|0} +If enabled, listen for connections on the provided port, and assume +the server role in the handshake instead of the client role. + +@end table @section udp @@ -300,7 +871,7 @@ The required syntax for a UDP url is: udp://@var{hostname}:@var{port}[?@var{options}] @end example -@var{options} contains a list of &-seperated options of the form @var{key}=@var{val}. +@var{options} contains a list of &-separated options of the form @var{key}=@var{val}. Follow the list of supported options. @table @option @@ -311,6 +882,11 @@ set the UDP buffer size in bytes @item localport=@var{port} override the local UDP port to bind with +@item localaddr=@var{addr} +Choose the local IP address. This is useful e.g. if sending multicast +and the host has multiple interfaces, where the user can choose +which interface to send on by specifying the IP address of that interface. + @item pkt_size=@var{size} set the size in bytes of UDP packets @@ -322,27 +898,59 @@ set the time to live value (for multicast only) @item connect=@var{1|0} Initialize the UDP socket with @code{connect()}. In this case, the -destination address can't be changed with udp_set_remote_url later. +destination address can't be changed with ff_udp_set_remote_url later. +If the destination address isn't known at the start, this option can +be specified in ff_udp_set_remote_url, too. This allows finding out the source address for the packets with getsockname, and makes writes return with AVERROR(ECONNREFUSED) if "destination unreachable" is received. +For receiving, this gives the benefit of only receiving packets from +the specified peer address/port. + +@item sources=@var{address}[,@var{address}] +Only receive packets sent to the multicast group from one of the +specified sender IP addresses. + +@item block=@var{address}[,@var{address}] +Ignore packets sent to the multicast group from the specified +sender IP addresses. @end table -Some usage examples of the udp protocol with @file{ffmpeg} follow. +Some usage examples of the udp protocol with @command{avconv} follow. To stream over UDP to a remote endpoint: @example -ffmpeg -i @var{input} -f @var{format} udp://@var{hostname}:@var{port} +avconv -i @var{input} -f @var{format} udp://@var{hostname}:@var{port} @end example To stream in mpegts format over UDP using 188 sized UDP packets, using a large input buffer: @example -ffmpeg -i @var{input} -f mpegts udp://@var{hostname}:@var{port}?pkt_size=188&buffer_size=65535 +avconv -i @var{input} -f mpegts udp://@var{hostname}:@var{port}?pkt_size=188&buffer_size=65535 @end example To receive over UDP from a remote endpoint: @example -ffmpeg -i udp://[@var{multicast-address}]:@var{port} +avconv -i udp://[@var{multicast-address}]:@var{port} @end example +@section unix + +Unix local socket + +The required syntax for a Unix socket URL is: + +@example +unix://@var{filepath} +@end example + +The following parameters can be set via command line options +(or in code via @code{AVOption}s): + +@table @option +@item timeout +Timeout in ms. +@item listen +Create the Unix socket in listening mode. +@end table + @c man end PROTOCOLS