Set close-on-exec on all file descriptors we open.
This is useful when we are about to fork off child processes,
to avoid various sockets etc. leaking into them (without having
to close all of them explicitly).
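The flag can be set with fcntl(); a minimal sketch of the idea (set_cloexec is a hypothetical helper, not necessarily the project's actual function):

```cpp
#include <cassert>
#include <fcntl.h>
#include <unistd.h>

// Mark a file descriptor close-on-exec, so that it is not inherited
// across exec() in forked-off child processes. Returns false on error.
bool set_cloexec(int fd)
{
	int flags = fcntl(fd, F_GETFD);
	if (flags == -1) return false;
	return fcntl(fd, F_SETFD, flags | FD_CLOEXEC) != -1;
}
```

Where available, passing O_CLOEXEC to open() or SOCK_CLOEXEC to socket() sets the flag atomically at creation time, avoiding the window between creation and fcntl() in threaded programs.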
Fix a crash when trying to get HLS fragments from a disconnected stream.
After the last patch, we would try to serve HLS fragments even if the
backend is down, but we had zeroed out the HTTP header, causing an
assertion failure with HTTP/1.1 clients (and an invalid HTTP response
for HTTP/1.0 clients). Fix by keeping the header and setting a special
“unavailable” flag instead (which allows us to keep sending HLS fragments).
Keep the HLS backlog even if the stream header changes.
The typical case here is if we are streaming fMP4 and the encoder
has to restart. The regular (non-HLS) stream has to cut off the
backlog here, but HLS supports multiple headers, so we can add a
discontinuity to keep the backlog seekable.
I haven't actually found a client that will play correctly across
the discontinuity, but hls.js can at least _seek_ across it,
which is already useful.
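In playlist terms (RFC 8216), a header change for fMP4 can be expressed as a new EXT-X-MAP init segment following an EXT-X-DISCONTINUITY tag; a rough sketch with made-up segment names:

```
#EXTM3U
#EXT-X-VERSION:7
#EXT-X-TARGETDURATION:6
#EXT-X-MEDIA-SEQUENCE:100
#EXT-X-MAP:URI="init-1.mp4"
#EXTINF:6.0,
frag-100.m4s
#EXTINF:6.0,
frag-101.m4s
#EXT-X-DISCONTINUITY
#EXT-X-MAP:URI="init-2.mp4"
#EXTINF:6.0,
frag-102.m4s
```

Segments before the discontinuity keep referring to the old init segment, so the backlog stays seekable even though the stream header changed.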
Force input encoding for UDP streams to raw already at config parsing.
Otherwise, it would be counted as src_encoding=metacube at the point
where we matched it up with serialized streams, causing the stream
to be dropped on reload.
This bumps the kernel requirement to >= 4.17 (tested with 4.17.0-rc4),
and doesn't really help all that much (neither in performance nor
complexity), but there's no point in supporting a separate non-RX-capable
path for 4.13 through 4.16.
Update README; 10gig isn't even hard these days, the kernels are so
fast. Andre Tomt verified ~40 Gbit/sec with kTLS on a quad-core 45W Xeon D,
with no particular tuning.
We'd either never time out clients (if the expiry check happened
while they were receiving a request) or time them out too soon
(based on the connection time instead of the time of last request end).
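The fixed rule amounts to refreshing an idle timestamp whenever a request finishes (and at connect, for the first request), then expiring based on that timestamp rather than the connection time. A minimal sketch with hypothetical names:

```cpp
#include <cassert>
#include <chrono>

using std::chrono::steady_clock;

// last_activity is set when the client connects and refreshed every
// time a request finishes; expiry is measured from that point, not
// from the time of connect, so long-lived but active clients survive.
bool should_time_out(steady_clock::time_point last_activity,
                     steady_clock::time_point now,
                     std::chrono::seconds timeout)
{
	return now - last_activity > timeout;
}
```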
Cuts something like 600 ms away from the initial TLS handshake,
as we kept having unflushed data that would require waiting for
the 200 ms TCP_CORK timer.
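On Linux, a corked TCP socket holds back partial frames (the kernel flushes them after roughly 200 ms at the latest), so explicitly clearing the cork after the last byte of a handshake-critical write pushes the data out immediately. A hedged sketch (set_cork is a hypothetical helper):

```cpp
#include <cassert>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <unistd.h>

// Toggle TCP_CORK on a TCP socket. While corked, the kernel batches
// writes into full frames; clearing the cork flushes any pending
// partial frame right away instead of waiting for the cork timer.
bool set_cork(int fd, bool corked)
{
	int val = corked ? 1 : 0;
	return setsockopt(fd, IPPROTO_TCP, TCP_CORK, &val, sizeof(val)) == 0;
}
```

Typical use is to cork before assembling a multi-part response, then uncork (and possibly re-cork) once everything that must reach the peer promptly has been written.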
In particular, writing Content-Length instead of Content-length fixes a
problem where VLC's HTTP client would hang forever on our responses.
This makes HLS generally work in VLC, although it still starts playing
from the start instead of from the end.
This depends on the new Metacube PTS metadata block, which only
Nageru >= 1.7.2 serves at the moment. Lightly tested with iOS and hls.js;
does not work with VLC and mpv yet.
Do not serialize prebuffering_bytes in StreamProto.
There's no need to do this now that we can't have zombie streams anymore
(to be honest, the reason was rather thin already; it probably was
unintentional).
Automatically delete streams that are no longer in the configuration file.
Earlier, you had to mark this by setting src=delete, or the stream would linger
on in a sort of half-state; this was meant as a protection against configuration
mess-ups. It has turned out not to be that easy to mess up in practice, so remove
the protection to make cleanup simpler.
Support Metacube metadata blocks, specifically timestamps.
Allows you to measure latency from encoder to reflector; specifically,
this is useful to figure out if you have an HTTP queue that keeps on
growing indefinitely.
Fix an issue where access.log would have the wrong timestamp.
The refactoring to use monotonic timestamps did not take into account
that the timestamps are used for the access log, so it'd contain
(roughly) time since boot instead of the actual time of day. Fixed by
measuring the offset between the two. (If the clock is wrong at the
time of logging, the connection time will be wrong; the alternative
would be being wrong if the clock was off at the time of connect, and
it is hard to say which is better.) Reported by Joachim Tingvold.
While we are at it, change some of the timespec functions so that we
get slightly more precise timestamps in the logs (they could be about
a second off). Cosmetic only.
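The conversion can be sketched like this: sample both clocks at once, and apply the measured offset to the stored monotonic timestamp (to_wall_clock is a hypothetical helper, not the project's actual code):

```cpp
#include <cassert>
#include <chrono>

using std::chrono::steady_clock;
using std::chrono::system_clock;

// Convert a monotonic (steady_clock) timestamp to wall-clock time by
// measuring the current offset between the two clocks. As noted, if
// the system clock is wrong at conversion (logging) time, the result
// is wrong, too.
system_clock::time_point to_wall_clock(steady_clock::time_point t)
{
	steady_clock::time_point mono_now = steady_clock::now();
	system_clock::time_point wall_now = system_clock::now();
	return wall_now - (mono_now - t);
}
```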
Add a simple HTTP endpoint that returns a very short string.
The intention is for clients to be able to probe the endpoint
to figure out which server is the fastest. To this end,
it sends CORS headers, so that XHR clients are allowed to differentiate
servers that are down from servers that respond properly.
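Such a response could look roughly like this (the body and exact header set are made up; the essential part is the CORS header):

```
HTTP/1.1 200 OK
Content-Type: text/plain
Access-Control-Allow-Origin: *
Content-Length: 4

pong
```

With Access-Control-Allow-Origin: *, an XHR or fetch() from any web origin is allowed to read the response, so a server that is down (an opaque network error in the browser) is distinguishable from one that answers.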