Instead of checking that connect times are monotonic, explicitly make them
so if they are not. This seems safest in case the monotonic clock goes
backwards by a small amount (e.g. when the process moves between CPUs). We
don't need it for the serialized case, since we explicitly sort those clients
by time; the assert can stay.
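A minimal sketch of the clamping meant here, with hypothetical names (the real
code may structure this differently):

    #include <time.h>

    // Return 'now', but never earlier than 'last', so that consecutive
    // connect times are guaranteed to be nondecreasing even if the
    // monotonic clock jitters slightly backwards.
    timespec monotonic_connect_time(const timespec &last, timespec now)
    {
        if (now.tv_sec < last.tv_sec ||
            (now.tv_sec == last.tv_sec && now.tv_nsec < last.tv_nsec)) {
            now = last;
        }
        return now;
    }
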
Time out clients still in READING_REQUEST after 60 seconds.
Seemingly there are some of these around, and I've seen them eat up
fds in a long-running server. There's some pain in sorting the clients
on deserialization, but apart from that, this ended up being relatively
pain-free and should be efficient enough.
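A rough sketch of how such a timeout sweep could look, with hypothetical types
and names, assuming clients are kept in a deque sorted by connect time (which
is what the sorting on deserialization above is for):

    #include <deque>
    #include <time.h>

    struct Client {
        enum State { READING_REQUEST, SENDING_HEADER, SENDING_DATA } state;
        timespec connect_time;
        int sock;
    };

    static const int REQUEST_READ_TIMEOUT_SEC = 60;

    void time_out_old_requests(std::deque<Client *> *by_connect_time,
                               const timespec &now)
    {
        while (!by_connect_time->empty()) {
            Client *client = by_connect_time->front();
            if (now.tv_sec - client->connect_time.tv_sec < REQUEST_READ_TIMEOUT_SEC) {
                break;  // Everything behind this client connected even later.
            }
            by_connect_time->pop_front();
            if (client->state == Client::READING_REQUEST) {
                // Would close the socket and free the client here.
            }
        }
    }
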
Change the connected time from time_t to timespec.
The primary gain as the code stands is that the logs become immune to
issues with the clock going backwards and the like, since we can now use
a monotonic clock.
However, the motivating change is that we will soon be implementing request
read timeouts. At that point, not only will the clock data be much more
important to get right, but it will also be nice to have more fine-grained
timestamps to be able to locate clients semi-uniquely in a sorted list.
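A minimal sketch of what the switch enables: taking the connect time from a
monotonic clock via clock_gettime() rather than from time():

    #include <time.h>

    // Monotonic timestamp; unaffected by wall-clock adjustments.
    timespec get_connect_time()
    {
        timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);
        return now;
    }
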
The motivation is jwPlayer, which for HTTP files expects to be able to skip
prebuffering and still download at full speed (as it assumes they are static
files, not streams); when it cannot, it shows an ugly icon on top of the
stream the whole time. So we add an option for forced prebuffering (three
seconds seems to be about right), which means we hold off sending until we
have built up a fairly large backlog.
Ideally, we would be able to actually send old data instead of just
waiting (which would mean the client doesn't need the extra wait
at the beginning), but that is complicated by having to remember old
keyframe positions, changed stream headers and so on.
This would only happen if an HTTP input that wasn't fully set up yet
was no longer in use after a reload. It would normally manifest itself as
a “close(): Bad file descriptor”, but could also end up closing an arbitrary
descriptor, causing various sorts of havoc.
Seemingly open() needs to take a pathname only, not a full filename;
passing a full filename made us _always_ fall back to the mkstemp() path,
which was of course not the intention.
This has been fully obsoleted by the fq qdisc, which is easier to set up,
more scalable and does not require root privileges. Removing it also removes
a significant chunk of code, which is good.
The original plan here was to let the client hang until we had
some headers to send (i.e., first send an empty header and then
0 bytes of data, and then move the client back from SENDING_DATA
to SENDING_HEADER as we got data).
However, as time went by, we started inserting stuff in the middle of the
headers ourselves, resulting in us sending pretty much a junk header.
Worse, this would be sent on to other relays, corrupting the version of
the Metacube stream.
Instead, simply return 503 Service Unavailable if the stream is still
starting up; it's pretty much as good, and there are fewer edge cases
to worry about.
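A sketch of the kind of early-error response meant here; the exact status
line and body in the real code may differ:

    const char *error_503_response =
        "HTTP/1.0 503 Service Unavailable\r\n"
        "Content-type: text/plain\r\n"
        "\r\n"
        "This stream is not available right now. Please try again later.\r\n";
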
Update the VLC Metacube patch to apply against current VLC master.
This version is identical to the one that was sent to the VLC mailing list
for upstream inclusion (but not accepted), so it also contains some other
cleanups.
SO_MAX_PACING_RATE is the newfangled socket option from Eric Dumazet,
used with the new fq packet scheduler in Linux. It allows you to set
a maximum rate for the socket (presumably a stricter upper bound than the
kernel's RTT-based estimate), delivering pacing without having
to resort to the relatively complex fwmark setup. It looks like it will enter
the Linux kernel in 3.13 at the earliest, quite possibly even later.
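A minimal sketch of how the option would be used, assuming a kernel with the
fq scheduler (the constant may be missing from older libc headers, hence the
fallback define; the value given is the one used on most architectures):

    #include <stdint.h>
    #include <stdio.h>
    #include <sys/socket.h>

    #ifndef SO_MAX_PACING_RATE
    #define SO_MAX_PACING_RATE 47
    #endif

    // Cap the socket at the given rate, in bytes per second.
    bool set_max_pacing_rate(int sock, uint32_t bytes_per_sec)
    {
        if (setsockopt(sock, SOL_SOCKET, SO_MAX_PACING_RATE,
                       &bytes_per_sec, sizeof(bytes_per_sec)) == -1) {
            perror("setsockopt(SO_MAX_PACING_RATE)");
            return false;
        }
        return true;
    }
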
In time, fwmark will be deprecated, but the implementation of TCP pacing in
Linux is still a bit shaky (especially with applications that do not always
fill the pipe, like streaming), so fwmark will stay the primary solution for now.
Philipp Kern [Sun, 8 Sep 2013 21:10:55 +0000 (23:10 +0200)]
Makefile: Implement sysconfdir and localstatedir.
It is a bit odd to use $(PREFIX) for some, but not all, directories. Make
this a usable middle ground akin to autoconf, which allows passing in
different paths for /var (--localstatedir) and /etc (--sysconfdir).
Seemingly holding queued_data_mutex over add_data_raw(), which does writev(),
can be slow on systems where /tmp is not on tmpfs; the mutex could then be
held for so long (up to a second has been observed) that the input thread
couldn't keep up.
To fix this, we move queued_data_mutex into a per-stream variable (not sure
if that's ideal, but it was the simplest way to avoid ugliness), and then hold it
for as short a time as possible in process_queued_data().
While we're at it, document that queued_data_last_starting_point has the same
locking rules as queued_data.
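A rough sketch of the locking pattern described above, written in modern C++
with hypothetical member names (the real code is structured differently):

    #include <mutex>
    #include <string>
    #include <vector>

    struct Stream {
        std::mutex queued_data_mutex;
        std::vector<std::string> queued_data;         // Protected by queued_data_mutex.
        size_t queued_data_last_starting_point = 0;   // Same locking rules as queued_data.

        // Does the actual writev(); can be slow if the backing file is not on tmpfs.
        void add_data_raw(const std::vector<std::string> &blocks) { (void)blocks; }

        void process_queued_data()
        {
            std::vector<std::string> queued;
            {
                // Hold the lock only long enough to steal the queued blocks.
                std::lock_guard<std::mutex> lock(queued_data_mutex);
                queued.swap(queued_data);
            }
            // Lock no longer held; the slow writev() path runs outside it.
            add_data_raw(queued);
        }
    };
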
Compared to the old version, this fixes many small deficiencies:
- The header is smaller; 16 instead of 24 bytes. With TS, Metacube actually
yields noticeable overhead (13% or so), so reducing it by 1/3 is good.
- The sync marker no longer overlaps with itself; having it both start and
end with the same characters sounds bad if we should ever drop bytes in the
middle. In fact, it no longer has any repeated characters at all.
- The header is checksummed, to avoid cases where a corrupted header
could cause us to pull in gigabytes of data. (The data is not.)
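For reference, a sketch of the new 16-byte header layout as described above;
field names and checksum details are approximate:

    #include <stdint.h>

    #define METACUBE2_SYNC "cube!map"  // 8 bytes, no repeated characters.

    struct metacube2_block_header {
        char sync[8];    // METACUBE2_SYNC.
        uint32_t size;   // Network byte order; size of the payload that follows.
        uint16_t flags;  // Network byte order.
        uint16_t csum;   // Network byte order; checksum of size and flags only.
    };
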
vlc-metacube.diff has been updated, although it includes another patch
(part of the WebM patch set) since that's what my VLC works from at the moment.