a gigabyte of memory, still only a couple of megabytes would be used at most,
as cache. If you ran out of memory, this could actually shrink to zero. It's
just that the kernel tries to be more effective (and it succeeds) by reducing
-the number of reads, and instead reading larger chunks at once.
-
-If sendfile() support is enabled, and the circumstances allow it (binary mode
-downloading), BetaFTPD will not mmap() at all, bringing the memory total down
-to a more realistic value.
+the number of reads, and instead reading larger chunks at once. Note that on
+32-bit architectures, if you serve several large files, you may run into the
+2GB address space limit (every mmap() appears to count towards it). There are
+several ways around this: use sendfile() (see below), do without mmap(), or
+enable high memory support in your kernel (at least Linux 2.3/2.4 can do this
+at compile time). For most of us, though, this will never be a problem; just
+be aware of it if you are doing a benchmark, for instance. Future versions of
+BetaFTPD might mmap() each file only once (instead of once per transfer), but
+that is probably more trouble than it's worth.
+
+If sendfile() support is enabled (note that only Linux sendfile() works at
+the moment; BSD sendfile() is detected but not used), and the circumstances
+allow it (binary-mode downloading), BetaFTPD will not mmap() at all, bringing
+the memory total down to a more realistic value.
Bragging: