One would think something as mundane as setting a few uniforms wouldn't
really mean much for performance, but seemingly this is not always so --
I had a real-world shader with no fewer than 55 uniforms.
Of course, not all of these were actually used, but we still have to go
through the name lookup etc. for every single one, every single frame.
Thus, we introduce a new way of dealing with uniforms: Register them before
finalization time, and then EffectChain can store their numbers once and
for all, instead of this repeated lookup. The system is also set up such
that we can go to uniform buffer objects (UBOs) in the very near future.
It's a bit unfortunate that uniform declarations are now removed from the
.frag files, where they sat very nicely, but the alternative would be to
try to parse GLSL, which I'm a bit wary of right now. All effects are
converted, leaving the set_uniform_* functions without any users, but
they are kept around for now in case external effects want them.
This seems to bring 1–2% speedup for my use case; hopefully UBOs will
bring a tiny bit more.
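For illustration, a minimal sketch of the idea (generic GL code, not the actual
Movit API): look each uniform up once when the program is finalized, and reuse
the stored location on every frame instead of doing a name lookup.

    struct RegisteredUniform {
        std::string name;
        GLint location = -1;  // filled in once, after the program is linked
        float value = 0.0f;
    };

    void finalize_uniforms(GLuint glsl_program, std::vector<RegisteredUniform> *uniforms)
    {
        for (RegisteredUniform &uniform : *uniforms) {
            uniform.location = glGetUniformLocation(glsl_program, uniform.name.c_str());
        }
    }

    void set_uniforms(const std::vector<RegisteredUniform> &uniforms)
    {
        for (const RegisteredUniform &uniform : uniforms) {
            if (uniform.location != -1) {
                glUniform1f(uniform.location, uniform.value);  // no string lookup per frame
            }
        }
    }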
Prepare for better understanding of 10- and 12-bit Y'CbCr.
Seemingly there is trickiness in how to interpret the integer values
that differs from what you'll typically see for R'G'B' (or perhaps GPUs
and TV standards simply differ on that point).
Add an explanatory comment, and add a data member to YCbCrFormat
to prepare for correct 10/12-bit level handling. We'll stay 8-bit
only for now, though, to avoid an API break for existing clients
for no good reason (there's no 10-bit input, really).
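To illustrate the arithmetic (the num_levels name is my assumption about the new
YCbCrFormat member): an n-bit Y'CbCr code value is the 8-bit level scaled by
2^(n-8), so limited-range luma spans 16..235 in 8-bit but 64..940 in 10-bit,
while a GPU normalizes an integer texture by dividing by 2^n - 1. The offset
and scale therefore come out slightly different from simply reusing the 8-bit
fractions:

    // Sketch only; assumes the texture's bit depth matches num_levels.
    void compute_limited_range_luma(int num_levels, float *offset, float *scale)
    {
        double shift = num_levels / 256.0;           // 2^(n-8): 1 for 8-bit, 4 for 10-bit
        *offset = 16.0 * shift / (num_levels - 1);   // 16/255 for 8-bit, 64/1023 for 10-bit
        *scale = 219.0 * shift / (num_levels - 1);   // 219/255 for 8-bit, 876/1023 for 10-bit
    }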
Minor optimization in ResampleEffect: Set less GL state.
In particular, if we can avoid it, use glTexSubImage2D instead of glTexImage2D.
This actually has a real effect, at least on Intel/Linux, where the driver seems
to stall on some mappings.
Of course, this only really helps for things like pans, not zooms.
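A hedged sketch of the pattern (not the exact code): re-upload into the existing
texture with glTexSubImage2D() when the dimensions are unchanged, and only
respecify it with glTexImage2D() when they are not.

    if (width == last_width && height == last_height) {
        // Same size as last frame; just overwrite the contents.
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RED, GL_FLOAT, data);
    } else {
        // Size changed; respecify (and possibly reallocate) the texture.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, width, height, 0, GL_RED, GL_FLOAT, data);
        last_width = width;
        last_height = height;
    }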
Note that this is an API break; PaddingEffect now behaves differently
from before when it comes to fractional offsets.
But I feel this is more useful; it allows PaddingEffect to be used
more efficiently for moving things smoothly around.
Also add a concept of border offset which moves the border around
without changing the pixels; useful if you want the subpixel placement
to be done by ResampleEffect (put the integral offset into top/left
and then move the border by the fractional amount it missed).
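A hypothetical usage sketch (the exact parameter names, in particular the
border_offset_* ones, are my assumptions): let ResampleEffect do the fractional
part of a pan, put the integral part into top/left, and nudge the border by the
fraction that top/left missed.

    padding->set_int("width", 1280);
    padding->set_int("height", 720);
    padding->set_float("top", 40.0f);                // integral placement of the pixels
    padding->set_float("left", 100.0f);
    padding->set_float("border_offset_top", 0.3f);   // moves the border only, not the pixels
    padding->set_float("border_offset_left", 0.7f);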
The one-to-one sampling assumption is broken whenever a non-integral top
or left parameter is specified. Instead, make an IntegralPaddingEffect that
enforces these parameters to be integers, and mark that one as one-to-one sampling.
Collapse passes more aggressively in the face of size changes.
The motivating chain for this change was a case where we had
a SinglePassResampleEffect (the second half of a ResampleEffect)
feeding into a PaddingEffect, feeding into an OverlayEffect.
Currently, since the first two change output size, we'd bounce
to a temporary texture twice (output size changes would always
cause bounces).
However, this is needlessly conservative. The reason for bouncing
when changing output size is really if you want to get rid of
data by downscaling and then later upsampling, e.g. for a blur.
(It could also be useful for cropping, but we don't really use
that right now; PaddingEffect, which does crop, explicitly checks
the borders anyway to set the border color manually.) But in this case,
we are not downscaling at all, so we could just drop the bounce,
saving tons of texture bandwidth.
Thus, we add yet more parameters that effects can specify; first,
that an effect uses _one-to-one_ sampling; that is, that it
will only use its input as-is without sampling
between texels or outside the border (so the different
interpolation and border behavior will be irrelevant).
(Actually, almost all of our effects fall into this category.)
Second, a flag saying that even if an effect changes size,
it doesn't use virtual sizes (otherwise even a one-to-one effect
would de facto be sampling between texels). If these flags
are set on the input and the output respectively, we can avoid
the bounce, at least unless there's an effect that's _not_
one-to-one further up the chain.
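A sketch of what the two flags could look like on an Effect subclass (the method
names are assumptions, not a definitive API):

    class MyPixelwiseEffect : public Effect {
    public:
        // Samples its input exactly once per output pixel, at the texel center,
        // so the input's interpolation and border behavior are irrelevant.
        bool one_to_one_sampling() const override { return true; }
    };

    class MyPaddingLikeEffect : public Effect {
    public:
        // Changes the output size, but the new size is real, not virtual,
        // so a one-to-one effect downstream still hits texel centers exactly.
        bool changes_output_size() const override { return true; }
        bool sets_virtual_output_size() const override { return false; }
    };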
For my motivating case, this folded eight phases into four,
changing ~16.0 ms into ~10.6 ms rendering time. Seemingly
memory bandwidth is a really precious resource on my laptop's
GPU.
This is mostly theoretical; I've never been able to measure any
sort of real change from this. But according to popular cargo-culting,
it might have an effect since there are fewer edge pixels to shade.
Propagate size correctly across effects that change output size.
When propagating size information between effects in a phase,
we'd forget to check if the effect wanted to change size
and use that information instead of our own heuristics.
Fix that.
This is currently a no-op, since right now we always break a phase
when an effect changes output size, but there are very real situations
where we'd be fine with not doing so, so this patch paves the way
for that.
This is useful for debugging slow chains; it can give information
about which phase takes the most time. Right now there seems to be
~5 ms in one of my test chains that disappear into nothing
(i.e. show up in the fps counter with vsync off, but not in any
phase), but hopefully we can eventually solve that discrepancy.
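A minimal sketch of per-phase timing with OpenGL timer queries (assuming
GL_ARB_timer_query / OpenGL 3.3; not necessarily how EffectChain does it):

    GLuint query;
    glGenQueries(1, &query);
    glBeginQuery(GL_TIME_ELAPSED, query);
    // ... render one phase ...
    glEndQuery(GL_TIME_ELAPSED);

    GLuint64 elapsed_ns;
    glGetQueryObjectui64v(query, GL_QUERY_RESULT, &elapsed_ns);  // blocks until the GPU is done
    glDeleteQueries(1, &query);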
Use std::scientific when outputting floats, so we do not get issues with 0.0 being output as 0 (which is an int, which cannot always be implicitly converted to float in GLSL).
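A tiny sketch of why this matters:

    std::ostringstream ss;
    ss << 0.0f;                       // "0", which GLSL parses as an int
    ss.str("");
    ss << std::scientific << 0.0f;    // "0.000000e+00", unambiguously a float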
Really only FlatInput can easily support mipmaps; for things like YCbCrInput
that combine multiple inputs, it's hard (probably not downright impossible,
but at least not immediately obvious without thinking about it a bit) and for
FFTInput it makes no sense.
Thus, we allow an input to say that it can't do this, and then bounce it
to a texture if needed. Hopefully this should happen quite rarely.
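A sketch of the shape of the change (the method name is my assumption):

    class FFTInput : public Input {
    public:
        // Tell EffectChain that this input cannot supply mipmaps; if something
        // downstream needs them, the chain will bounce to an intermediate
        // texture instead.
        bool can_supply_mipmaps() const override { return false; }
    };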
This is long overdue, of course; I knew this function was a quick hack,
but didn't realize it was a problem until Christophe Thommeret reported
an issue that looked a lot like this.
Trying to use sprintf with floats correctly in a portable manner is seemingly
impossible (MinGW doesn't support the per-thread locale stuff), so simply
do it a different way; stop sprintf-ing floats and use std::stringstream
instead. I dislike the iostream interface a lot, but it can do per-stream
locales, which is exactly what we want here.
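A minimal sketch of the approach, assuming a small helper along these lines:

    #include <locale>
    #include <sstream>
    #include <string>

    std::string format_float(float x)
    {
        std::ostringstream ss;
        ss.imbue(std::locale::classic());  // always the "C" locale, per-stream,
                                           // no matter what LC_NUMERIC says
        ss << x;
        return ss.str();
    }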
Dan Dennedy [Thu, 5 Mar 2015 07:41:39 +0000 (23:41 -0800)]
Fix build on OS X and MinGW.
OS X requires the xlocale.h header to define locale_t:
https://developer.apple.com/library/mac/documentation/Darwin/Reference/ManPages/man3/newlocale.3.html
MinGW does not include implementations for newlocale() and uselocale().
Instead, use the previous approach using setlocale().
setlocale() affects the whole process, not just the current thread
as I assumed; uselocale() (available since glibc 2.3, so basically
forever) is per-thread, and also conveniently seems to avoid the
issue of the returned pointer being destroyed (unless the driver
uses the return value of uselocale() as a base, which I really hope
it doesn't).
I'm slightly worried that since this overrides setlocale(), buggy drivers
might get confused when they try to do setlocale() and something else
overrides that precedence, but hopefully this shouldn't be the case.
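A hedged sketch of the per-thread pattern (POSIX newlocale()/uselocale(); as noted
above, MinGW lacks these and OS X needs xlocale.h):

    locale_t c_locale = newlocale(LC_NUMERIC_MASK, "C", (locale_t)0);
    locale_t saved = uselocale(c_locale);  // affects only the calling thread
    // ... compile shaders, format floats ...
    uselocale(saved);                      // restore this thread's previous locale
    freelocale(c_locale);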
Also add a unit test for locale handling while we're at it. It doesn't
test multi-threaded behavior, though, only the simple case.
For most users, this is mostly theoretical, as it requires compiling
with -march=native or similar. And these are definitely meant for
vectorizing, although it's still 2-3x as fast to use them as our own
software fallback.
These are supported starting from Haswell, and also by some AMD CPUs.
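A hedged sketch of using them (requires compiling with -mf16c, -march=native or
similar; not Movit's exact code):

    #include <immintrin.h>

    unsigned short float_to_fp16(float x)
    {
        return _cvtss_sh(x, 0);  // 0 = round to nearest even
    }

    float fp16_to_float(unsigned short h)
    {
        return _cvtsh_ss(h);
    }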
For the case where the resampling changed every frame (e.g. a zoom),
it just consumed too much CPU to be worth it, especially in memory
management; this is painful because it was an elegant solution to
a tricky problem, but it just has to go for now.
Also drop out to fp32 at the first sight of too-high error.
In ResampleEffect, optimize the bilinear weights on a global scale.
In addition to the individual weight optimization we do when combining samples,
this technique optimizes the weights as a whole, through some linear algebra.
This means it can take into account effects such as multiple bilinear samples
influencing the same coefficient (which normally should not happen, but might
nevertheless due to imprecisions in the stored texture coordinates), or
non-combined sample positions that can't hit the exact middle of the texel.
In practical tests, this is extremely effective; it often reduces the computed
sum of squared coefficient errors by as much as a factor of 1000, although I
haven't verified how often it actually saves us from having to do fp32 fallback
with the rather tight error bounds that are in place.
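Conceptually (a sketch of the idea, not the actual implementation): each bilinear
sample contributes to up to four texel coefficients with weights given by its
fractional position, so we can set up the linear system mapping sample weights to
effective per-texel coefficients and solve for the sample weights that minimize
the squared error against the ideal coefficients.

    #include <Eigen/Dense>

    // A: (num texels) x (num samples) matrix of bilinear contributions.
    // c: ideal per-texel coefficients (e.g. from the Lanczos kernel).
    Eigen::VectorXd optimize_weights(const Eigen::MatrixXd &A, const Eigen::VectorXd &c)
    {
        return A.colPivHouseholderQr().solve(c);  // least-squares solution of A * w ~= c
    }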
When combining samples, take fp16 rounding into account.
This makes us somewhat more conservative in combining samples;
when we are near the lower/right edges of the image, we are starting
to get close to 1.0, and fp16 just doesn't have enough precision
to give us the 6 or 8 bits of subpixel precision we want (it is
hardly enough to address individual pixels!). In particular, this
can affect zooming with ResampleEffect, as reported by Christophe
Thommeret.
This does not fix all cases (especially not non-power-of-two cases);
for that, we will probably need to be able to fall back to fp32
when we detect fp16 doesn't work well.
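A hedged sketch of such a check (using the fp16 conversion helpers sketched
earlier): only keep a combined sample position if it survives an fp16 round-trip
to within the subpixel precision we are aiming for.

    #include <cmath>

    bool representable_in_fp16(float coord, float max_error)
    {
        float rounded = fp16_to_float(float_to_fp16(coord));
        return std::abs(rounded - coord) < max_error;  // e.g. max_error = texel_size / 64.0
    }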
We read about twice as many as we should have; the others were
probably just set to 0.0, which has no effect but still burns
arithmetic, unless your driver happens to optimize very aggressively
for this (which I don't think anyone does anymore).
Properly restore the LC_NUMERIC locale after finalizing.
There were two issues here:
1. setlocale(LC_NUMERIC, "C") always returns "C", not the previous
locale.
2. The return value of setlocale() may point into static storage,
which may be corrupted when we call into libGL, if e.g.
the shader compiler calls setlocale() on its own.
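A small sketch of the corrected pattern, addressing both points:

    #include <locale.h>
    #include <stdlib.h>
    #include <string.h>

    char *saved = strdup(setlocale(LC_NUMERIC, NULL));  // query first, and copy the
                                                        // string out of static storage
    setlocale(LC_NUMERIC, "C");
    // ... compile shaders; the driver may call setlocale() itself ...
    setlocale(LC_NUMERIC, saved);
    free(saved);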
1. If you're missing some functionality, Movit will now tell you
on stderr what you're missing. (We might suppress this later
if it turns out that people want to init_movit() but are actually
fine with it failing.)
2. Use a table instead of repeated if-then logic, since this started
to become a bit messy after we added OpenGL-version-equivalence
checks.
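A hedged sketch of the table-driven form (the names and entries are purely
illustrative, not the actual table):

    #include <cstdio>
    #include <set>
    #include <string>

    struct Requirement {
        const char *extension;     // extension that provides the feature
        int core_in_gl_version;    // GL version (x10) where it became core, e.g. 30 = 3.0
        const char *description;   // printed on stderr if missing
    };

    static const Requirement requirements[] = {
        { "GL_ARB_texture_rg",    30, "two-channel (RG) textures" },
        { "GL_ARB_texture_float", 30, "floating-point textures" },
    };

    bool check_requirements(int gl_version, const std::set<std::string> &extensions)
    {
        bool ok = true;
        for (const Requirement &req : requirements) {
            if (gl_version < req.core_in_gl_version && !extensions.count(req.extension)) {
                fprintf(stderr, "Missing OpenGL feature: %s (%s)\n", req.description, req.extension);
                ok = false;
            }
        }
        return ok;
    }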
Same rationale as with the offset; we need resampling for proper zoom.
The look under heavy zoom isn't _quite_ what I had hoped for (although it's OK),
and there's a hint of shimmering in the zoom center if there's high-contrast
material there. For now, I'll write off the latter as Lanczos ringing;
I'll need to see what it does to video eventually (only tested with stills).
This enables smooth (subpixel) panning that people frequently want for stills
and titles, but that you couldn't do in a subpixel fashion before (PaddingEffect
could only do integer pixel offsets).
The placement (ResampleEffect) might seem a bit off at first, but subpixel
offset needs resampling, and ResampleEffect already has all the logic in place
for that. We could have used the GPU's built-in bilinear resampling, of course,
but it doesn't look all that good for high-contrast situations (although working
in linear light should help some).
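A hypothetical usage sketch; the parameter names (zoom_x, zoom_y, top, left) are
my assumptions about ResampleEffect's interface:

    resample->set_int("width", 1280);
    resample->set_int("height", 720);
    resample->set_float("zoom_x", 2.0f);   // 2x zoom
    resample->set_float("zoom_y", 2.0f);
    resample->set_float("top", 10.25f);    // subpixel source offset for smooth panning
    resample->set_float("left", 3.5f);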
This is mainly a convenience so that you can change e.g. a left-to-right
wipe into a right-to-left wipe without having to add a separate inverting
effect to the luma. Suggested by Dan Dennedy.
Make the ResourcePool hold FBOs as a per-context resource.
This is an attempt to get out of the FBO shareability mess (unfortunately
we can't just stop having persistent FBOs, due to NVidia performance).
We now require the client to tell us whenever a context is going away,
and we try to be more careful about not deleting them in the wrong context.
Also, we assumed FBO names were globally unique, which isn't necessarily
true, so re-key them.
For good measure, we were deleting FBOs off the freelist from the front,
not the back as we should have -- fixed.
We have a problem when trying to delete an EffectChain or ResourcePool;
we might have created FBOs or VAOs in the wrong context. Work around it
for now (unbreaking Kdenlive) by making VAOs non-persistent again,
and simply never deleting FBOs (leaking them).
A proper solution here will be hard, unfortunately, and will need some thought.
Many of the rows in the support texture are exactly the same,
so don't store the duplicates; gives a small performance boost.
In a sense, this is exactly the same property that GPUwave uses
with drawing multiple quads at the lower level.
Stop the FFTPassEffect Repeat test after FFT size 128.
The reason is that the 256 test uses texture sizes of 256*31=7936,
and above ~3900, some cards (at least both my Intel and NVidia card)
start having accuracy issues on some sizes. The test happens not to
die on this for semi-obscure reasons, but that's mostly by accident,
and in any case, requiring 8k textures for a unit test might be
a bit on the upper side.
Redo FBO association yet again, this time per-texture.
According to http://adrienb.fr/blog/wp-content/uploads/2013/04/PortingSourceToLinux.pdf,
you want an FBO per texture, not just per format. And indeed, I can measure a very slight
performance improvement on both NVidia and ATI for this.
Seemingly this _also_ costs on NVidia; the demo app is down 0.9 ms/frame or so.
This rapidly started approaching complexity worthy of the ResourcePool,
so I moved the functionality in there even though it's not context-shareable.