We have a problem when trying to delete an EffectChain or ResourcePool;
we might have created FBOs or VAOs in the wrong context. Work around it
for now (unbreaking Kdenlive) by making VAOs non-persistent again,
and simply never deleting FBOs (leaking them).
A proper solution here will be hard, unfortunately, and will need some thought.
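
A rough sketch of the non-persistent VAO pattern this falls back to (not the
actual code; the GL loader header and function name are illustrative):

    #include <epoxy/gl.h>

    // Generate a fresh VAO for this draw and delete it afterwards, so it
    // never outlives the context it was created in.
    void draw_with_temporary_vao()
    {
            GLuint vao;
            glGenVertexArrays(1, &vao);
            glBindVertexArray(vao);

            // ... set up vertex attributes and issue the draw call here ...

            glBindVertexArray(0);
            glDeleteVertexArrays(1, &vao);
    }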
Many of the rows in the support texture are exactly the same,
so don't store the duplicates; this gives a small performance boost.
In a sense, this is exactly the same property that GPUwave exploits
when drawing multiple quads at the lower level.
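
A hypothetical sketch of the deduplication (names and container choice are
mine, not the actual code): identical rows are stored once, and each logical
row just remembers which stored row to sample.

    #include <map>
    #include <vector>

    // Returns, for each input row, the index of its unique copy in *unique_rows.
    std::vector<int> deduplicate_rows(const std::vector<std::vector<float> > &rows,
                                      std::vector<std::vector<float> > *unique_rows)
    {
            std::map<std::vector<float>, int> seen;
            std::vector<int> row_index;
            for (size_t i = 0; i < rows.size(); ++i) {
                    std::map<std::vector<float>, int>::iterator it = seen.find(rows[i]);
                    if (it == seen.end()) {
                            it = seen.insert(std::make_pair(rows[i], (int)unique_rows->size())).first;
                            unique_rows->push_back(rows[i]);
                    }
                    row_index.push_back(it->second);
            }
            return row_index;
    }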
Stop the FFTPassEffect Repeat test after FFT size 128.
The reason is that the 256 test uses texture sizes of 256*31=7936,
and above ~3900, some cards (at least both my Intel and NVidia card)
start having accuracy issues on some sizes. The test happens not to
die on this for semi-obscure reasons, but that's mostly by accident,
and in any case, requiring 8k textures for a unit test is probably
a bit excessive.
Redo FBO association yet again, this time per-texture.
According to http://adrienb.fr/blog/wp-content/uploads/2013/04/PortingSourceToLinux.pdf,
you want an FBO per texture, not just per format. And indeed, I can measure a very slight
performance improvement on both NVidia and ATI for this.
Seemingly this _also_ helps on NVidia; the demo app is down 0.9 ms/frame or so.
This rapidly started approaching complexity worthy of the ResourcePool,
so I moved the functionality in there even though it's not context-shareable.
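
A hypothetical sketch of the per-texture association (names are mine; the
real logic lives in the ResourcePool and is more involved):

    #include <epoxy/gl.h>
    #include <map>

    class FBOCache {
    public:
            // Returns an FBO with the given texture attached, creating it on first use.
            GLuint get_fbo_for_texture(GLuint texture_num)
            {
                    std::map<GLuint, GLuint>::iterator it = fbo_map.find(texture_num);
                    if (it != fbo_map.end()) {
                            return it->second;
                    }
                    GLuint fbo;
                    glGenFramebuffers(1, &fbo);
                    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
                    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                           GL_TEXTURE_2D, texture_num, 0);
                    glBindFramebuffer(GL_FRAMEBUFFER, 0);
                    fbo_map.insert(std::make_pair(texture_num, fbo));
                    return fbo;
            }

    private:
            std::map<GLuint, GLuint> fbo_map;  // texture -> FBO, valid for one context only
    };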
Reduce the amount of arithmetic in the BlurEffect shader a bit.
We did additions and subtractions with zero, which is sort of a waste
on scalar architectures. Helps ever so slightly on the demo app on my NVidia
card (3–4%).
Seemingly creating and deleting them is crazy expensive on NVidia
(~3 ms for a create/delete pair), so 6dea8d2 caused a performance
regression at high frame rates. Now we instead keep one around per
context (they cannot be shared), which brings us basically back
to where we were performance-wise.
Make Phase take other Phases as inputs, not Nodes.
This was a refactoring I wanted to do for a while, but actually finding
the right structure was a bit tricky. In the process, the entire phase
generation logic was rewritten, but the separation between compilation
and Phase construction is much cleaner now, and the logic in general
is easier to follow with more use of explicit recursion.
I'm still not 100% happy about what might be overuse of output_node;
we still need to link Phase and Node (the link just goes the other way
now), but I'm not sure we need to use it in all the cases we currently do.
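
Roughly, the shape this leaves Phase in (a sketch, not the actual definition;
other fields omitted):

    #include <vector>

    struct Node;

    struct Phase {
            std::vector<Node *> effects;   // the effects compiled into this phase
            std::vector<Phase *> inputs;   // the phases whose output we sample from
            Node *output_node;             // back-link: the node whose output this phase produces
    };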
A lot of the later commits have been leading up to this, and I finally
got to the point where all the unit tests check out, everything seems
to work (modulo maybe some overflow issues) and we have a model that
matches what people actually expect from convolutions.
Note that this adds a dependency on FFTW3; we could probably have added
our own routines for such small needs, but like with Eigen, calling out to a
library is fine as long as it's of good quality (which FFTW certainly is) and
is widely available.
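
For reference, the kind of small transform FFTW3 gets pulled in for looks
roughly like this (a sketch, not the actual call site; the function name and
plan type are illustrative):

    #include <fftw3.h>

    // Forward real-to-complex FFT of a small 1D kernel.
    void fft_kernel(const double *kernel, int N)
    {
            double *in = fftw_alloc_real(N);
            fftw_complex *out = fftw_alloc_complex(N / 2 + 1);
            for (int i = 0; i < N; ++i) {
                    in[i] = kernel[i];
            }

            fftw_plan plan = fftw_plan_dft_r2c_1d(N, in, out, FFTW_ESTIMATE);
            fftw_execute(plan);

            // ... use out[0 .. N/2] here ...

            fftw_destroy_plan(plan);
            fftw_free(in);
            fftw_free(out);
    }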
Revert "Support pad/crop from bottom, not just from the top."
This turned out not to be so useful after all, as we'd like a more
consistent top-left coordinate system, and changes to do that will
obsolete this patch.
Fix a bug where repeated vertical FFTs would reverse the output.
Unfortunately, the tests didn't catch this, as the Repeat test used
an even number of passes (being of size 64), which reversed things
back into place. It now tries a wider range of sizes to make sure
everything is okay.
This tests a few edge cases that are not adequately covered by the
random fp32 tests; in particular, the round-to-even logic had
no test coverage, which is bad.
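
The halfway cases in question look something like this (a sketch; the
conversion function names are stand-ins for the routines under test):

    #include <assert.h>
    #include <math.h>
    #include <stdint.h>

    uint16_t fp32_to_fp16(float x);   // stand-in
    float fp16_to_fp32(uint16_t x);   // stand-in

    // fp16 has a 10-bit mantissa, so the spacing just above 1.0 is 2^-10.
    void test_round_to_even()
    {
            // Exactly halfway between 1.0 and 1.0 + 2^-10; ties go to the
            // even mantissa, i.e. plain 1.0.
            assert(fp16_to_fp32(fp32_to_fp16(1.0f + exp2f(-11.0f))) == 1.0f);

            // Exactly halfway between 1.0 + 2^-10 and 1.0 + 2^-9;
            // round-to-even picks the latter.
            assert(fp16_to_fp32(fp32_to_fp16(1.0f + 3.0f * exp2f(-11.0f)))
                   == 1.0f + exp2f(-9.0f));
    }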
Formalize the notion of messing with sampler state.
This kills a lot of the assumptions that have been going around,
and should allow us to deal much better with the situation when
we have two or more inputs to an effect (where you basically can't
predict the sampler number used reliably); there's still an edge
case that's documented with a TODO, but this is generally much better.
This allows us to ignore the texture bounce flag when reading from a
FlatInput, and also better handles the case where a YCbCrInput is read
from multiple times (it's now bounced, which should be better for speed,
I think).
The main motivation, however, is to be able to control sampler state
in a somewhat less hackish way in the future.
This not only fixes issues with poor downconversion on ATI, but also
allows us to normalize while being aware of fp16 roundoff issues.
Seems to about cut the error in half in the HeavyResampleGetsSumRight
test, which as far as I can see would take us up to 10-bit accuracy.
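
One way to read "normalize while being aware of fp16 roundoff" in code
(a sketch only; the conversion names are stand-ins):

    #include <stdint.h>

    uint16_t fp32_to_fp16(float x);   // stand-in
    float fp16_to_fp32(uint16_t x);   // stand-in

    // Sum the weights as they will actually be stored (after fp16
    // quantization), then rescale by that sum so the stored weights
    // sum to 1 despite the roundoff.
    void normalize_fp16_aware(float *weights, unsigned num)
    {
            double sum = 0.0;
            for (unsigned i = 0; i < num; ++i) {
                    sum += fp16_to_fp32(fp32_to_fp16(weights[i]));
            }
            for (unsigned i = 0; i < num; ++i) {
                    weights[i] = float(weights[i] / sum);
            }
    }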
Use the GL_RED texture format instead of GL_LUMINANCE.
Seemingly GL_LUMINANCE is also deprecated; this actually decreases
support for GLES2 somewhat, but we need GLES3 anyway, so the net
loss shouldn't be too bad.
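
The upload side then looks roughly like this (a sketch; the loader header is
an assumption, and if the old broadcast-to-RGB behavior of GL_LUMINANCE is
needed, it has to come from the shader or a texture swizzle instead):

    #include <epoxy/gl.h>

    // Single-channel texture upload with GL_R8/GL_RED instead of the
    // deprecated GL_LUMINANCE.
    void upload_gray_texture(int width, int height, const unsigned char *pixels)
    {
            GLuint tex;
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height, 0,
                         GL_RED, GL_UNSIGNED_BYTE, pixels);
    }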
This is a pretty hard API break, but it's probably the last big API
break before 1.0, and some of the names (e.g. Effect, Input, ResourcePool)
are really so generic that they should not be allowed to pollute the global
namespace.
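
Typical caller-side fallout from the break looks something like this (a
sketch; the header paths are illustrative):

    #include <movit/effect_chain.h>
    #include <movit/resource_pool.h>

    // The generic names now have to be qualified or imported explicitly.
    using movit::EffectChain;
    using movit::ResourcePool;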
First, make sure we test one individual pass, and that we test it in
fp32. Second, set a limit that's actually grounded in something real,
not just a pretty power of 10.
Normalize the resample weight after bilinear combining.
We introduce a small bit of error in the combining (due to having to
compensate for lack of subpixel sampling precision), so normalize
after it rather than before it. Also, do a second normalization pass,
which seemingly helps sometimes (probably due to inaccuracies in the
float sum).
This seems to kill about half the precision loss on Intel, at least.
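
The double normalization described above, sketched out (not the actual code):

    // Divide by the sum once, then sum the already-normalized weights and
    // divide again, to soak up error from the float summation itself.
    void normalize_twice(float *weights, unsigned num)
    {
            for (int pass = 0; pass < 2; ++pass) {
                    float sum = 0.0f;
                    for (unsigned i = 0; i < num; ++i) {
                            sum += weights[i];
                    }
                    for (unsigned i = 0; i < num; ++i) {
                            weights[i] /= sum;
                    }
            }
    }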
Rescale resampling weights so that the sum becomes one.
For some reason, I had forgotten this, and it showed up because Qt
has buggy handling of pixels with alpha != 0xff. Add unit test
so it doesn't happen again.
I'm a bit concerned that rounding might cause problems so that we
should perhaps renormalize after the bilinear conversion, but we
can deal with that later if it should show up.