The motion search is one of the two major components of DIS. It works more or less
like you'd expect; there's a bunch of overlapping patches (8x8 or 12x12 pixels) in
a grid, and for each patch, there's a search to try to find the most similar patch
in the other image.
Unlike in a typical video codec, the DIS patch search is based on gradient descent;
conceptually, you start with an initial guess (the value from the previous level,
or the zero flow for the very first level), subtract the reference (“template”)
patch from the candidate, look at the gradient to see in what direction there is
a lower difference, and then inch a bit toward that direction. (There is seemingly
nothing like Adam, momentum or similar, but the searched value is only in two
dimensions, so perhaps it doesn't matter as much then.)
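As a toy illustration of that per-patch search, here is a sketch in Python/NumPy rather than the actual GLSL; the synthetic images, the fixed step size and all names are mine, and plain fixed-step descent stands in for the real update rule:

```python
import numpy as np

def bilinear(img, ys, xs):
    """Sample img at fractional (row, col) coordinates with bilinear filtering."""
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    fy, fx = ys - y0, xs - x0
    def at(y, x):
        return img[np.clip(y, 0, img.shape[0] - 1), np.clip(x, 0, img.shape[1] - 1)]
    return ((1 - fy) * (1 - fx) * at(y0, x0) + (1 - fy) * fx * at(y0, x0 + 1) +
            fy * (1 - fx) * at(y0 + 1, x0) + fy * fx * at(y0 + 1, x0 + 1))

def search(ref, img, px, py, size=8, steps=400, lr=0.5):
    """Gradient-descent patch search: find the flow u such that img sampled
    at (patch + u) best matches the patch of ref at (px, py)."""
    ys, xs = np.mgrid[py:py + size, px:px + size].astype(float)
    template = ref[py:py + size, px:px + size]
    u = np.zeros(2)  # flow (dx, dy); the initial guess is the zero flow
    for _ in range(steps):
        warped = bilinear(img, ys + u[1], xs + u[0])
        # Gradient of the *candidate* patch, by central differences:
        gx = (bilinear(img, ys + u[1], xs + u[0] + 0.5) -
              bilinear(img, ys + u[1], xs + u[0] - 0.5))
        gy = (bilinear(img, ys + u[1] + 0.5, xs + u[0]) -
              bilinear(img, ys + u[1] - 0.5, xs + u[0]))
        diff = warped - template
        # Inch downhill on the sum-of-squared-differences cost:
        u -= lr * np.array([np.sum(diff * gx), np.sum(diff * gy)])
    return u

# Smooth synthetic image pair, displaced by a known flow of (1.5, -1.0):
Y, X = np.mgrid[0:64, 0:64].astype(float)
ref = np.sin(X / 9.0) + np.cos(Y / 7.0)
img = np.sin((X - 1.5) / 9.0) + np.cos((Y + 1.0) / 7.0)
u = search(ref, img, px=20, py=20)  # should converge to roughly (1.5, -1.0)
```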
DIS does a tweak to this concept. Since the procedure as outlined above requires
computing the gradient of the candidate patch, it uses the reference patch as the
candidate instead (thus the “inverse” name), and thus uses _its_ gradient to understand
in which direction to move. (This is a bit dodgy, but not _that_ dodgy; after
all, the two patches are supposed to be quite similar, so their surroundings and
thus also their gradients should be quite similar.) It's not entirely clear whether
this is still a win on the GPU, where calculations are much cheaper, especially
given the way we parallelize the search, but we've kept it around for now.
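A sketch of a single update step of the inverse variant (again Python/NumPy with invented names, not the shader itself): since only the template's gradients appear, the 2x2 pseudo-Hessian can be formed once per patch and reused across iterations.

```python
import numpy as np

def inverse_search_step(template, gx, gy, warped):
    """One inverse-search (Gauss-Newton) update step. gx/gy are the
    *template's* gradients, so the 2x2 pseudo-Hessian H depends only on
    the template and can be computed once per patch, not per iteration."""
    H = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    diff = warped - template
    b = np.array([np.sum(gx * diff), np.sum(gy * diff)])
    return -np.linalg.solve(H, b)  # correction to add to the current flow

# Sanity check: fake a candidate patch that is (to first order) the template
# displaced by e = (0.3, -0.2); the Gauss-Newton step should then be -e.
Y, X = np.mgrid[0:12, 0:12].astype(float)
template = np.sin(X / 5.0) * np.cos(Y / 3.0)
gy, gx = np.gradient(template)
warped = template + 0.3 * gx - 0.2 * gy
du = inverse_search_step(template, gx, gy, warped)  # ≈ (-0.3, 0.2)
```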
The inverse search is explained and derived in the supplementary material of the
paper, section A. Do note that there's a typo; the text under equation 9 claims
that the matrix H is n x n (where presumably n is the patch size), while in reality,
it is 2x2.
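The correction is easy to convince yourself of numerically: the gradient at each pixel is a 2-vector, so summing the outer products over the patch gives a 2x2 matrix no matter what n is. A quick NumPy check (random stand-in gradients):

```python
import numpy as np

n = 12                           # patch is n x n pixels
rng = np.random.default_rng(0)
S = rng.normal(size=(n * n, 2))  # one (x, y) gradient per pixel, as rows of S
H = S.T @ S                      # = sum over pixels of the 2x2 outer products
print(H.shape)                   # (2, 2), independent of the patch size n
```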
Our GPU parallelization is fairly dumb right now; we do one patch per fragment
(ie., parallelize only over patches, not within each patch), which may not
be optimal. In particular, in the initial level, we only have 40 patches,
which is on the low side for a GPU, and the memory access patterns may not
be ideal, either.
const uint patch_size = 12;
const uint num_iterations = 16;

uniform sampler2D flow_tex, grad0_tex, image0_tex, image1_tex;
uniform vec2 image_size, inv_image_size, inv_prev_level_size;
// Lock the patch center to an integer, so that we never get
// any bilinear artifacts for the gradient. (NOTE: This assumes an
// even patch size.) Then calculate the bottom-left texel of the patch.
vec2 base = (round(patch_center * image_size) - (0.5f * patch_size - 0.5f))
        * inv_image_size;
// First, precompute the pseudo-Hessian for the template patch.
// This is the part where we really save by the inverse search
// (ie., we can compute it up-front instead of anew for each
// iteration):
//
//   H = sum(S^T S)
//
// where S is the gradient at each point in the patch. Note that
// this is an outer product, so we get a (symmetric) 2x2 matrix,
// not a scalar.
mat2 H = mat2(0.0f);
vec2 grad_sum = vec2(0.0f);  // Used for patch normalization.
float template_sum = 0.0f;
for (uint y = 0; y < patch_size; ++y) {
        for (uint x = 0; x < patch_size; ++x) {
                vec2 tc = base + uvec2(x, y) * inv_image_size;
                vec2 grad = texture(grad0_tex, tc).xy;
                H[0][0] += grad.x * grad.x;
                H[1][1] += grad.y * grad.y;
                H[0][1] += grad.x * grad.y;
                grad_sum += grad;
                template_sum += texture(image0_tex, tc).x;
        }
}
H[1][0] = H[0][1];
// Make sure we don't get a singular matrix even if e.g. the picture is
// all black. (The paper doesn't mention this, but the reference code
// does it, and it seems like a reasonable hack to avoid NaNs. With such
// an H, we'll go out-of-bounds pretty soon, though.)
if (determinant(H) < 1e-6) {
        H[0][0] += 1e-6;
        H[1][1] += 1e-6;
}

mat2 H_inv = inverse(H);
// Fetch the initial guess for the flow, and convert from the previous size to this one.
vec2 initial_u = texture(flow_tex, flow_tc).xy * (image_size * inv_prev_level_size);
vec2 u = initial_u;

float mean_diff, first_mean_diff;

for (uint i = 0; i < num_iterations; ++i) {
        vec2 du = vec2(0.0, 0.0);
        float warped_sum = 0.0f;
        vec2 u_norm = u * inv_image_size;  // In [0..1] coordinates instead of pixels.
        for (uint y = 0; y < patch_size; ++y) {
                for (uint x = 0; x < patch_size; ++x) {
                        vec2 tc = base + uvec2(x, y) * inv_image_size;
                        vec2 grad = texture(grad0_tex, tc).xy;
                        float t = texture(image0_tex, tc).x;
                        float warped = texture(image1_tex, tc + u_norm).x;
                        du += grad * (warped - t);
                        warped_sum += warped;
                }
        }
        // Subtract the mean for patch normalization. We've done our
        // sums without subtracting the means (because we didn't know them
        // in advance), but the correction is a simple linear one, since
        //
        //   sum(S^T * ((x + µ1) - (y + µ2))) = sum(S^T * (x - y)) + (µ1 - µ2) sum(S^T)
        //
        // which gives trivially
        //
        //   sum(S^T * (x - y)) = [what we calculated] - (µ1 - µ2) sum(S^T)
        //
        // so we can just subtract away the mean difference here.
        mean_diff = (warped_sum - template_sum) * (1.0 / (patch_size * patch_size));
        du -= grad_sum * mean_diff;
        if (i == 0) {
                first_mean_diff = mean_diff;
        }
        // Do the actual update.
        u -= H_inv * du;
        // Reject if we moved too far. Note that the paper says “too far” is the
        // patch size, but the DIS code uses half of a patch size. The latter seems
        // to give much better overall results.
        //
        // Also reject if the patch goes out-of-bounds (the paper does not mention
        // this, but the code does, and it seems to be critical to avoid really
        // bad behavior at the edges).
        vec2 patch_center = (base * image_size - 0.5f) + patch_size * 0.5f + u;
        if (length(u - initial_u) > (patch_size * 0.5f) ||
            patch_center.x < -(patch_size * 0.5f) ||
            image_size.x - patch_center.x < -(patch_size * 0.5f) ||
            patch_center.y < -(patch_size * 0.5f) ||
            image_size.y - patch_center.y < -(patch_size * 0.5f)) {
                u = initial_u;
                mean_diff = first_mean_diff;
                break;
        }
}
// NOTE: The mean patch diff will be for the second-to-last position,
// not the true position of u. But hopefully, it will be very close.
out_flow = vec3(u.x, u.y, mean_diff);
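The patch-normalization identity used in the loop above is easy to sanity-check numerically; a NumPy sketch, with arbitrary random data standing in for the patches and gradients:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 144                       # a 12x12 patch, flattened
S = rng.normal(size=(n, 2))   # per-pixel gradients (the rows of S)
x = rng.normal(size=n)        # mean-free template pixels
y = rng.normal(size=n)        # mean-free warped pixels
mu1, mu2 = 0.7, -0.3          # the means, unknown until the sums are done

# Left side: the sum we actually computed, on the un-normalized pixels.
lhs = S.T @ ((x + mu1) - (y + mu2))
# Right side: the normalized sum plus the mean-difference correction.
rhs = S.T @ (x - y) + (mu1 - mu2) * S.sum(axis=0)
```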