There are two primary reasons for this:
- GL_FLOAT FlatInput is primarily used for tests, and even more importantly,
  mostly accuracy tests. ATI's drivers appear to do the fp32 -> fp16
  conversion wrong (they truncate instead of rounding), which breaks some
  of these tests.
- In case someone _would_ use GL_FLOAT inputs, they'd probably be updated
  every frame anyway, so the fp32 -> fp16 conversion step (probably done on
  the CPU) would negate any performance benefit from fp16 sampling.
 	// Translate the input format to OpenGL's enums.
 	GLenum internal_format;
 	if (type == GL_FLOAT) {
-		internal_format = GL_RGBA16F_ARB;
+		internal_format = GL_RGBA32F_ARB;
 	} else if (output_linear_gamma) {
 		assert(type == GL_UNSIGNED_BYTE);
 		internal_format = GL_SRGB8_ALPHA8;