I'm looking to parallelize some complex math, and WebGL looks like the perfect way to do it. The problem is, you can only read 8-bit integers from textures. I would ideally like to work with larger values than that.
Everyone is absolutely correct about how to handle something like this in WebGL, but I wanted to share a trick for getting the values in and out.
Assuming you want to do some comparison on two values that fit in 16 bits:
// Generate a list of random 16-bit integers
let data16bit = new Uint16Array(1000000);
for (let i = 0; i < data16bit.length; i += 2) {
    data16bit[i] = Math.random() * (2 ** 16);
    data16bit[i + 1] = Math.random() * (2 ** 16);
}
// Read them one byte at a time, for writing to WebGL
let texture = new Uint8Array(data16bit.buffer);
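To see in plain JavaScript what the shader will receive, here is a quick sketch (my own, not part of the original trick) showing that both views address the same bytes:

```javascript
// One 16-bit value, viewed as two bytes through the shared ArrayBuffer.
// JavaScript typed arrays use the platform's byte order, which is
// little-endian on virtually all WebGL-capable hardware.
let value16 = new Uint16Array([0x1234]);
let asBytes = new Uint8Array(value16.buffer);
console.log(asBytes[0].toString(16)); // "34" (low byte, on little-endian)
console.log(asBytes[1].toString(16)); // "12" (high byte)
```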
Now when you get your values in your fragment shader, you can pick up the numbers for manipulation:
vec4 here = texture2D(u_image, v_texCoord);
// Read the "red" byte and the "green" byte together (as a single thing),
// and the "blue" byte and the "alpha" byte together as another single thing
vec2 a = here.rg;
vec2 b = here.ba;
// now compare the things
if (a == b) {
    here.a = 1.0;
}
else {
    here.a = 0.0;
}
// return the boolean value
gl_FragColor = here;
The point is just a reminder that you can treat the same block of JavaScript memory as different sizes: Uint16Array and Uint8Array (rather than trying to do bit shifting and breaking it up).
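The same trick works in the other direction when reading results back. A hypothetical sketch (the variable names and byte values are mine) of what follows a gl.readPixels call:

```javascript
// Pretend these four bytes just came out of gl.readPixels(...)
let pixels = new Uint8Array([0x34, 0x12, 0xcd, 0xab]);
// View the same buffer as 16-bit values -- no shifting required
let results = new Uint16Array(pixels.buffer);
console.log(results[0].toString(16)); // "1234" on little-endian hardware
console.log(results[1].toString(16)); // "abcd"
```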
In response to requests for more detail: that code is very close to a direct cut/paste from this code and explanation. The exact use of this can be found in the corresponding samples on GitLab (two parts of the same file).
In general, if you want to pack the significant digits of a floating-point number into bytes, you have to consecutively extract 8-bit packages of the significant digits and store each in a byte.
In order to pack a floating-point value into 4 * 8-bit buffers, the range of the source values must first be specified.
If you have defined a value range [minVal, maxVal], it has to be mapped to the range [0.0, 1.0]:
float mapVal = clamp((value-minVal)/(maxVal-minVal), 0.0, 1.0);
The function Encode packs a floating-point value in the range [0.0, 1.0] into a vec4:
vec4 Encode( in float value )
{
    value *= (256.0*256.0*256.0 - 1.0) / (256.0*256.0*256.0);
    vec4 encode = fract( value * vec4(1.0, 256.0, 256.0*256.0, 256.0*256.0*256.0) );
    return vec4( encode.xyz - encode.yzw / 256.0, encode.w ) + 1.0/512.0;
}
The function Decode extracts a floating-point value in the range [0.0, 1.0] from a vec4:
float Decode( in vec4 pack )
{
    float value = dot( pack, 1.0 / vec4(1.0, 256.0, 256.0*256.0, 256.0*256.0*256.0) );
    return value * (256.0*256.0*256.0) / (256.0*256.0*256.0 - 1.0);
}
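To check the round trip without a GPU, here is a JavaScript port of Encode/Decode (my own sketch; it ignores the extra 8-bit quantization a real RGBA8 render target would add):

```javascript
function fract(x) { return x - Math.floor(x); }

// Port of the GLSL Encode: pack a value in [0.0, 1.0] into four components
function encode(value) {
    value *= (256 * 256 * 256 - 1) / (256 * 256 * 256);
    const e = [1, 256, 256 * 256, 256 * 256 * 256].map(s => fract(value * s));
    return [
        e[0] - e[1] / 256 + 1 / 512,
        e[1] - e[2] / 256 + 1 / 512,
        e[2] - e[3] / 256 + 1 / 512,
        e[3] + 1 / 512,
    ];
}

// Port of the GLSL Decode
function decode(pack) {
    const value = pack[0] + pack[1] / 256 + pack[2] / (256 * 256)
                + pack[3] / (256 * 256 * 256);
    return value * (256 * 256 * 256) / (256 * 256 * 256 - 1);
}

const v = 0.3141592;
const roundTrip = decode(encode(v));
// The half-texel bias (1.0/512.0) is tuned for byte storage, so the
// pure-float round trip lands close to, not exactly on, the input.
console.log(Math.abs(roundTrip - v) < 0.01); // true
```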
The following functions pack and extract a floating-point value into and from the range [minVal, maxVal]:
vec4 EncodeRange( in float value, in float minVal, in float maxVal )
{
    value = clamp( (value-minVal) / (maxVal-minVal), 0.0, 1.0 );
    value *= (256.0*256.0*256.0 - 1.0) / (256.0*256.0*256.0);
    vec4 encode = fract( value * vec4(1.0, 256.0, 256.0*256.0, 256.0*256.0*256.0) );
    return vec4( encode.xyz - encode.yzw / 256.0, encode.w ) + 1.0/512.0;
}
float DecodeRange( in vec4 pack, in float minVal, in float maxVal )
{
    float value = dot( pack, 1.0 / vec4(1.0, 256.0, 256.0*256.0, 256.0*256.0*256.0) );
    value *= (256.0*256.0*256.0) / (256.0*256.0*256.0 - 1.0);
    return mix( minVal, maxVal, value );
}
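The only new ingredient over Encode/Decode is the range mapping itself, which is easy to sanity-check on the CPU (a sketch of mine, with clamp and mix written out in JavaScript):

```javascript
const clamp01 = x => Math.min(Math.max(x, 0), 1);
// Forward mapping used by EncodeRange: [minVal, maxVal] -> [0.0, 1.0]
const toUnit = (v, minVal, maxVal) => clamp01((v - minVal) / (maxVal - minVal));
// Inverse mapping used by DecodeRange (GLSL's mix)
const fromUnit = (t, minVal, maxVal) => minVal + t * (maxVal - minVal);

console.log(toUnit(2.5, -10, 10));                    // 0.625
console.log(fromUnit(toUnit(2.5, -10, 10), -10, 10)); // 2.5
```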
Another possibility is to encode the significant digits into 3 * 8 bits of the RGB values and the exponent into the 8 bits of the alpha channel:
vec4 EncodeExp( in float value )
{
    int exponent = int( log2( abs( value ) ) + 1.0 );
    value /= exp2( float( exponent ) );
    value = (value + 1.0) * (256.0*256.0*256.0 - 1.0) / (2.0*256.0*256.0*256.0);
    vec4 encode = fract( value * vec4(1.0, 256.0, 256.0*256.0, 256.0*256.0*256.0) );
    return vec4( encode.xyz - encode.yzw / 256.0 + 1.0/512.0, (float(exponent) + 127.5) / 256.0 );
}
float DecodeExp( in vec4 pack )
{
    int exponent = int( pack.w * 256.0 - 127.0 );
    float value = dot( pack.xyz, 1.0 / vec3(1.0, 256.0, 256.0*256.0) );
    value = value * (2.0*256.0*256.0*256.0) / (256.0*256.0*256.0 - 1.0) - 1.0;
    return value * exp2( float(exponent) );
}
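A JavaScript port of the exponent variant (my own sketch) makes the behaviour easy to probe on the CPU; note that log2(abs(value)) is undefined for value == 0.0, so zero would need special handling in practice:

```javascript
function fract(x) { return x - Math.floor(x); }

function encodeExp(value) {
    // GLSL's int() cast truncates toward zero, like Math.trunc
    const exponent = Math.trunc(Math.log2(Math.abs(value)) + 1.0);
    value /= 2 ** exponent;
    value = (value + 1.0) * (256 * 256 * 256 - 1) / (2 * 256 * 256 * 256);
    const e = [1, 256, 256 * 256, 256 * 256 * 256].map(s => fract(value * s));
    return [
        e[0] - e[1] / 256 + 1 / 512,
        e[1] - e[2] / 256 + 1 / 512,
        e[2] - e[3] / 256 + 1 / 512,
        (exponent + 127.5) / 256,
    ];
}

function decodeExp(pack) {
    const exponent = Math.trunc(pack[3] * 256 - 127.0);
    let value = pack[0] + pack[1] / 256 + pack[2] / (256 * 256);
    value = value * (2 * 256 * 256 * 256) / (256 * 256 * 256 - 1) - 1.0;
    return value * 2 ** exponent;
}

// Round trip lands close to the input (the half-texel bias in the
// encoder is aimed at byte storage, not at a pure-float round trip)
console.log(Math.abs(decodeExp(encodeExp(3.0)) - 3.0) < 0.1); // true
```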
Note, since a standard 32-bit IEEE 754 number has only 24 significant bits, it is completely sufficient to encode the significand in 3 bytes.
See also How do I convert between float and vec4, vec3, vec2?
You can bitshift by multiplying/dividing by powers of two.
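Since GLSL ES 1.0 has no integer bit operations, multiplication and division by powers of two stand in for shifts. The same identities, demonstrated in JavaScript:

```javascript
const x = 0xAB;
console.log(x * 256 === x << 8);            // true: multiply == left shift by 8
console.log(Math.floor(x / 16) === x >> 4); // true: divide + floor == right shift by 4
```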
As pointed out in the comments, the approach I originally posted worked but was incorrect. Here's one by Aras Pranckevičius; note that the source code in the post itself contains a typo and is HLSL. This is a GLSL port with the typo corrected:
const vec4 bitEnc = vec4(1., 255., 65025., 16581375.);
const vec4 bitDec = 1. / bitEnc;

vec4 EncodeFloatRGBA (float v) {
    vec4 enc = bitEnc * v;
    enc = fract(enc);
    enc -= enc.yzww * vec2(1./255., 0.).xxxy;
    return enc;
}

float DecodeFloatRGBA (vec4 v) {
    return dot(v, bitDec);
}
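A JavaScript port of this pair (my sketch) shows why it works: the terms telescope, so without the GPU's 8-bit quantization the round trip is exact up to floating-point rounding:

```javascript
const bitEnc = [1, 255, 65025, 16581375];
const fract = x => x - Math.floor(x);

function encodeFloatRGBA(v) {
    const enc = bitEnc.map(s => fract(s * v));
    // enc -= enc.yzww * vec2(1./255., 0.).xxxy in the GLSL above
    return [
        enc[0] - enc[1] / 255,
        enc[1] - enc[2] / 255,
        enc[2] - enc[3] / 255,
        enc[3],
    ];
}

function decodeFloatRGBA(rgba) {
    return rgba.reduce((sum, c, i) => sum + c / bitEnc[i], 0);
}

const v = 0.3141592;
console.log(Math.abs(decodeFloatRGBA(encodeFloatRGBA(v)) - v) < 1e-9); // true
```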