How do you pack one 32-bit int into 4 8-bit ints in GLSL / WebGL?

Submitted by 眉间皱痕 on 2019-12-28 01:56:14

Question


I'm looking to parallelize some complex math, and WebGL looks like the perfect way to do it. The problem is, you can only read 8-bit integers from textures. I would ideally like to get 32-bit numbers out of the texture, so I had the idea of using the 4 color channels of one pixel to hold a single 32-bit value instead of 4 separate 8-bit values.

My problem is, GLSL doesn't have a "%" operator or any bitwise operators!

TL;DR: How do I convert a 32-bit number into 4 8-bit numbers using only the operators available in GLSL?

Some extra info on the technique (using bitwise operators):

How to store a 64 bit integer in two 32 bit integers and convert back again


Answer 1:


You can bitshift by multiplying/dividing by powers of two.
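For illustration, here is a minimal sketch (not from the original answer; the helper names are made up) of how a right shift and a modulo can be emulated on a non-negative integer stored in a float:

// Illustrative helpers only: emulate v >> bits and a % b for non-negative
// integer values stored in floats, using floor() and division by powers of two.
float shiftRight(float v, float bits) {
    return floor(v / exp2(bits));
}
float modulo(float a, float b) {
    return a - b * floor(a / b);   // equivalent to the built-in mod(a, b)
}
// e.g. the lowest byte of a value: modulo(v, 256.0)
// and the next byte:               modulo(shiftRight(v, 8.0), 256.0)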

As pointed out in the comments, the approach I originally posted worked but was incorrect. Here is one by Aras Pranckevičius instead. Note that the source code in his post is HLSL and contains a typo; this is a GLSL port with the typo corrected:

const vec4 bitEnc = vec4(1.,255.,65025.,16581375.);
const vec4 bitDec = 1./bitEnc;
vec4 EncodeFloatRGBA (float v) {
    vec4 enc = bitEnc * v;
    enc = fract(enc);
    enc -= enc.yzww * vec2(1./255., 0.).xxxy;
    return enc;
}
float DecodeFloatRGBA (vec4 v) {
    return dot(v, bitDec);
}
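A possible way to use this pair (a sketch, not from the original answer; the uniform and varying names are placeholders) is to write the encoded value straight to the framebuffer in one pass and decode it from the sampled texel in a later pass:

// Sketch only: u_value is a hypothetical uniform holding a value in [0, 1).
uniform float u_value;

void main() {
    gl_FragColor = EncodeFloatRGBA(u_value);   // one 8-bit chunk per channel
}

// In a later pass (or after gl.readPixels on the CPU side):
//   float v = DecodeFloatRGBA(texture2D(u_packedTex, v_texCoord));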



Answer 2:


In general, if you want to pack the significant digits of a floating-point number into bytes, you have to consecutively extract 8-bit packages of the significant digits and store each package in one byte.

Encode a floating point number in a predefined range

In order to pack a floating-point value into 4 * 8-bit buffers, the range of the source values must first be specified.
If you have defined a value range [minVal, maxVal], it has to be mapped to the range [0.0, 1.0]:

float mapVal = clamp((value-minVal)/(maxVal-minVal), 0.0, 1.0);

The function Encode packs a floating point value in the range [0.0, 1.0] into a vec4:

vec4 Encode( in float value )
{
    value *= (256.0*256.0*256.0 - 1.0) / (256.0*256.0*256.0);
    vec4 encode = fract( value * vec4(1.0, 256.0, 256.0*256.0, 256.0*256.0*256.0) );
    return vec4( encode.xyz - encode.yzw / 256.0, encode.w ) + 1.0/512.0;
}

The function Decode extracts a floating point value in the range [0.0, 1.0] from a vec4:

float Decode( in vec4 pack )
{
    float value = dot( pack, 1.0 / vec4(1.0, 256.0, 256.0*256.0, 256.0*256.0*256.0) );
    return value * (256.0*256.0*256.0) / (256.0*256.0*256.0 - 1.0);
}

The following functions pack and extract a floating-point value into and from the range [minVal, maxVal]:

vec4 EncodeRange( in float value, in float minVal, in float maxVal )
{
    value = clamp( (value-minVal) / (maxVal-minVal), 0.0, 1.0 );
    value *= (256.0*256.0*256.0 - 1.0) / (256.0*256.0*256.0);
    vec4 encode = fract( value * vec4(1.0, 256.0, 256.0*256.0, 256.0*256.0*256.0) );
    return vec4( encode.xyz - encode.yzw / 256.0, encode.w ) + 1.0/512.0;
}

float DecodeRange( in vec4 pack, in float minVal, in float maxVal )
{
    float value = dot( pack, 1.0 / vec4(1.0, 256.0, 256.0*256.0, 256.0*256.0*256.0) );
    value *= (256.0*256.0*256.0) / (256.0*256.0*256.0 - 1.0);
    return mix( minVal, maxVal, value );
}
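For example, a value with a known range, say a height between -100.0 and 100.0, could be packed and later restored like this (a sketch; u_height and u_packedTex are hypothetical names, not from the original answer):

// Sketch only: pack a value from a known range into an RGBA8 render target.
uniform float u_height;    // assumed to lie in [-100.0, 100.0]

void main() {
    gl_FragColor = EncodeRange(u_height, -100.0, 100.0);
}

// Reading it back in a later pass:
//   float height = DecodeRange(texture2D(u_packedTex, v_texCoord), -100.0, 100.0);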

Encode a floating point number with an exponent

Another possibility is to encode the significant digits into the 3 * 8 bits of the RGB channels and the exponent into the 8 bits of the alpha channel:

vec4 EncodeExp( in float value )
{
    int exponent  = int( log2( abs( value ) ) + 1.0 );
    value        /= exp2( float( exponent ) );
    value         = (value + 1.0) * (256.0*256.0*256.0 - 1.0) / (2.0*256.0*256.0*256.0);
    vec4 encode   = fract( value * vec4(1.0, 256.0, 256.0*256.0, 256.0*256.0*256.0) );
    return vec4( encode.xyz - encode.yzw / 256.0 + 1.0/512.0, (float(exponent) + 127.5) / 256.0 );
}

float DecodeExp( in vec4 pack )
{
    int exponent = int( pack.w * 256.0 - 127.0 );
    float value  = dot( pack.xyz, 1.0 / vec3(1.0, 256.0, 256.0*256.0) );
    value        = value * (2.0*256.0*256.0*256.0) / (256.0*256.0*256.0 - 1.0) - 1.0;
    return value * exp2( float(exponent) );
}
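This variant needs no predefined range, so it can handle values of arbitrary magnitude at the cost of one channel for the exponent. A minimal round-trip sketch (u_result and u_packedTex are hypothetical names; note that log2(0.0) is undefined, so a zero value would need special handling):

// Sketch only: pack an arbitrary finite, non-zero result into RGBA8.
uniform float u_result;

void main() {
    gl_FragColor = EncodeExp(u_result);   // RGB = significant digits, A = exponent
}

// Later:
//   float value = DecodeExp(texture2D(u_packedTex, v_texCoord));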

Note, since a standard 32-bit IEEE 754 number has only 24 significant bits, it is completely sufficient to encode the number in 3 bytes.

See also the answers to the following questions:

  • How do I convert between float and vec4,vec3,vec2?
  • OpenGL ES write depth data to color
  • Encode floating point grayscale in RGB color within an H264 and reread depth value in unity
  • convert color to world space coords (GLSL ES)



Answer 3:


Everyone is absolutely correct about how to handle something like this in WebGL, but I wanted to share a trick for getting the values in and out.

Assuming you want to do some comparison on two values that fit in 16 bits:

let data16bit = new Uint16Array(1000000);
for(let i = 0; i < data16bit.length; i += 2){
    data16bit[i]   = Math.random()*(2**16);   // truncated to 16 bits on assignment
    data16bit[i+1] = Math.random()*(2**16);
}
// Reinterpret the same buffer as bytes, ready to upload as an RGBA/UNSIGNED_BYTE texture
let texture = new Uint8Array(data16bit.buffer);

Now when you get your values in your fragment shader, you can pick up the numbers for manipulation:

vec4 here = texture2D(u_image, v_texCoord);
vec2 a = here.rg;
vec2 b = here.ba;
if(a == b){
    here.a = 1.0;
}
else{
    here.a = 0.0;
}
gl_FragColor = here;

The point is just a reminder that you can reinterpret the same JavaScript memory at different element sizes (with typed-array views over one buffer), rather than trying to do the bit shifting and splitting yourself.



Source: https://stackoverflow.com/questions/18453302/how-do-you-pack-one-32bit-int-into-4-8bit-ints-in-glsl-webgl
