How do I convert a vec4 rgba value to a float?

Asked by 遇见更好的自我 · 2020-12-03 15:29

I packed some float data in a texture as an unsigned_byte (my only option in WebGL). Now I would like to unpack it in the vertex shader. When I sample a pixel I get a vec4, which I then need to convert back to a float.

5 Answers
  •  谎友^ · 2020-12-03 16:06

    Twerdster posted some excellent code in his answer, so all credit goes to him. I am posting this new answer since comments don't allow for nicely syntax-highlighted code blocks, and I wanted to share some code. If you like the code, please upvote Twerdster's original answer.

    In his previous post, Twerdster mentioned that the decode and encode might not work for all values.

    To further test this and validate the result, I made a Java program. While porting the code I tried to stay as close as possible to the shader code (therefore I implemented some helper functions). Note: I also use a store/load function to simulate what happens when you write to and read from a texture (see storeAndLoad in the code below).

    I found out that:

    1. You need a special case for zero.
    2. You might also need a special case for infinity, but I did not implement that to keep the shader simple (i.e. fast).
    3. Because of rounding errors the result was sometimes wrong, therefore:
      • Subtract 1 from the exponent when, due to rounding, the mantissa is not properly normalized (i.e. mantissa < 1).
      • Change float Mantissa = (exp2(- Exponent) * F); to float Mantissa = F/exp2(Exponent); to reduce precision errors.
      • Use float Exponent = floor(log2(F)); to calculate the exponent (simplified by the new mantissa check).

    Using these small modifications I got identical output for almost all inputs, and only small errors between the original and the encoded/decoded value when things do go wrong, while in Twerdster's original implementation rounding errors often resulted in the wrong exponent (making the result off by a factor of two).
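    To make the rounding problem concrete: for some inputs a float-precision log2 lands exactly on an integer even though the true logarithm is just below it, so floor() returns an exponent that is one too high. A small sketch of this (the value 2^64 is just a convenient example, not a special case in the code):

    // For the largest float below 2^64, a float-precision log2 rounds up
    // to exactly 64.0, so the computed exponent is one too high and the
    // mantissa comes out below 1. encode32 corrects this by subtracting
    // 1 from the exponent.
    float F = Math.nextDown((float) Math.pow(2, 64));  // 2^64 * (1 - 2^-24)
    float Exponent = (float) Math.floor((float) (Math.log(F) / Math.log(2))); // 64.0
    float Mantissa = F / (float) Math.pow(2, Exponent);  // 0.99999994 < 1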

    Please note that this is a Java test application which I wrote to test the algorithm. I hope it will also work when ported to the GPU. If anybody tries to run it on a GPU, please leave a comment with your experience.

    And here is the code, with a simple test that tries different numbers until it fails:

    import java.io.PrintStream;
    import java.util.Random;
    
    public class BitPacking {
    
        public static float decode32(float[] v)
        {
            float[] rgba = mult(255, v);
            float sign = 1.0f - step(128.0f, rgba[0]) * 2.0f;
            float exponent = 2.0f * mod(rgba[0], 128.0f) + step(128.0f, rgba[1]) - 127.0f;
            if (exponent == -127)  // zero is flagged by an all-zero exponent
                return 0;
            float mantissa = mod(rgba[1], 128.0f) * 65536.0f + rgba[2] * 256.0f + rgba[3] + ((float) 0x800000);
            return sign * exp2(exponent - 23.0f) * mantissa;
        }
    
        public static float[] encode32(float f) {
            float F = abs(f);
            if (F == 0) {
                return new float[]{0, 0, 0, 0};  // special case for zero
            }
            float Sign = step(0.0f, -f);
            float Exponent = floor(log2(F));
            float Mantissa = F / exp2(Exponent);
            // rounding in log2 can make the exponent one too high; renormalize
            if (Mantissa < 1)
                Exponent -= 1;
            Exponent += 127;
    
            float[] rgba = new float[4];
            rgba[0] = 128.0f * Sign + floor(Exponent * exp2(-1.0f));
            rgba[1] = 128.0f * mod(Exponent, 2.0f) + mod(floor(Mantissa * 128.0f), 128.0f);
            rgba[2] = floor(mod(floor(Mantissa * exp2(23.0f - 8.0f)), exp2(8.0f)));
            rgba[3] = floor(exp2(23.0f) * mod(Mantissa, exp2(-15.0f)));
            return mult(1 / 255.0f, rgba);
        }
    
        // shader built-ins
    
        public static float exp2(float x) { return (float) Math.pow(2, x); }
        public static float log2(float x) { return (float) (Math.log(x) / Math.log(2)); }
        public static float floor(float x) { return (float) Math.floor(x); }
        public static float abs(float x) { return Math.abs(x); }
        public static float mod(float x, float y) { return x - y * (float) Math.floor(x / y); }
        public static float step(float edge, float x) { return x < edge ? 0.0f : 1.0f; }
    
        public static float[] step(float edge, float[] x) {
            float[] result = new float[x.length];
            for (int i = 0; i < x.length; i++)
                result[i] = step(edge, x[i]);
            return result;
        }
    
        public static float[] mult(float a, float[] v) {
            float[] result = new float[v.length];
            for (int i = 0; i < v.length; i++)
                result[i] = a * v[i];
            return result;
        }
    
        // simulate a write/read round trip through an RGBA8 texture:
        // each channel is quantized to 8 bits (assumes round-to-nearest)
        public static float[] storeAndLoad(float[] v) {
            float[] result = new float[v.length];
            for (int i = 0; i < v.length; i++)
                result[i] = floor(v[i] * 255.0f + 0.5f) / 255.0f;
            return result;
        }
    
        // simple test: feed random floats through the codec until it fails
        public static void main(String[] args) {
            PrintStream out = System.out;
            Random random = new Random();
            for (long i = 0; ; i++) {
                float f = Float.intBitsToFloat(random.nextInt());
                if (Float.isNaN(f) || Float.isInfinite(f))
                    continue; // NaN and infinity are not handled
                if (f != 0 && abs(f) < Float.MIN_NORMAL)
                    continue; // denormals decode to zero, skip them too
                float decoded = decode32(storeAndLoad(encode32(f)));
                if (decoded != f) {
                    out.println("Failed after " + i + " values: " + f + " != " + decoded);
                    break;
                }
            }
        }
    }
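    Since the encoding follows the standard IEEE 754 single-precision layout, you can also sanity-check the packed bytes against Java's own Float.floatToIntBits. A minimal sketch (the class name BitPackingDemo is just for illustration):

    public class BitPackingDemo {
        public static void main(String[] args) {
            float f = 1.0f;
            // scale the encoded [0,1] channels back up to byte values
            float[] rgba = BitPacking.mult(255, BitPacking.encode32(f));
            int bits = Float.floatToIntBits(f); // 0x3F800000 for 1.0f
            System.out.printf("packed:   [%.0f, %.0f, %.0f, %.0f]%n",
                    rgba[0], rgba[1], rgba[2], rgba[3]);
            System.out.printf("expected: [%d, %d, %d, %d]%n",
                    (bits >>> 24) & 0xFF, (bits >>> 16) & 0xFF,
                    (bits >>> 8) & 0xFF, bits & 0xFF);
            // round trip through the simulated texture should give 1.0 back
            System.out.println("decoded:  "
                    + BitPacking.decode32(BitPacking.storeAndLoad(BitPacking.encode32(f))));
        }
    }

    For 1.0f both lines print [63, 128, 0, 0]: sign bit 0 and biased exponent 127 split across the first two bytes, mantissa bits all zero.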
