Sign extension from 16 to 32 bits in C

Submitted by 心已入冬 on 2019-12-07 03:46:09

Question


I have to do a sign extension for a 16-bit integer, and for some reason it does not seem to be working properly. Could anyone please tell me where the bug is in the code? I've been working on it for hours.

int signExtension(int instr) {
    int value = (0x0000FFFF & instr);
    int mask = 0x00008000;
    int sign = (mask & instr) >> 15;
    if (sign == 1)
        value += 0xFFFF0000;
    return value;
}

The instruction (instr) is 32 bits, and inside it I have a 16-bit number.


Answer 1:


Try:

int signExtension(int instr) {
    int value = (0x0000FFFF & instr);
    int mask = 0x00008000;
    if (mask & instr) {
        value += 0xFFFF0000;
    }
    return value;
}



Answer 2:


What is wrong with:

int16_t s = -890;
int32_t i = s;  //this does the job, doesn't it?
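
For a quick check, a minimal complete program along these lines (a sketch; the printf call and test value are just illustrative) prints -890:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    int16_t s = -890;
    int32_t i = s;            /* the implicit conversion sign-extends */
    printf("%d\n", (int)i);   /* prints -890 */
    return 0;
}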



Answer 3:


What's wrong with using the built-in types?

int32_t signExtension(int32_t instr) {
    int16_t value = (int16_t)instr;
    return (int32_t)value;
}

or better yet (this might generate a warning if passed an int32_t):

int32_t signExtension(int16_t instr) {
    return (int32_t)instr;
}

or, for that matter, simply replace signExtension(value) with ((int32_t)(int16_t)value).

You obviously need to include <stdint.h> for the int16_t and int32_t data types.
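
Put together as a complete program, this approach might look like the sketch below (the 0x0000FC86 test word is just an illustration: its low 16 bits hold -890):

#include <stdint.h>
#include <stdio.h>

int32_t signExtension(int32_t instr) {
    /* truncating to int16_t keeps the low 16 bits; widening back to int32_t
       sign-extends them (the conversion of an out-of-range value to int16_t
       is implementation-defined, but wraps as expected on mainstream compilers) */
    return (int32_t)(int16_t)instr;
}

int main(void) {
    int32_t instr = 0x0000FC86;                 /* low 16 bits hold -890 */
    printf("%d\n", (int)signExtension(instr));  /* prints -890 */
    return 0;
}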




Answer 4:


Just bumped into this while looking for something else; maybe a bit late, but it may be useful for someone else. As far as I'm concerned, all C programmers should start off by programming in assembler.

Anyway, sign extension is much simpler than the proposals above: just make sure you are using a signed variable and then use two shifts.

long value;   // assumed 32-bit storage (on platforms where long is 64 bits, use int32_t)
value = 0xffff; // 16-bit 2's complement -1; value is now 0x0000ffff
value = (value << 16) >> 16; // value is now 0xffffffff, i.e. -1

If the variable is signed, the compiler implements >> as an arithmetic shift right, which preserves the sign. Strictly speaking, right-shifting a negative signed value is implementation-defined in C rather than guaranteed, but mainstream compilers behave this way.

So, assuming for example that value starts off as 0x1ff (a 9-bit two's-complement -1 held in a 16-bit variable), << 7 will SL (shift left) it so the value is now 0xff80, and >> 7 will ASR (arithmetic shift right) it so the value is now 0xffff.
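
Applied to the question's 16-to-32-bit case with fixed-width types, the same two-shift idea might look like the sketch below (it does the left shift in unsigned arithmetic to avoid signed-overflow problems, and it still relies on >> being an arithmetic shift for signed types):

#include <stdint.h>

int32_t signExtension(int32_t instr) {
    uint32_t low = (uint32_t)instr & 0xFFFFu;  /* keep only the low 16 bits */
    /* shift left as unsigned, convert back to signed, then shift right;
       the conversion and the right shift of a negative value are
       implementation-defined in C, but behave as expected on mainstream compilers */
    return (int32_t)(low << 16) >> 16;
}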

If you really want to have fun with macros, try something like this (the syntax works in GCC; I haven't tried it in MSVC).

#include <stdio.h>

#define INT8 signed char
#define INT16 signed short
#define INT32 signed long
#define INT64 signed long long
/* Sign-extend a `from`-bit value to the `to`-bit signed type: shift left, then arithmetic shift right. */
#define SIGN_EXTEND(to, from, value) ((INT##to)((INT##to)(((INT##to)value) << (to - from)) >> (to - from)))

int main(int argc, char *argv[], char *envp[])
{
    INT16 value16 = 0x10f;
    INT32 value32 = 0x10f;
    printf("SIGN_EXTEND(8,3,6)=%i\n", SIGN_EXTEND(8,3,6));
    printf("LITERAL         SIGN_EXTEND(16,9,0x10f)=%i\n", SIGN_EXTEND(16,9,0x10f));
    printf("16 BIT VARIABLE SIGN_EXTEND(16,9,0x10f)=%i\n", SIGN_EXTEND(16,9,value16));
    printf("32 BIT VARIABLE SIGN_EXTEND(16,9,0x10f)=%i\n", SIGN_EXTEND(16,9,value32));

    return 0;
}

This produces the following output:

SIGN_EXTEND(8,3,6)=-2
LITERAL         SIGN_EXTEND(16,9,0x10f)=-241
16 BIT VARIABLE SIGN_EXTEND(16,9,0x10f)=-241
32 BIT VARIABLE SIGN_EXTEND(16,9,0x10f)=-241



Answer 5:


Others have pointed out casting, and a left shift followed by an arithmetic right shift. Here is another way that requires no branching:

(0xffff & n ^ 0x8000) - 0x8000

If the upper 16 bits are already zeroes:

(n ^ 0x8000) - 0x8000

• Community wiki, since the idea comes from "The Aggregate Magic Algorithms, Sign Extension".
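
Wrapped in a function, a minimal sketch might look like this (the name sign_extend16 is just illustrative); all intermediate values fit comfortably in int32_t, so nothing overflows:

#include <stdint.h>

static int32_t sign_extend16(int32_t n) {
    /* mask the low 16 bits, flip the 16-bit sign bit, then subtract it back out */
    return ((n & 0xFFFF) ^ 0x8000) - 0x8000;
}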



Source: https://stackoverflow.com/questions/6215256/sign-extension-from-16-to-32-bits-in-c
