C: avoiding overflows when working with big numbers


This is compiler-specific, but if you're using gcc then you can compile with -ftrapv so that signed integer overflow aborts the program (raising SIGABRT).

For example:

/* compile with gcc -ftrapv <filename> */
#include <signal.h>
#include <stdio.h>
#include <limits.h>

void signalHandler(int sig) {
  printf("Overflow detected\n");
}

int main() {
  signal(SIGABRT, &signalHandler);

  int largeInt = INT_MAX;
  int normalInt = 42;
  int overflowInt = largeInt + normalInt;  /* should cause overflow */

  /* if compiling with -ftrapv, we shouldn't get here */
  return 0;
}

When I run this code locally, the output is

Overflow detected
Aborted
Matt Joiner

Take a look at -ftrapv and -fwrapv:

-ftrapv

This option generates traps for signed overflow on addition, subtraction, and multiplication operations.

-fwrapv

This option instructs the compiler to assume that signed arithmetic overflow of addition, subtraction and multiplication wraps around using two's-complement representation. This flag enables some optimizations and disables others. This option is enabled by default for the Java front-end, as required by the Java language specification.
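
For contrast, here is a minimal sketch (assuming gcc on a two's-complement target) of what -fwrapv guarantees: the overflowing addition wraps around to INT_MIN instead of being undefined behaviour.

/* compile with gcc -fwrapv <filename> */
#include <stdio.h>
#include <limits.h>

int main(void) {
  int x = INT_MAX;
  int wrapped = x + 1;  /* defined to wrap to INT_MIN with -fwrapv;
                           undefined behaviour without it */
  printf("%d + 1 = %d\n", x, wrapped);
  return 0;
}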

See also Integer overflow in C: standards and compilers, and Useful GCC flags for C.

If you are using Microsoft's compiler, there are options to generate code that triggers an SEH exception when an integer conversion cuts off non-zero bits. In places where the truncation is actually intended, use a bitwise AND to remove the upper bits before doing the conversion.
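
As an illustration (plain C, not tied to any particular Microsoft compiler option), masking first makes the truncation explicit, so a truncation check has nothing to complain about:

#include <stdint.h>
#include <stdio.h>

int main(void) {
  uint32_t wide = 0x12345678u;
  /* Keep only the low 16 bits before narrowing, so the conversion
     cannot silently discard non-zero bits. */
  uint16_t narrow = (uint16_t)(wide & 0xFFFFu);
  printf("narrowed value: 0x%04X\n", (unsigned)narrow);
  return 0;
}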

Clang now supports dynamic overflow checks for both signed and unsigned integers; see the -fsanitize=integer switch. For now it is the only C/C++ compiler with full dynamic overflow checking for debugging purposes.
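
A minimal sketch of how this can be used (the exact diagnostic text shown in the comment is only illustrative):

/* compile with: clang -fsanitize=integer <filename> */
#include <limits.h>

int main(void) {
  int x = INT_MAX;
  int y = x + 1;  /* the sanitizer reports a runtime error here, along the
                     lines of "signed integer overflow: 2147483647 + 1
                     cannot be represented in type 'int'" */
  (void)y;
  return 0;
}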

The only sure-fire way is to wrap operations on those integers in functions that perform bounds checking. This will of course slow down integer operations, but if your code asserts or halts on a bounds violation with a meaningful error message, that will go a long way towards helping you identify where the problem is.
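
A rough sketch of such a wrapper (the name checked_add is mine, and this only covers int addition; the other operations would need similar checks):

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

/* Add two ints, aborting with a message if the result would overflow. */
static int checked_add(int a, int b) {
  if ((b > 0 && a > INT_MAX - b) ||
      (b < 0 && a < INT_MIN - b)) {
    fprintf(stderr, "checked_add: overflow on %d + %d\n", a, b);
    abort();
  }
  return a + b;
}

int main(void) {
  printf("%d\n", checked_add(40, 2));       /* fine */
  printf("%d\n", checked_add(INT_MAX, 1));  /* aborts with a message */
  return 0;
}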

As for your particular issue, keep in mind that general-case sorting is O(n log n), so the reason the algorithm is taking much longer could be that the increase in time is not linear with respect to the data-set size. Since you also didn't mention how much physical memory is in the box and how much of it your algorithm uses, the larger data set could be causing page faults to disk, potentially slowing things to a crawl.
