Is it possible in C to always make a segfault at 1 over the array size?

别等时光非礼了梦想 · Submitted 2019-12-11 14:13:24

Question


Suppose you allocate an array arr of size n as follows:

int arr[n]; // Allocated fine
printf("%d", arr[n]); // Segfault if possible

Does there exist a number n such that I can always trigger a segfault on the printf line? This could be specific to some OS.

I know it's undefined behavior, and I know that accessing and modifying memory out of bounds will affect another area of memory, which will (likely) cause me major problems later on.

My professor said that it will not always segfault, and I'm curious if there's any way to create an array of some size, in some situation, on some type of OS or computer, that will reliably segfault every time.

Is this possible or not? Is there some condition I can create that will cause a single out-of-bounds access to always trigger a segfault?

Is it theoretically possible for this to always hold, but it just won't happen in practice all the time?


Answer 1:


In the general case, as Ben notes, it is undefined behaviour. The general answer is "don't ever rely on undefined behaviour; its effects are never deterministic".

There are, however, two sure-fire ways to cause this on specific, modern, run-of-the-mill systems, which cover a large cross-section of modern PCs, but neither is portable across all compilers, architectures, operating systems, etc.

  1. Create an array aligned to a page boundary of the stack and try accessing element arr[-1], or align it to the other extreme. This is not guaranteed, but very likely to fault, since the OS won't let you access protected memory; likewise, if you're writing to a read-only (RODATA) segment, that's that.
  2. On Linux, compile your code with -fstack-protector-strong and watch it deliberately crash when you smash the stack. It's a good idea to enable this on test builds of your software during code-coverage tests: better to crash in the testing phase and fix it than to deploy it and have it crash in production.



Answer 2:


No. An out-of-bounds access is undefined behavior, and UB means anything can happen. On a typical system you can usually find a way to cause a segfault consistently, but it is in no way guaranteed. If you change something else in the code, the binary may shift, changing your stack layout and changing the result of the program.

As an example, on a PowerPC 5200 (not a great MMU) running RTEMS 4.9.2, the following code does not create a segfault:

int arr[5];
arr[6] = 10;

In fact, even this doesn't create a segfault:

int *p = 0;
while (1)
   *(p--) = 666;

Really, undefined means Undefined.

To attempt it in a print statement, you can do things like:

printf("%d", arr[n]);  // out-of-bounds access
printf("%f", arr[n]);  // wrong type for the format specifier

But I will reiterate: while this might segfault for you repeatably in a specific circumstance, it is in no way guaranteed to always happen that way.

To reliably stop a POSIX system with a SIGSEGV, your best bet is to raise it yourself:

raise(SIGSEGV);

More information about forcing a SIGSEGV signal can be found here: How to programmatically cause a core dump in C/C++

and here:

C++ Creating a SIGSEGV for debug purposes




Answer 3:


Caveat: This uses an array obtained from malloc, so technically it's not quite the same.

But, this will add a "guard" page/area at the end, which always causes a segfault.

I've often used this to debug "off-by-one" array indexing. I've found it to be so useful, that I've added it as part of a malloc wrapper in my production code.

So, if the intent is to come up with something that debugs a real problem, this may help:

// segvforce -- force a segfault on going over bounds

#include <stdio.h>
#include <fcntl.h>
#include <errno.h>
#include <string.h>
#include <stdlib.h>
#include <sys/mman.h>

#ifndef PAGESIZE
#define PAGESIZE        4096
#endif

// number of guard pages that follow
// NOTE: for simple increments, one is usually sufficient, but for larger
// increments or more complex indexing, choose a larger value
#ifndef GUARDCNT
#define GUARDCNT        1
#endif

#define GUARDSIZE       (PAGESIZE * GUARDCNT)

// crash_alloc -- allocate for overbound protection
void *
crash_alloc(size_t curlen)
{
    size_t pagelen;
    char *base;
    char *endp;
    int err;

    pagelen = curlen;

    // align up to page size
    pagelen += PAGESIZE - 1;
    pagelen /= PAGESIZE;
    pagelen *= PAGESIZE;

    // add space for guard pages
    pagelen += GUARDSIZE * 2;

    base = NULL;
    err = posix_memalign((void **) &base,PAGESIZE,pagelen);
    if (err != 0) {
        fprintf(stderr,"posix_memalign: %s\n",strerror(err));
        exit(1);
    }
    printf("base: %p\n",(void *) base);

    // point to end of area (char * so the pointer arithmetic is portable C)
    endp = base + pagelen;
    printf("endp: %p\n",(void *) endp);

    // back up to guard page and protect it
    endp -= GUARDSIZE;
    printf("prot: %p\n",(void *) endp);
    if (mprotect(endp,GUARDSIZE,PROT_NONE) != 0) {
        perror("mprotect");
        exit(1);
    }

    // point to area for caller -- one past the end abuts the guard page
    endp -= curlen;
    printf("fini: %p\n",(void *) endp);

    return endp;
}

// main -- main program
int
main(int argc,char **argv)
{
    int n;
    int *arr;
    int idx;
    int val;

    n = 3;
    arr = crash_alloc(sizeof(int) * n);

    val = 0;
    // the final iteration (idx == n) touches the guard page and segfaults
    for (idx = 0;  idx <= n;  ++idx) {
        printf("try: %d\n",idx);
        val += arr[idx];
    }

    printf("finish\n");

    return val;
}



Answer 4:


As others have noted, you cannot ensure a segmentation fault in the general case; with an elaborate allocation method, you can only make it more systematic on some systems.

There is a better way to debug your code and detect this kind of error: a very effective tool exists for exactly this, valgrind. Check whether it is available for your environment.



Source: https://stackoverflow.com/questions/34888974/is-it-possible-in-c-to-always-make-a-segfault-at-1-over-the-array-size
