What if malloc fails?

轮回少年 2020-12-09 08:57

If a malloc allocation fails, should we try it again?

In something like this:

char* mystrdup(const char *s)
{
    char *ab = NULL;

    while (ab == NULL) {
        ab = (char*) malloc(strlen(s));
    }

    strcpy(ab, s);
    return ab;
}
10 Answers
  • 2020-12-09 09:32

    You are not allocating the memory correctly. malloc() takes the number of bytes to reserve, and strlen(s) does not count the terminating '\0', so strcpy() will write one byte past the end of the buffer. Allocate strlen(s) + 1 bytes instead. If malloc() still fails, it returns NULL, which means there is not enough free memory on the heap to satisfy the request. To run your code properly, try this:

    char* mystrdup(const char *s)
    {
        char *ab = NULL;

        while (ab == NULL) {
            ab = malloc(strlen(s) + 1);  /* +1 for the terminating '\0' */
        }

        strcpy(ab, s);
        return ab;
    }
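    For completeness, the opposite of retrying in a loop is to behave like the standard strdup() and hand the NULL back to the caller. A minimal sketch (the `_checked` name is mine, not from the answer):

```c
#include <stdlib.h>
#include <string.h>

/* Like strdup(): returns NULL on allocation failure instead of
 * retrying, leaving the decision to the caller. */
char *mystrdup_checked(const char *s)
{
    size_t n = strlen(s) + 1;     /* +1 for the terminating '\0' */
    char *ab = malloc(n);
    if (ab != NULL)
        memcpy(ab, s, n);
    return ab;
}
```

    Callers then check the result once, the same way they would for strdup() itself.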

  • 2020-12-09 09:35

    It depends on what your software is for and which part of your code is affected.

    First, know that malloc() can fail when no pages are available. If your application has reached its own limit, no amount of looping will help; but if the system as a whole temporarily ran out of memory, a retry may be worth it, as long as you avoid infinite loops. It is perfectly normal for the operating system to respond for a while that it cannot allocate more RAM, and you can handle that however you see fit.

    This is a very good question, by the way.

    A similar problem is not catching signals: when a multi-threaded or asynchronous TCP server gets an aborted client connection, the software is terminated by SIGPIPE. This is normal, but it ends your program when it should not. To prevent it, you have to hook the signals.
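    The signal hook mentioned above can be as small as ignoring SIGPIPE at startup; a POSIX sketch (the function name is illustrative):

```c
#include <signal.h>

/* Call once at startup: a write() to a closed socket then fails with
 * errno == EPIPE instead of the process dying from SIGPIPE. */
void install_signal_handlers(void)
{
    signal(SIGPIPE, SIG_IGN);
}
```

    Production servers often prefer sigaction() for the same job, since its semantics are better specified across platforms.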

    Some real-world examples of my own:

    When malloc fails and only affects part of my code

    I use malloc() (or new[]) when a new connection sends data, and store the received bytes in buffers. If malloc() or realloc() fails, the function returns false and frees the buffer, and the connection is dropped with an error (linger); the software continues to run, but that one connection is lost. I think that is the right approach here.
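    The per-connection pattern described above might look roughly like this; `buffer_t` and `buffer_append` are illustrative names, not taken from the answer:

```c
#include <stdlib.h>
#include <string.h>

typedef struct {
    char  *data;
    size_t len;
    size_t cap;
} buffer_t;

/* Append received bytes to a growable buffer. On allocation failure,
 * free the buffer and report failure, so the caller can drop just
 * this one connection while the rest of the server keeps running. */
int buffer_append(buffer_t *buf, const char *src, size_t n)
{
    if (buf->len + n > buf->cap) {
        size_t new_cap = buf->cap ? buf->cap * 2 : 64;
        while (new_cap < buf->len + n)
            new_cap *= 2;
        char *p = realloc(buf->data, new_cap);
        if (p == NULL) {            /* allocation failed */
            free(buf->data);        /* release what we had */
            buf->data = NULL;
            buf->len = buf->cap = 0;
            return 0;               /* caller drops the connection */
        }
        buf->data = p;
        buf->cap = new_cap;
    }
    memcpy(buf->data + buf->len, src, n);
    buf->len += n;
    return 1;
}
```

    Note that realloc() leaves the original block valid on failure, which is why the sketch frees it explicitly before reporting the error.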

    When a malloc failure must abort the software

    I use malloc() to make space for critical data (arrays and structures that define the core), usually at the beginning of the program, in the init section. If malloc() fails there, the software must abort and exit with an error code, because every later operation depends on those tables being filled with data. (Built-in filesystem.)
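    This abort-on-failure init pattern is often wrapped in a small helper; a common sketch (the `xmalloc` name is a widespread convention, not from the answer):

```c
#include <stdio.h>
#include <stdlib.h>

/* Allocate or die: suitable for init-time structures the whole
 * program depends on, where continuing without them is pointless. */
void *xmalloc(size_t n)
{
    void *p = malloc(n);
    if (p == NULL) {
        fprintf(stderr, "fatal: out of memory (%zu bytes)\n", n);
        exit(EXIT_FAILURE);
    }
    return p;
}
```

    Callers never check the result, which keeps init code short; the trade-off is that xmalloc() must only be used where aborting really is acceptable.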

    When a malloc failure can be retried

    I have a datalogger, an industry-grade, high-availability piece of software. If malloc() fails, I take a mutex_lock() that freezes the backend side and retry the malloc procedure for X seconds. If malloc() keeps failing, the software starts calling destructors on all threads and performs a complete shutdown, except for the thread whose malloc() failed. At that point there are two options: either malloc() finally succeeds, the call stack finishes, and the last thread exits, or it performs a rollback and the last thread exits.

    Whenever this happens, the software does not simply quit either: it tries to start again from the beginning, and so on.

    Maybe worth mentioning...

    I had exactly this dilemma years ago. Something caused a memory leak in my software and ate all of my RAM, but to save the current state I had to write it out, which involved many malloc() calls. The solution: when this happened, I closed all threads, called the destructors, and saved the data. The interesting part was that after I had closed every connection and freed every socket and SSL_CTX, memory consumption dropped to 128 KB. After days of happy debugging I figured out that SSL_CTX keeps internal storage and caches, so now, whenever no connection is online, I free the SSL_CTX, and it works like a charm.

    SUMMARY

    So, as you can see, this is an art: what you do with malloc() is entirely up to you. There is no book and no standard saying what you should do when malloc() fails. If someone tells you what you should do, it is just their opinion, nothing more.

    My preferred way to avoid infinite loops

    PSEUDOCODE:
    
    var ts = TIME()
    var max_seconds = 5
    var success = true
    
    WHILE (MALLOC() == FAIL) DO
       IF ts + max_seconds < TIME() THEN
          success = false
          BREAK
       END
       SLEEP(100ms)
    END
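    The pseudocode above could translate to C roughly as follows; the function name is mine, and nanosleep() is POSIX:

```c
#include <stdlib.h>
#include <time.h>

/* Retry malloc() for up to max_seconds, sleeping ~100 ms between
 * attempts; returns NULL only after the deadline has passed. */
void *malloc_retry(size_t n, int max_seconds)
{
    time_t ts = time(NULL);
    void *p;

    while ((p = malloc(n)) == NULL) {
        if (ts + max_seconds < time(NULL))
            return NULL;                        /* give up */
        struct timespec delay = {0, 100 * 1000 * 1000}; /* 100 ms */
        nanosleep(&delay, NULL);
    }
    return p;
}
```

    The sleep matters: without it the loop hammers the allocator and gives the system no time to reclaim memory.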
    
    
    
  • 2020-12-09 09:37

    Without arguing about why or when this would be useful: attempts to reallocate in a loop can work, at least on Windows with 64-bit code and default pagefile settings, and it can buy surprisingly much additional virtual memory. Do not do it in an infinite loop, though; use a finite number of retries. As evidence, try the following code, which repeatedly leaks memory in 1 MB blocks. Run it in a Release build, preferably not under a debugger.

    #include <cstdlib>
    #include <iostream>
    
    int main()
    {
        for (int i = 0; i < 10; i++)
        {
            size_t allocated = 0;
            while (true)
            {
                void* p = malloc(1024 * 1024);  // leak 1 MB per iteration
                if (!p)
                    break;
    
                allocated += 1;
            }
    
            // This prints only after malloc has failed.
            std::cout << "Allocated: " << allocated << " MB\n";
            //Sleep(1000);
        }
    }
    

    On my machine with 8 GB of RAM and a system-managed pagefile, I get the following output (built with VS2013 for the x64 target, tested on Windows 7 Pro):

    Allocated: 14075 MB
    Allocated: 16 MB
    Allocated: 2392 MB
    Allocated: 3 MB
    Allocated: 2791 MB
    Allocated: 16 MB
    Allocated: 3172 MB
    Allocated: 16 MB
    Allocated: 3651 MB
    Allocated: 15 MB
    

    I don't know the exact reason for this behavior, but it seems allocations start failing once pagefile resizing cannot keep up with the requests. On my machine the pagefile grew from 8 GB to 20 GB during this loop (and drops back to 8 GB after the program quits).

  • 2020-12-09 09:38

    Try increasing the heap size (the memory set aside for dynamic allocation).
