PHP fwrite() file leads to internal server error


Question


I want to upload large files to my server. Before uploading, I split these files into chunks of at most 1 MB. Once a chunk is uploaded, it gets appended to the file. On my local server everything works well, but if I test this script on my webserver (hosted by Strato -.-), the process quits with an Internal Server Error every time the appended file on the server reaches 64 MB. I think it is caused by a restriction from Strato, maybe something with the memory, but I can't explain to myself why this happens. This is the script (PHP version 5.6):

$file = $_FILES['chunk'];

// append the uploaded chunk to the end of the target file
$server_chunk = fopen($uploadDir.$_POST['file_id'], "ab");
$new_chunk = fopen($file['tmp_name'], "rb");

while (!feof($new_chunk)) // while-loop is optional
{
    fwrite($server_chunk, fread($new_chunk, 1024));
}
fclose($new_chunk);
fclose($server_chunk);

In my opinion there is no line in this code where the file gets loaded into memory and could cause this error. Could something else cause it?

I checked the server logs, but there is no entry when this error happens.

php.ini: (screenshot of the relevant settings in the original post)

I can create multiple 63 MB files; the server only aborts once a file exceeds 64 MB.

UPDATE: I wrote the following script to concatenate the file chunks on the server with cat, but I always get an 8192 B file back. Is something wrong with this script? $command is something like:
/bin/cat ../files/8_0 ../files/8_1 ../files/8_2 ../files/8_3

// build the cat command line from the list of chunks
$command = '/bin/cat';
foreach ($file_array as $file_info)
{
    $command .= ' ../files/'.$file_info['file_id'].'_'.$file_info['server_chunkNumber'];
}
$handle1 = popen($command, "r");
$read = fread($handle1, $_GET['size']); // a pipe yields at most 8192 bytes per call
echo $read;

I checked the result: the bytes in the 8192 B file are exactly the same as the beginning of the original file. So something seems to work...

Update: I found this (presumably the documented behavior that fread() returns at most 8192 bytes per call when reading from a pipe or other non-regular stream).

Update:

$handle1 = popen($command, "r");
while (!feof($handle1))
{
    $read = fread($handle1, 1024);
    echo $read;
}

This works; I can read from the handle piecewise. But of course this way I'm running into timeout limits. How can I pass the file to the client? If this question is answered, all of my problems are gone ;)
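
For reference, a minimal sketch of one common mitigation, assuming the host actually honors set_time_limit() (shared hosts often do not): re-arm the execution timer on every pass through the read loop ($command as above).

    $handle1 = popen($command, "r");
    while (!feof($handle1)) {
        set_time_limit(30);          // re-arm the timer; may be ignored on shared hosting
        echo fread($handle1, 8192);  // a pipe yields at most 8192 bytes per fread() call
        flush();                     // push the chunk out to the client immediately
    }
    pclose($handle1);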


Answer 1:


(see update at bottom)

A memory limit error would look different (but you can write a script that continuously allocates memory by adding large objects to a growing array, and see what happens). Also, the memory limit relates to the PHP core, plus the script, plus its data structures, plus any file content; even if a file being appended to were loaded or counted against the memory limit (maybe through mmap, even if that seems weird), it is unlikely at best that the limit would sit exactly at 64 megabytes.
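
For illustration, a minimal sketch of the test just described (the exact ceiling comes from memory_limit in php.ini): it deliberately exhausts the memory limit so you can see what the resulting fatal error looks like.

    <?php
    // Deliberately run into memory_limit. The failure mode is a fatal
    // "Allowed memory size of N bytes exhausted" error in the PHP log,
    // not a bare Internal Server Error like the one in the question.
    $hog = [];
    while (true) {
        $hog[] = str_repeat('x', 1024 * 1024); // keep appending 1 MB strings
        echo memory_get_usage(true) . " bytes in use\n";
    }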

So this looks to me like a filesystem limitation on the size of a single file. Several cloud filesystems have such limitations, but I know of none on local disk storage. I'd ask the hosting tech support for clues.
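
To test that hypothesis directly, a rough probe sketch (the file name probe.bin is just a placeholder): append fixed-size blocks until the write fails or the server kills the script, and note the size reached.

    <?php
    // Hypothetical probe: grow a file 1 MB at a time and report progress,
    // to see at what size the host cuts us off (expected: around 64 MB).
    $f = fopen("probe.bin", "ab");
    $block = str_repeat("\0", 1024 * 1024);
    for ($i = 1; $i <= 100; $i++) {
        if (fwrite($f, $block) === false) {
            die("fwrite() failed at " . ftell($f) . " bytes\n");
        }
        echo ftell($f) . " bytes written OK\n";
    }
    fclose($f);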

Some attempts that I'd make:

  • double-check $uploadDir, unless you assigned it personally.

  • try creating the file in a different path than $uploadDir, unless it is certainly on the same filesystem

  • try checking for errors:

    $bytes = fread($new_chunk, 1024);
    if (fwrite($server_chunk, $bytes) === false) {
        die("Error writing at offset " . ftell($server_chunk));
    }
    
  • paranoid check on phpinfo() to ensure there isn't some really weird weirdness such as function overriding. You can investigate by enumerating the defined functions and checking them out (spoofable, yes, but unlikely to have been); see the sketch after this list.
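
A minimal sketch of that enumeration, using only stock PHP functions:

    <?php
    // get_defined_functions() splits everything into 'internal' and 'user'.
    $funcs = get_defined_functions();

    // fwrite should be a built-in; if it is missing from 'internal',
    // something (e.g. a runkit-style extension) has been tampering.
    var_dump(in_array('fwrite', $funcs['internal'], true)); // expect bool(true)

    // list user-land functions that might shadow expected behavior
    print_r($funcs['user']);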

UPDATE

It REALLY looks like a file size limitation, unrelated to PHP. Earlier users of the same hosting reported 32 MB. See here and here. There are people unable to run mysqldump or tar backups. It has nothing to do with PHP directly.

Workaround... which seems to work!

You can perhaps work around this problem by storing the file in chunks and downloading it in several installments, or by passing Apache a pipe to the cat program, if available.

What would happen is that you would store file123.0001, file123.0002, ..., and then upon download check all the fragments, send the appropriate Content-Length, build a command line for /bin/cat (if accessible...), and connect the stream to the server. You may still run into time limits, but it's worth a shot.

Example:

<?php
    $pattern    = "test/zot.*"; # BEWARE OF SHELL METACHARACTERS
    $files      = glob($pattern);
    natsort($files); // zot.2 must sort before zot.10
    $size       = array_sum(array_map('filesize', $files));
    ob_end_clean();
    header("Content-Disposition: attachment;filename=\"test.bin\";");
    header("Content-Type: application/octet-stream");
    header("Content-Length: {$size}");
    // pass the naturally sorted list to cat, so the chunks come out in
    // the correct order (a bare shell glob would sort zot.10 before zot.2)
    passthru("/bin/cat " . implode(" ", array_map('escapeshellarg', $files)));

I have tested the above, and it downloads a single 120 MB file from a bunch of 10 MB chunks, ordered like zot.1, zot.2, ..., zot.10, zot.11, zot.12 (and yes, I did not use natsort at first). If I can find the time, I'll run it again in a VM with a throttled network, so that the script has a 10 s time limit and the download takes 20 s. It's possible that PHP won't terminate the script until passthru returns, since I noticed PHP's timekeeping is not very intuitive.

The following code runs with a time limit of three seconds. It runs a command that takes four seconds, then sends the output to the browser, and keeps running until its time is exhausted.

<pre>
<?php
    print "It is now " . date("H:i:s") . "\n";
    passthru("sleep 4; echo 'Qapla!'");
    print "It is now " . date("H:i:s") . "\n";
    $x = 0;
    for ($i = 0; $i < 5; $i++) {
        $t = microtime(true);
        while (microtime(true) < $t + 1.0) {
            $x++;
        }
        echo "OK so far. It is now " . date("H:i:s") . "\n";
    }

The result is:

It is now 20:52:56
Qapla!
It is now 20:53:00
OK so far. It is now 20:53:01
OK so far. It is now 20:53:02
OK so far. It is now 20:53:03


( ! ) Fatal error: Maximum execution time of 3 seconds exceeded in /srv/www/rumenta/htdocs/test.php on line 9

Call Stack
#   Time    Memory  Function    Location
1   0.0002  235384  {main}( )   ../test.php:0
2   7.0191  236152  microtime ( )   ../test.php:9

Of course, it is possible that Strato uses a stronger check on a script's running time. Also, I have PHP installed as a module; possibly different rules apply for CGIs, which run as independent processes.
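
If the time limit does bite, one more thing worth trying (untested here, and shared hosts frequently disable it) is lifting the limit before streaming; $files is the naturally sorted chunk list from the example above.

    <?php
    // Ask PHP to drop its execution time limit for this request.
    // Hosts running PHP in shared configurations may simply ignore this.
    set_time_limit(0);
    passthru("/bin/cat " . implode(" ", array_map('escapeshellarg', $files)));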



Source: https://stackoverflow.com/questions/26000277/php-fwrite-file-leads-to-internal-server-error
