How to reduce virtual memory by optimising my PHP code?

thirtydot

As per your last comment:

Download RollingCurl.php.

Hopefully this will sufficiently spam the living daylights out of your API.

<?php

// Target URL plus tuning knobs: how many requests to fire in total,
// and how many cURL handles RollingCurl keeps in flight at once.
$url = '________';
$fetch_count = 150;
$window_size = 5;


require("RollingCurl.php");

// Called by RollingCurl as each request completes.
function request_callback($response, $info, $request) {
    // Split the response on "<br>" and echo the first two pieces.
    list($result0, $result1) = explode("<br>", $response);
    echo "{$result0}<br>{$result1}<br>";
    //print_r($info);
    //print_r($request);
    echo "<hr>";
}


// Queue the same URL $fetch_count times.
$urls = array_fill(0, $fetch_count, $url);

$rc = new RollingCurl("request_callback");
$rc->window_size = $window_size;
foreach ($urls as $url) {
    $request = new RollingCurlRequest($url);
    $rc->add($request);
}
$rc->execute();

?>

Looking through your questions, I saw this comment:

If the intention is domain snatching, then using one of the established services is a better option. Your script implementation is hardly as important as the actual connection and latency.

I agree with that comment.

Also, you seem to have posted the "same question" approximately seven hundred times:

https://stackoverflow.com/users/558865/icer
https://stackoverflow.com/users/516277/icer

How can I adjust the server to run my PHP script quicker?
How can I re-code my php script to run as quickly as possible?
How to run cURL once, checking domain availability in a loop? Help fixing code please
Help fixing php/api/curl code please
How to reduce virtual memory by optimising my PHP code?
Overlapping HTTPS requests?
Multiple https requests.. how to?

Doesn't the fact that you have to keep asking the same question over and over tell you that you're doing it wrong?

This comment of yours:

@mario: Cheers. I'm competing against 2 other companies for specific ccTLD's. They are new to the game and they are snapping up those domains in slow time (up to 10 seconds after purge time). I'm just a little slower at the moment.

I'm fairly sure that PHP on a shared hosting account is the wrong tool to use if you are seriously trying to beat two companies at snapping up expired domain names.

The result of each of the 150 queries is being stored in PHP memory, and by your own account that is more than the script can afford. The only conclusion is that you cannot keep 150 responses in memory at once. Either stream the responses to files instead of memory buffers, or simply reduce the number of queries and process the list of URLs in batches (a rough sketch of the batching idea follows).
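As a minimal sketch of that batching approach (the batch size and output file name are my own illustrative choices, not from your script), array_chunk() keeps only one batch of responses in memory at a time:

<?php

$urls = array_fill(0, 150, 'http://__________/');
$batch_size = 10;

foreach (array_chunk($urls, $batch_size) as $batch) {
    $results = array();

    foreach ($batch as $url) {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        $results[] = curl_exec($ch);
        curl_close($ch);
    }

    // Flush this batch to disk and free it before fetching the next one,
    // so memory use stays bounded by the batch size.
    file_put_contents('results.txt', implode("\n", $results) . "\n", FILE_APPEND);
    unset($results);
}

?>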

To use streams you must set CURLOPT_RETURNTRANSFER to 0 and implement a callback for CURLOPT_WRITEFUNCTION. There is an example in the PHP manual:

http://www.php.net/manual/en/function.curl-setopt.php#98491

// Write callback: cURL hands over the response in chunks as it arrives,
// so each chunk can be appended to an already-open file instead of
// being buffered in memory.
function on_curl_write($ch, $data)
{
  global $fh;
  $bytes = fwrite ($fh, $data, strlen($data));
  return $bytes; // returning fewer bytes than received makes cURL abort the transfer
}

curl_setopt ($curl_arr[$i], CURLOPT_WRITEFUNCTION, 'on_curl_write');

Getting the correct file handle in the callback is left as a problem for the reader to solve; one possible way around it is sketched below.
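If juggling a global file handle gets awkward, one simple alternative (my own suggestion, not part of the linked manual example) is to skip the write callback entirely and point each cURL handle at its own file with CURLOPT_FILE, which streams the response body straight to disk:

<?php

// Sketch: give every handle its own output file so nothing is buffered
// in PHP memory. The file names are illustrative.
$urls = array_fill(0, 150, 'http://__________/');
$handles = array();
$files = array();

foreach ($urls as $i => $url) {
    $fh = fopen("result_{$i}.txt", 'w');
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_FILE, $fh); // write the response body to $fh
    $files[$i] = $fh;
    $handles[$i] = $ch;
}

// Run the handles (sequentially here; they could just as well be added
// to a curl_multi handle or RollingCurl).
foreach ($handles as $i => $ch) {
    curl_exec($ch);
    curl_close($ch);
    fclose($files[$i]);
}

?>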

For comparison, here is a simpler sequential version that hits the URL in a loop with a fixed delay between requests:

<?php

echo str_repeat(' ', 1024); //to make flush work

$url = 'http://__________/';
$fetch_count = 15;
$delay = 100000; //0.1 second
//$delay = 1000000; //1 second


$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
// Note: SSL certificate verification is disabled for these requests.
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, FALSE);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE);


for ($i=0; $i<$fetch_count; $i++) {

    $start = microtime(true);

    $result = curl_exec($ch);

    list($result0, $result1) = explode("<br>", $result);
    echo "{$result0}<br>{$result1}<br>";
    flush();

    $end = microtime(true);

    // microtime() returns seconds while $delay is in microseconds, so
    // convert the elapsed time before subtracting, and never pass a
    // negative value to usleep() if the request took longer than $delay.
    $elapsed = ($end - $start) * 1000000;
    $sleeping = max(0, $delay - $elapsed);
    echo 'sleeping: ' . ($sleeping / 1000000) . ' seconds<hr />';
    usleep($sleeping);

}

curl_close($ch);

?>