Laravel queues getting “killed”

Submitted by 烈酒焚心 on 2020-06-13 19:16:11

Question


Sometimes when I'm sending over a large dataset to a Job, my queue worker exits abruptly.

// $taskmetas is an array of arrays; each inner array has 90 properties.
$this->dispatch(new ProcessExcelData($excel_data, $taskmetas, $iteration, $storage_path));

The ProcessExcelData job class creates an Excel file using the box/spout package.
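
The job class itself is not shown in the question; purely for illustration, here is a minimal sketch of what such a job might look like with the Spout 2.x writer API that was current at the time (the property names and the write loop are assumptions, not the OP's code):

    <?php

    namespace App\Jobs;

    use Box\Spout\Common\Type;
    use Box\Spout\Writer\WriterFactory;
    use Illuminate\Bus\Queueable;
    use Illuminate\Contracts\Queue\ShouldQueue;
    use Illuminate\Queue\InteractsWithQueue;
    use Illuminate\Queue\SerializesModels;

    class ProcessExcelData implements ShouldQueue
    {
        use InteractsWithQueue, Queueable, SerializesModels;

        protected $excel_data;
        protected $taskmetas;
        protected $iteration;
        protected $storage_path;

        public function __construct($excel_data, array $taskmetas, $iteration, $storage_path)
        {
            $this->excel_data   = $excel_data;
            $this->taskmetas    = $taskmetas;
            $this->iteration    = $iteration;
            $this->storage_path = $storage_path;
        }

        public function handle()
        {
            // Spout streams rows to disk, so the writing itself keeps memory flat.
            $writer = WriterFactory::create(Type::XLSX);
            $writer->openToFile($this->storage_path);

            foreach ($this->taskmetas as $row) {
                $writer->addRow($row); // each $row is a plain array of ~90 cell values
            }

            $writer->close();
        }
    }

Note that everything handed to the constructor is serialized into the queue payload, so a 10,000-row $taskmetas array makes for a very large payload that the worker has to unserialize again.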

  • in the 1st example $taskmetas has 880 rows - works fine
  • in the 2nd example $taskmetas has 10,000 rows - exits abruptly

1st example - queue output with a small dataset:

forge@user:~/myapp.com$ php artisan queue:work --tries=1
[2017-08-07 02:44:48] Processing: App\Jobs\ProcessExcelData
[2017-08-07 02:44:48] Processed:  App\Jobs\ProcessExcelData

2nd example - queue output with a large dataset:

forge@user:~/myapp.com$ php artisan queue:work --tries=1
[2017-08-07 03:18:47] Processing: App\Jobs\ProcessExcelData
Killed

I don't get any error messages, the logs are empty, and the job doesn't appear in the failed_jobs table as it does with other errors. The time limit is set to 1 hour and the memory limit to 2 GB.

Why are my queues abruptly quitting?


Answer 1:


You can try giving the worker a timeout, e.g.:

php artisan queue:work --timeout=120

By default the timeout is 60 seconds, so we forcefully override it as shown above.
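
As an alternative to the command-line flag, Laravel also lets you set the timeout on the job class itself via a public $timeout property; a minimal sketch (the value 120 is only an example):

    <?php

    namespace App\Jobs;

    use Illuminate\Contracts\Queue\ShouldQueue;

    class ProcessExcelData implements ShouldQueue
    {
        // Per-job timeout in seconds; overrides the worker's 60-second default.
        public $timeout = 120;

        // ... constructor and handle() as before
    }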




Answer 2:


I know this is not what you are looking for, but I had the same problem and I think it happens because of the OS (I will update this if I find the exact reason). Try

queue:listen

instead of

queue:work

The main difference between the two is that queue:listen boots the framework and runs the job class code for every single job (so you don't need to restart your workers or Supervisor), whereas queue:work keeps the booted application cached and is therefore much faster; in my case the OS could not handle that speed while preparing the queue connection (Redis).

The queue:listen command runs queue:work internally (you can confirm this by looking at the running processes in htop or similar).

The reason I suggest trying queue:listen is exactly that lower speed: the OS can easily keep up with it and has no problem preparing your queue connection (in my case there are no more silent kills).

To find out whether you have the same problem, change your queue driver to "sync" in .env and see whether the job is still killed. If it is not, you know the problem lies in preparing the queue connection.

  • To find out whether you have a memory problem, run the queue with queue:listen or the sync driver; PHP will then report an error for it, and you can increase the memory limit and test again.

  • For testing, you can temporarily give your code more memory with the line below (see also the memory-logging sketch after this list):

    ini_set('memory_limit', '1G'); // 1 gigabyte
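
To see how close the job actually gets to the limit, a small hypothetical addition at the end of the job's handle() method can log peak memory usage (the log message is an assumption; memory_get_peak_usage() is standard PHP):

    // At the end of ProcessExcelData::handle() – log peak memory so it can be
    // compared against memory_limit and the worker's --memory option.
    \Log::info(sprintf(
        'ProcessExcelData peak memory: %.1f MB',
        memory_get_peak_usage(true) / 1024 / 1024
    ));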
    



Answer 3:


This worked for me:

I had a Supervisord job:

Job ID: Job_1
Queue: Default
Processes: 1
Timeout: 60
Sleep Time: 3
Tries: 3

https://laravel.com/docs/5.6/queues#retrying-failed-jobs says:

To delete all of your failed jobs, you may use the queue:flush command:

php artisan queue:flush

So I did that (after running php artisan queue:failed to see that there were failed jobs).

Then I deleted my Supervisord job and created a new one like it, but with a 360-second timeout.
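
Cloudways manages Supervisord through its control panel; if you run Supervisord yourself, the equivalent change is the --timeout value in the worker's program definition. A sketch along the lines of the Laravel docs (the program name, paths, and other option values are placeholders):

    [program:laravel-worker]
    process_name=%(program_name)s_%(process_num)02d
    command=php /home/forge/myapp.com/artisan queue:work --sleep=3 --tries=3 --timeout=360
    autostart=true
    autorestart=true
    numprocs=1
    redirect_stderr=true
    stdout_logfile=/home/forge/myapp.com/worker.log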

It was also important to restart the Supervisord job (within the control panel of my Cloudways app) and to restart the entire Supervisord process (within the control panel of my Cloudways server).

After trying to run my job again, I noticed it in the failed_jobs table and read that the exception was related to cache file permissions, so I clicked the Reset Permission button in my Cloudways dashboard for my app.




Answer 4:


There are two likely causes: the job is either running out of memory or exceeding the execution time.

Try running dmesg | grep php. This will show you more details (for example, whether the kernel's OOM killer terminated the process).

Increase max_execution_time and/or memory_limit in your php.ini file.
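
Keep in mind that the CLI, which runs the queue worker, usually loads a different php.ini than php-fpm (php --ini shows which files the CLI reads); the values below are examples only:

    ; php.ini used by the CLI worker – adjust the limits to your workload
    memory_limit = 2048M
    max_execution_time = 3600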




Answer 5:


Sometimes you work with resource-intensive processes such as image conversion or creating/parsing big Excel files, and the timeout option alone is not enough. You can set public $timeout = 0; in your job and it may still get killed, because of memory(!). By default the worker's memory limit is 128 MB, so add the --memory=256 option (or higher) to the queue:work command to avoid this problem.
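
For example, a worker invocation along these lines (the values are illustrative):

php artisan queue:work --tries=1 --memory=512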

BTW:

“The time limit is set to 1 hour, and the memory limit to 2 GB”

In your case that applies only to php-fpm, not to the queue worker process.



Source: https://stackoverflow.com/questions/45539032/laravel-queues-getting-killed
