How do I limit resources for ffmpeg, called from a python-script, running in a docker container?

Submitted 2021-01-28 11:44:48

Question


I deployed a service that periodically does video encoding on my server, and every time it runs, all other services slow down significantly. The encoding is hidden under multiple layers of abstraction, and limiting any of those layers would be fine. (e.g. limiting the Docker container would work just as well as limiting the ffmpeg subprocess.)

My Stack:

  1. VPS (ubuntu:zesty)
  2. docker-compose
  3. docker-container (ubuntu:zesty)
  4. python
  5. ffmpeg (via subprocess.check_call() in python)

What I want to limit:

  • CPU: single core
  • RAM: max 2 GB
  • HDD: max 4 GB

It would be possible to recompile ffmpeg if needed.

What would be the place to put limits in this stack?
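For reference, the call in question is roughly of this shape — a minimal sketch, where the paths and codec options are made up for illustration:

```python
import subprocess

def build_ffmpeg_cmd(src, dst):
    # Hypothetical command; the real options depend on codec/profile choices.
    return ["ffmpeg", "-i", src, "-c:v", "libx264", dst]

# Blocks until ffmpeg exits; raises CalledProcessError on a non-zero exit code.
# subprocess.check_call(build_ffmpeg_cmd("in.mp4", "out.mp4"))
```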


Answer 1:


In plain Docker you can achieve each of these limits with command-line options:

A container can be limited to a single CPU core (or a single hyperthread on current Intel hardware):

docker run \
  --cpus 1 \
  image

or limited by Docker's CPU shares, which default to 1024 per container. This only helps if most of the tasks being slowed down also run in Docker containers, so that they are competing for shares as well.

docker run \
  --cpu-shares 512 \
  image

Limiting memory is a bit finicky, as your process will simply be killed if it hits the limit.

docker run \
  --memory-reservation 2000m \
  --memory 2048m \
  --memory-swap 2048m \
  image

Block/device IO matters more for performance than total disk space. It can be limited per device, so if you keep the data for your conversion on a specific device:

docker run \
  --volume /something/on/sda:/conversion \
  --device-read-bps /dev/sda:2mb \
  --device-read-iops /dev/sda:1024 \
  --device-write-bps /dev/sda:2mb \
  --device-write-iops /dev/sda:1024 \
  image 

If you want to limit total disk usage as well, you will need to have the correct storage setup. Quotas are supported on the devicemapper, btrfs and zfs storage drivers, and also with the overlay2 driver when used on an xfs file system that is mounted with the pquota option.

docker run \
  --storage-opt size=120G \
  image

Compose/Service

Docker Compose v3 abstracts some of these concepts away into what can be applied to a service/swarm, so you don't get the same fine-grained control.

For a v3 file, use the resources object to configure limits and reservations for cpu and memory:

services:
  blah:
    image: blah
    deploy:
      resources:
        limits:
          cpus: '1'
          memory: 2048M
        reservations:
          memory: 2000M

Disk based limits might need a volume driver that supports setting limits.

If you can go back to a v2.2 Compose file, you can use the full range of per-container constraints at the base level of the service, analogous to the docker run options:

cpu_count, cpu_percent, cpu_shares, cpu_quota, cpus, cpuset, mem_limit, memswap_limit, mem_swappiness, mem_reservation, oom_score_adj, shm_size
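As a sketch, a v2.2 file matching the limits asked for above might look like this (the service name is a placeholder, and option spellings follow the v2.2 Compose file reference):

```yaml
version: '2.2'
services:
  encoder:
    image: blah
    cpus: 1.0            # equivalent of docker run --cpus 1
    mem_limit: 2048m     # hard RAM cap
    mem_reservation: 2000m
    memswap_limit: 2048m
```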




Answer 2:


You can do it easily in your docker-compose file :)

https://docs.docker.com/compose/compose-file/#resources

Just use the limits keyword and set your CPU usage!




Answer 3:


What I want to limit:

CPU: single core

RAM: max 2 GB

HDD: max 4 GB

Other answers have tackled this from the Docker perspective, which may actually be your best approach in this situation, but here is a little more insight on the ffmpeg side:

General

There is no ffmpeg option for limiting CPU, RAM, or HDD specifically; you have to know quite a lot about transcoding to hit metrics as specific as you're requesting, and without any information on the input and output file(s) it's impossible to give specific advice. Encoding and decoding take varying resources depending on what they are converting from and to.

CPU

The closest thing ffmpeg has is the -threads option, which limits the total number of threads (not CPU cores) used; supplying 0 allows the maximum. Again, different encoders/decoders/codecs have different limitations here.
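Since ffmpeg is launched from Python here, the flag would simply go into the argument list — a sketch with placeholder filenames:

```python
import subprocess

def threaded_cmd(src, dst, threads=1):
    # -threads limits ffmpeg's worker threads, not cores; which cores the
    # process actually runs on is still up to the OS scheduler.
    return ["ffmpeg", "-threads", str(threads), "-i", src, dst]

# subprocess.check_call(threaded_cmd("in.mp4", "out.mp4", threads=1))
```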

RAM

No luck here; again, it depends on your media and codec choices.
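One OS-level workaround (not an ffmpeg feature): since the process is spawned from Python on Linux, its address space can be capped with resource.setrlimit in a preexec_fn. A sketch, assuming Linux; note that ffmpeg will fail with allocation errors rather than degrade gracefully if it hits the cap:

```python
import resource
import subprocess

MEM_BYTES = 2 * 1024 ** 3  # 2 GB cap

def cap_memory():
    # Runs in the child between fork() and exec(); limits the virtual
    # address space of the ffmpeg process only, not the parent.
    resource.setrlimit(resource.RLIMIT_AS, (MEM_BYTES, MEM_BYTES))

# subprocess.check_call(["ffmpeg", "-i", "in.mp4", "out.mp4"],
#                       preexec_fn=cap_memory)
```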

HDD

I haven't done this before, but take a look at this article. If that doesn't work, you need to research your overall output bitrate and compare it to the input video's duration. The -t option can be used to limit an output based on time duration (or to limit reading from an input).
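As a sketch of both caps from the Python side (filenames are placeholders; -fs takes a limit in bytes and simply stops writing when it is reached, so check how your ffmpeg build handles the cut-off):

```python
import subprocess

def capped_cmd(src, dst, max_seconds=600, max_bytes=4 * 1024 ** 3):
    # -t caps the output duration; -fs stops writing once the output
    # file reaches max_bytes (an abrupt cut-off, not a re-encode to fit).
    return ["ffmpeg", "-i", src,
            "-t", str(max_seconds),
            "-fs", str(max_bytes),
            dst]

# subprocess.check_call(capped_cmd("in.mp4", "out.mp4"))
```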

Lastly

... all other services slow down significantly

This is expected; ffmpeg tries to take up as much of your machine's resources as the transcode will allow. The best bet is to move transcodes to a separate server, especially considering the job is already in a Docker container.




Answer 4:


Your best bet is to write a small set of scripts around cgroups, either on standalone Linux or alongside Docker containers.

For the former, it basically comes down to creating a new cgroup, specifying resources for it, and then moving your main process's PID into the created cgroup. Detailed instructions are at https://www.cloudsigma.com/manage-docker-resources-with-cgroups/.

For the latter, see https://www.cloudsigma.com/manage-docker-resources-with-cgroups/
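A rough sketch of the manual approach, assuming cgroup v2 mounted at /sys/fs/cgroup and root privileges (the group name and paths are illustrative, not a tested setup):

```python
import os

CGROUP = "/sys/fs/cgroup/ffmpeg_limits"  # hypothetical group name

def cpu_max_line(cores=1.0, period_us=100_000):
    # cgroup v2 "cpu.max" format: "<quota> <period>" in microseconds;
    # quota equal to the period means at most one full core.
    return f"{int(cores * period_us)} {period_us}"

def apply_limits(pid):
    os.makedirs(CGROUP, exist_ok=True)
    with open(os.path.join(CGROUP, "cpu.max"), "w") as f:
        f.write(cpu_max_line(1.0))
    with open(os.path.join(CGROUP, "memory.max"), "w") as f:
        f.write(str(2 * 1024 ** 3))  # 2 GB
    with open(os.path.join(CGROUP, "cgroup.procs"), "w") as f:
        f.write(str(pid))  # move the ffmpeg process into the group
```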



Source: https://stackoverflow.com/questions/45274316/how-do-i-limit-resources-for-ffmpeg-called-from-a-python-script-running-in-a-d
