Question
My MongoDB gets stuck and returns the following error:
2019-01-28T18:28:53.419+0000 E STORAGE [WTCheckpointThread] WiredTiger error (28) [1548700133:419188][1:0x7feecb0ae700], file:WiredTiger.wt, WT_SESSION.checkpoint: /data/db/WiredTiger.turtle.set: handle-open: open: No space left on device
2019-01-28T18:28:53.419+0000 E STORAGE [WTCheckpointThread] WiredTiger error (22) [1548700133:419251][1:0x7feecb0ae700], file:WiredTiger.wt, WT_SESSION.checkpoint: WiredTiger.wt: the checkpoint failed, the system must restart: Invalid argument
2019-01-28T18:28:53.419+0000 E STORAGE [WTCheckpointThread] WiredTiger error (-31804) [1548700133:419260][1:0x7feecb0ae700], file:WiredTiger.wt, WT_SESSION.checkpoint: the process must exit and restart: WT_PANIC: WiredTiger library panic
2019-01-28T18:28:53.419+0000 F - [WTCheckpointThread] Fatal Assertion 28558 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 361
2019-01-28T18:28:53.419+0000 F - [WTCheckpointThread]
***aborting after fassert() failure
2019-01-28T18:28:53.444+0000 F - [WTCheckpointThread] Got signal: 6 (Aborted).
However, my disk has space:
df -h
Filesystem Size Used Avail Use% Mounted on
udev 992M 0 992M 0% /dev
tmpfs 200M 5.7M 195M 3% /run
/dev/xvda1 39G 26G 14G 66% /
tmpfs 1000M 1.1M 999M 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 1000M 0 1000M 0% /sys/fs/cgroup
tmpfs 200M 0 200M 0% /run/user/1000
df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
udev 253844 322 253522 1% /dev
tmpfs 255835 485 255350 1% /run
/dev/xvda1 5120000 5090759 29241 100% /
tmpfs 255835 10 255825 1% /dev/shm
tmpfs 255835 3 255832 1% /run/lock
tmpfs 255835 16 255819 1% /sys/fs/cgroup
tmpfs 255835 4 255831 1% /run/user/1000
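The `df -i` output above shows `/dev/xvda1` at 100% inode usage, which is exactly what makes the kernel return "No space left on device" even though `df -h` shows free blocks. To confirm where the inodes are going, a small helper like the following (our own sketch, not a standard tool) counts filesystem entries per directory; `/var/lib/docker` is a common offender:

```shell
# count_inodes DIR... : print "<entry-count> <dir>" for each directory,
# highest count first. Every file, directory, or symlink consumes one
# inode, so the tree with the most entries is where the inodes went.
# Note the count includes the directory itself.
count_inodes() {
  for dir in "$@"; do
    printf '%s %s\n' "$(find "$dir" -xdev 2>/dev/null | wc -l | tr -d ' ')" "$dir"
  done | sort -rn
}

# Example: count_inodes /var/lib/docker /var/log /tmp
```

`-xdev` keeps `find` on one filesystem so bind mounts and tmpfs don't skew the numbers.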
And this is my docker-compose.yml:
version: "3"
services:
  # MariaDB
  mariadb:
    container_name: mariadb
    image: mariadb
    ports: ['3306:3306']
    restart: always
    volumes:
      - /home/ubuntu/mysql:/var/lib/mysql
    environment:
      - "MYSQL_ROOT_PASSWORD=PasswordGoesHere"
    command:
      # - --memory=1536M
      - --wait_timeout=28800
      - --innodb_buffer_pool_size=1g
      - --innodb_buffer_pool_instances=4
      # - --innodb_buffer_pool_chunk_size=1073741824
  # APACHE
  apache:
    container_name: apache
    image: apache-php7.1
    ports: ['80:80', '443:443']
    restart: always
    entrypoint: tail -f /dev/null
    volumes:
      - /home/ubuntu/apache2/apache-config:/etc/apache2/sites-available/
      - /home/ubuntu/apache2/www:/var/www/html/
  # MONGODB
  mongodb:
    container_name: mongodb
    image: mongo
    ports: ['27017:27017']
    restart: always
    command:
      - --auth
    volumes:
      - /home/ubuntu/moongodb:/data/db
Could the problem be in my docker-compose.yml? I'm using a physical disk, not a virtual one. I can bring the applications back up, but after 1-2 hours Mongo fails again.
Answer 1:
If you are running this on CentOS/RHEL/Amazon Linux, you should know that the devicemapper storage driver has major issues releasing inodes in Docker.
Even if you prune the entire Docker system, it will still hang on to a lot of inodes; the only way to really solve this is to basically implode Docker:
service docker stop
rm -rf /var/lib/docker
service docker start
This should release all your inodes.
I've spent a lot of time on this. Docker really only fully supports overlay2 on Ubuntu; devicemapper, although it works, is technically not supported.
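Before imploding `/var/lib/docker`, it's worth verifying which storage driver the daemon is actually running. A small sketch (the function name is ours, not a Docker command; it assumes the `docker` CLI is installed):

```shell
# check_storage_driver: print the storage driver reported by the Docker
# daemon and warn when it is devicemapper, the driver blamed above for
# leaking inodes on CentOS/RHEL hosts.
check_storage_driver() {
  driver=$(docker info --format '{{.Driver}}')
  if [ "$driver" = "devicemapper" ]; then
    echo "WARNING: devicemapper in use; consider migrating to overlay2"
  else
    echo "storage driver: $driver"
  fi
}
```

Switching drivers (e.g. setting `"storage-driver": "overlay2"` in `/etc/docker/daemon.json`) does not migrate existing images or containers, which is consistent with the "implode Docker" advice above.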
Answer 2:
It looks like 100% of your inodes are in use (from the df -i output). Try looking for dangling volumes and cleaning them up. It would also be a good idea to make sure the Docker daemon is using a production-grade storage driver (see the Docker documentation: "About storage drivers" and "Select a storage driver").
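The dangling-volume cleanup described above can be sketched as follows (assumes the `docker` CLI; the function name is ours, and a loop is used instead of `xargs` so each removal is reported individually):

```shell
# remove_dangling_volumes: delete volumes that no container references.
# Dangling volumes under /var/lib/docker can pin large numbers of inodes.
remove_dangling_volumes() {
  for vol in $(docker volume ls -qf dangling=true); do
    docker volume rm "$vol"
  done
}

# On Docker 1.13+ the built-in equivalent is: docker volume prune -f
```

Note that for this question the MongoDB data lives in a bind mount (`/home/ubuntu/moongodb`), so it is not a dangling volume and survives this cleanup.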
Source: https://stackoverflow.com/questions/54413903/mongodb-no-space-left-on-device-with-docker