Edit: There is a related issue being discussed on GitHub, but in another mode of deployment (Typesafe Activator UI rather than Docker).
I was trying to simulate a system reboot in order to verify the Docker restart policy, which claims to be able to re-run containers in the correct order.
I have a Play framework application written in Java.
The Dockerfile looks like this:
FROM ubuntu:14.04
#
# [Java8, ...]
#
RUN chmod +x /opt/bin/playapp
CMD ["/bin/bash"]
I start it using $ docker run --restart=always -d --name playappcontainer "./opt/bin/playapp".
When I run $ service docker stop && service docker restart and then $ docker attach playappcontainer, the console tells me:
Play server process ID is 7
This application is already running (Or delete /opt/RUNNING_PID file)
Edit: Same result when I follow the recommendation of the Play documentation to change the location of the file to /var/run/play.pid with -Dpidfile.path=/var/run/play.pid.
Play server process ID is 7
This application is already running (Or delete /var/run/play.pid file).
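For illustration, a flag like this is typically appended to the start command of the packaged application; a minimal sketch (the image name <image> is a placeholder and the exact invocation may differ):
$ docker run --restart=always -d --name playappcontainer <image> \
    ./opt/bin/playapp -Dpidfile.path=/var/run/play.pid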
So: why is the RUNNING_PID file not deleted when the Docker daemon stops, gets restarted, and restarts previously running containers?
When I run $ docker inspect playappcontainer, it tells me:
"State": {
"ExitCode": 255,
"FinishedAt": "2015-02-05T17:52:39.150013995Z",
"Paused": false,
"Pid": 0,
"Restarting": true,
"Running": true,
"StartedAt": "2015-02-05T17:52:38.479446993Z"
},
Although:
The main process inside the container will receive SIGTERM, and after a grace period, SIGKILL.
from the Docker reference on $ docker stop
To kill a running Play server, it is enough to send a SIGTERM to the process to properly shutdown the application.
from the Play Framework documentation on stopping a Play application
I've just dockerized a Play! application and also ran into this issue: restarting the host caused the Play! application to fail to start in its container because RUNNING_PID had not been deleted.
It occurred to me that, as the Play! application is the only process within its container, always has the same PID, and is taken care of by Docker, the RUNNING_PID file is (to the best of my knowledge) not actually needed.
As such I overrode pidfile.path to /dev/null by placing
javaOptions in Universal ++= Seq(
"-Dpidfile.path=/dev/null"
)
in my project's build.sbt. And it works - I can reboot the host (and container) and my Play! application starts up fine.
The appeal of this approach for me is that it does not require changing the way the image itself is produced by sbt-native-packager, just the way the application runs within it.
This works with sbt-native-packager 1.0.0-RC2 and higher (because that release includes https://github.com/sbt/sbt-native-packager/pull/510).
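For completeness, the same property can also be supplied when the container starts instead of being baked in at build time; a rough sketch reusing the container name and binary path from the question (the image name <image> is a placeholder):
$ docker run --restart=always -d --name playappcontainer <image> \
    ./opt/bin/playapp -Dpidfile.path=/dev/null
As far as I know, the sbt-native-packager start script forwards -D arguments to the JVM as system properties, so this should have the same effect as the build.sbt setting above.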
I worked out a workaround based on the answers and my further work on this question. If I start the containers as follows, they'll come back up after an (un)expected stop or restart; the conflicting RUNNING_PID file won't prevent the container from restarting.
$ sudo docker run --restart=on-failure:5 -d \
--name container my_/container:latest \
sh -c "rm -f /var/run/play.pid && ./opt/bin/start \
-Dpidfile.path=/var/run/play.pid"
What it does is delete the file containing the process ID (placed at a known location via the -Dpidfile.path option) every time before running the binary.
I had the exact same problem and worked around it by deleting the file every time the container runs. To do that, I added the following line to the companion "start.bash" file I use to start the Play process from the output of the SBT dist task:
find . -type f -name RUNNING_PID -exec rm -f {} \;
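For context, a minimal sketch of what such a start.bash wrapper might look like (the binary name <project-name> and the layout are assumptions, not the exact file):
#!/usr/bin/env bash
# Remove any stale PID file left over from a previous run of the container.
find . -type f -name RUNNING_PID -exec rm -f {} \;
# Start the Play binary produced by the SBT dist task, passing through any extra arguments.
exec ./bin/<project-name> "$@"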
Hope it helps.
I don't know much about Docker, but as far as I have tested, Play does not remove RUNNING_PID when the server stops. When I deployed my app in prod mode and tried to stop it with Ctrl+D and Ctrl+C, it didn't remove the RUNNING_PID file from the project directory, so I had to delete it manually. From the Play docs:
Normally, this (RUNNING_PID) file is placed in the root directory of your play project, however it is advised that you put it somewhere where it will be automatically cleared on restart, such as /var/run:
So, apart from manual deletion, the workaround is to change the path of RUNNING_PID and delete it through some script every time the server starts.
$ /path/to/bin/<project-name> -Dpidfile.path=/var/run/play.pid
Make sure that the directory exists and that the user that runs the Play application has write permission for it.
Using this file, you can stop your application using the kill command, for example:
$ kill $(cat /var/run/play.pid)
You can also try the Docker command $ sudo docker rm --force redis.
Maybe that could help.
I ran into the same problem after a Ctrl+C failed. I resolved the issue by running docker-compose down -v and then, of course, running docker-compose up. The -v option indicates that you want to remove the volumes associated with your container. Maybe docker-compose down would have sufficed.
Here's a rundown of some down options:
Stop services only:
docker-compose stop
Stop and remove containers, networks, etc.:
docker-compose down
Down and remove volumes:
docker-compose down --volumes
Down and remove images:
docker-compose down --rmi all
Source: https://stackoverflow.com/questions/28351405/restarting-play-application-docker-container-results-in-this-application-is-alr