We have a script that does some processing and then triggers a job in the background using nohup. When we schedule this script from Oracle OEM (or any other job scheduler), I see the following error and the job status shows as failed, even though the script actually finished without issue. How do we exit the script correctly when a background job is started with nohup?
Remote operation finished but process did not close its stdout/stderr
file: test.sh
#!/bin/bash
# do some processing
...
nohup ./start.sh 2000 &
# end of the script
By executing start.sh in this manner you are allowing it to claim partial ownership of test.sh's output file descriptors (stdout/stderr). So whereas when most bash scripts exit, their file descriptors are closed for them (by the operating system), test.sh's file descriptors cannot be closed because start.sh still has a claim to them.
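You can observe this inheritance directly. Below is a minimal sketch (Linux-specific, since it reads /proc; fd-demo.sh is a hypothetical name and sleep stands in for start.sh). Note that nohup only redirects stdout to nohup.out when stdout is a terminal, so when run under a scheduler the child inherits the scheduler's pipe unchanged:

#!/bin/bash
# fd-demo.sh - hypothetical demo, not part of the question.
# Start a long-running child the same way test.sh starts start.sh;
# 'sleep 60' stands in for ./start.sh 2000.
nohup sleep 60 &

# List the child's open file descriptors. When this script's stdout
# is not a terminal (as under a scheduler), the child's fd 1 and fd 2
# point at the same pipe/file this script was given, which is what
# keeps them open after this script exits. (From an interactive
# terminal, nohup would instead redirect them to nohup.out.)
ls -l /proc/$!/fd

Running it as ./fd-demo.sh | cat mimics the scheduler: cat keeps waiting the full 60 seconds for EOF, because the backgrounded child still holds the write end of the pipe. That is exactly the hang the scheduler is reporting.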
The solution is to not let start.sh claim the same output file descriptors that test.sh is using. If you don't care about its output, you can launch it like this:
nohup ./start.sh 2000 1>/dev/null 2>/dev/null &
which tells the new process to send both its stdout and stderr to /dev/null. If you do care about its output, then just capture it somewhere more meaningful:
nohup ./start.sh 2000 1>/path/to/stdout.txt 2>/path/to/stderr.txt &
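Putting it together, the fixed test.sh would look something like this (a sketch: /var/tmp/start.log is an assumed path, and 2>&1 is just an equivalent way to merge stderr into the same file):

#!/bin/bash
# do some processing
...
# Redirect the background job's stdout/stderr away from this script's
# descriptors so the scheduler sees them closed when test.sh exits.
# (Log path is an assumption; 2>&1 sends stderr to the same file.)
nohup ./start.sh 2000 >/var/tmp/start.log 2>&1 &
# end of the script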
Source: https://stackoverflow.com/questions/29709790/scripts-with-nohup-inside-dont-exit-correctly