Question
I am writing automation code in Python to test the behavior of a network application. As such, my code needs to be able to start a process/script (say, tcpdump or a Python script) on a server in the network, disconnect, run other processes, and then later return and shut down/evaluate the process started earlier. My network is a mix of Windows and Linux machines, and all of the machines have sshd and Python running (via Cygwin on the Windows machines).
I've considered a couple of ideas, namely:
- Starting a process and moving it to the background via a trailing ampersand (&)
- Using screen in some fashion
- Using Python threads
What else should I be considering? In your experience what have you found to be the best way to accomplish a task like this?
Answer 1:
nohup for starters (at least on *nix boxes), and redirect the output to a log file where you can come back and monitor it, of course.
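A minimal sketch of the idea in Python: wrap the remote command in nohup so it survives the ssh session ending, and redirect its output to a log file to inspect later. The command and log path here are placeholders, not from the answer.

```python
# Sketch: build a shell line that detaches a command with nohup and
# captures its stdout/stderr in a log file for later inspection.
# The tcpdump command and log path are illustrative placeholders.

def wrap_with_nohup(command, logfile):
    """Return a shell line that runs `command` detached, logging to `logfile`."""
    return "nohup {} > {} 2>&1 &".format(command, logfile)

remote_line = wrap_with_nohup("tcpdump -i eth0 -w /tmp/capture.pcap",
                              "/tmp/tcpdump.log")
# This line would then be handed to ssh, e.g.:
#   ssh user@host 'nohup tcpdump ... > /tmp/tcpdump.log 2>&1 &'
print(remote_line)
```

The `2>&1` sends stderr to the same log, and the trailing `&` backgrounds the process so the ssh command returns immediately.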
Answer 2:
So now that I understand what you really want, I think the best solution would be to create a lightweight daemon to run whatever process you want. The daemon could then dump the output to a log file, so if the process finishes sooner than you expect, you can still get the output. This would be more reliable than fiddling with screen. It should be pointed out, however, that this could potentially be a security risk. If this is a task you run every N time periods, cron would work much better.
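The "dump the output to a log file" part can be sketched with the standard library alone (the command here is a stand-in for whatever the daemon would actually run, and the log path is made up):

```python
import os
import subprocess
import sys
import tempfile

# Sketch: launch a child process with its output captured in a log file,
# so the result survives even after the launching session is gone.
# The child command is a placeholder for the real workload.

log_path = os.path.join(tempfile.mkdtemp(), "task.log")
with open(log_path, "wb") as log:
    proc = subprocess.Popen(
        [sys.executable, "-c", "print('task finished')"],
        stdout=log,
        stderr=subprocess.STDOUT,  # merge stderr into the same log
    )
proc.wait()  # a real daemon would not wait; shown here only to read the log

with open(log_path) as log:
    print(log.read().strip())  # -> task finished
```

A real daemon would also detach from the terminal (see the fork recipe in Answer 4); this sketch only shows the log-capture half.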
Original Answer
GNU Screen is a good way to handle this.
Edit: After re-reading your question, I think I misunderstood you. Are you looking to run a python script on the local machine that will connect to a remote one and run some arbitrary process and then be able to connect to the remote machine again and examine a log or something?
Answer 3:
For a quick, lightweight solution you can't beat screen. If you want to automate things completely it makes sense to set up the process to run as a daemon and write to the system log using syslog. Your system may have a start-stop-daemon script, and for C code it may also have a daemon(3) call in the C library.
Answer 4:
If you want a script to continue after you leave, you could call it through "nohup" (man nohup). You could also put code in the script you call to have it daemonize itself... it forks, makes the child into a daemon, and then exits. A quick search turned up: http://code.activestate.com/recipes/278731/
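The fork-and-exit pattern the answer describes looks roughly like this: a condensed sketch of the classic Unix double-fork recipe, not the linked ActiveState code verbatim. Note that calling it detaches the process from the terminal, so it is shown here but not invoked.

```python
import os
import sys

def daemonize(stdout_path="/dev/null"):
    """Detach the current process from the controlling terminal (Unix only).

    Classic double-fork: the first fork lets the original parent exit,
    setsid() makes us a session leader with no controlling terminal, and
    the second fork ensures we can never reacquire one.
    """
    if os.fork() > 0:
        sys.exit(0)      # first parent exits
    os.setsid()          # become session leader, detach from the tty
    if os.fork() > 0:
        sys.exit(0)      # second parent exits; grandchild is the daemon
    os.chdir("/")        # don't hold a working directory open
    os.umask(0)
    # Redirect output so anything the daemon prints lands in a log file.
    out = open(stdout_path, "ab", 0)
    os.dup2(out.fileno(), sys.stdout.fileno())
    os.dup2(out.fileno(), sys.stderr.fileno())
```

The linked recipe adds pidfile handling and stdin redirection on top of this skeleton.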
Answer 5:
As @Gandalf mentions, you'll need nohup in addition to the backgrounding &, or the process will be sent SIGHUP when the login session terminates. If you redirect your output to a log file, you'll easily be able to look at it later (and won't have to install screen on all your machines).
Answer 6:
Most commercial products install an "Agent" on the remote machines.
In the Linux world, you have numerous such agents: rexec, rlogin, and rsh all jump to mind.
These are all clients that communicate with daemons running on the remote hosts.
If you don't want to use these agents, you can read about them and reinvent these wheels in pure Python. Essentially, the client (rexec for example) communicates with the server (rexecd) to send work requests.
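The client/daemon split could be sketched in pure Python like this. This is a toy agent, nothing like the real rexec protocol: the one-line framing, lack of authentication, and port choice are all made up for illustration.

```python
import socket
import socketserver
import subprocess
import threading

# Toy agent: the "daemon" accepts one line containing a shell command,
# runs it, and streams the output back. No authentication whatsoever --
# a sketch of the idea only, never something to deploy as-is.

class AgentHandler(socketserver.StreamRequestHandler):
    def handle(self):
        command = self.rfile.readline().decode().strip()
        result = subprocess.run(command, shell=True, capture_output=True)
        self.wfile.write(result.stdout)

def run_work_request(host, port, command):
    """Client side: send a work request to the agent, return its output."""
    with socket.create_connection((host, port)) as conn:
        conn.sendall(command.encode() + b"\n")
        conn.shutdown(socket.SHUT_WR)
        chunks = []
        while True:
            data = conn.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)

# Demo on localhost: port 0 lets the OS pick a free port.
server = socketserver.TCPServer(("127.0.0.1", 0), AgentHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()
reply = run_work_request("127.0.0.1", port, "echo hello")
print(reply.decode().strip())  # prints: hello
server.shutdown()
```

A real agent would add authentication, request framing, and process tracking; that is exactly the wheel rexecd already implements.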
Answer 7:
Instead of screen, since it's for a single process, I would suggest its lightweight sibling, dtach.
For a fairly complete recipe for daemonizing a process in Python, see "Creating a daemon the Python way" from the ActiveState cookbook.
Answer 8:
If you are using Python to run the automation, I would attempt to automate everything using paramiko. It's a versatile ssh library for Python. Instead of coming back for the output later, you could collect lines of output live, then disconnect when you no longer need the process and let ssh do the killing for you.
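A sketch of the start-then-collect-later workflow with paramiko, assuming it is installed. The host, username, process name, and log path are placeholders, and killing by name with pkill is one choice among several; paramiko itself only automates the ssh sessions.

```python
def start_remote(host, user, command, logfile):
    """Start `command` on `host`, detached with nohup, logging to `logfile`."""
    import paramiko  # deferred so the sketch parses without paramiko installed
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user)
    # nohup + & detaches the process so it outlives this ssh session
    client.exec_command("nohup {} > {} 2>&1 &".format(command, logfile))
    client.close()

def collect_remote(host, user, name, logfile):
    """Reconnect later: stop the process by name and fetch its log."""
    import paramiko
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user)
    client.exec_command("pkill -f {}".format(name))   # shut the process down
    _, stdout, _ = client.exec_command("cat {}".format(logfile))
    output = stdout.read()
    client.close()
    return output
```

Since a remote process's output streams only stay open while a session is connected, the nohup-plus-log-file trick from the earlier answers is still doing the real work here.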
Source: https://stackoverflow.com/questions/923691/how-to-start-a-process-on-a-remote-server-disconnect-then-later-collect-output