I'm working on a grid system which has a number of very powerful computers. These can be used to execute Python functions very quickly. My users have a number of Python functions they would like to run on these machines.
It sounds like you want to do the following:

1. Define a shared filesystem space.
2. Put ALL your Python source in this shared filesystem space.
3. Define simple agents or servers that will "execfile" a block of code.
4. Have your client contact an agent (a REST protocol with POST methods works well for this) with the block of code.
5. The agent saves the block of code and runs execfile on it.
Since all agents share a common filesystem, they all have the same Python library structure.
We do this with a simple WSGI application we call a "batch server". We have a RESTful protocol for creating and checking on remote requests.
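Here is a minimal sketch of such an agent using the standard library's wsgiref; it illustrates the idea, not our actual batch server. The shared directory path and port are placeholders, and exec of the file contents stands in for Python 2's execfile:

    import os
    import tempfile
    from wsgiref.simple_server import make_server

    SHARED_DIR = "/mnt/shared/jobs"  # placeholder shared-filesystem path

    def batch_app(environ, start_response):
        if environ["REQUEST_METHOD"] != "POST":
            start_response("405 Method Not Allowed", [("Content-Type", "text/plain")])
            return [b"POST a block of Python code\n"]
        size = int(environ.get("CONTENT_LENGTH") or 0)
        code = environ["wsgi.input"].read(size)
        # Save the block of code on the shared filesystem, then execute it
        # (exec of the file's source is the Python 3 spelling of execfile).
        fd, path = tempfile.mkstemp(suffix=".py", dir=SHARED_DIR)
        with os.fdopen(fd, "wb") as f:
            f.write(code)
        try:
            exec(compile(code, path, "exec"), {"__name__": "__main__"})
            start_response("200 OK", [("Content-Type", "text/plain")])
            return [("ran %s\n" % path).encode()]
        except Exception as exc:
            start_response("500 Internal Server Error", [("Content-Type", "text/plain")])
            return [("error: %s\n" % exc).encode()]

    if __name__ == "__main__":
        make_server("", 8000, batch_app).serve_forever()

A client can then ship a job with something like curl --data-binary @job.py http://agent:8000/.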
You could use a ready-made clustering solution like Parallel Python. You can relatively easily set up multiple remote slaves and run arbitrary code on them.
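A rough sketch of what that looks like with the pp module (the host names and port are placeholders; each slave runs the ppserver.py script that ships with Parallel Python):

    import pp

    def busy_work(n):
        # Stand-in for one of your users' functions.
        return sum(i * i for i in range(n))

    # Each remote slave was started with: ppserver.py -p 60000
    job_server = pp.Server(ppservers=("node1:60000", "node2:60000"))

    # submit() ships the function to a free node; calling the job object
    # waits for and returns its result.
    job = job_server.submit(busy_work, (1000000,))
    print(job())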
    cat ./test.py | sshpass -p 'password' ssh user@remote-ip "python - <script-arguments-if-any>"

Here test.py is the local Python script, and sshpass supplies the SSH password to the ssh connection; "python -" makes the remote interpreter read the script from stdin.
You could open an SSH connection to the remote PC and run the commands on the other machine directly. You could even copy the Python code to the machine and execute it there.
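For example, a sketch with the third-party paramiko library (host, credentials, and paths are placeholders) that copies a script over and runs it:

    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("remote-ip", username="user", password="password")

    # Copy the script to the remote machine, then execute it there.
    sftp = client.open_sftp()
    sftp.put("test.py", "/tmp/test.py")
    sftp.close()

    stdin, stdout, stderr = client.exec_command("python /tmp/test.py")
    print(stdout.read().decode())
    client.close()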
Take a look at Pyro (Python Remote Objects). It has the ability to set up services on all the computers in your cluster and invoke them directly, or indirectly through a name server and a publish-subscribe mechanism.
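A minimal sketch with Pyro4, assuming an invented Worker service rather than anything from Pyro's own examples:

    import Pyro4

    @Pyro4.expose
    class Worker(object):
        def run_code(self, source):
            # Execute a block of code shipped from the client and hand
            # back whatever it bound to the name "result".
            scope = {}
            exec(source, scope)
            return scope.get("result")

    if __name__ == "__main__":
        daemon = Pyro4.Daemon(host="node1")  # placeholder: this machine's hostname
        uri = daemon.register(Worker())      # yields a PYRO:... uri
        print("worker uri:", uri)
        daemon.requestLoop()

A client then connects with worker = Pyro4.Proxy(uri) and calls worker.run_code("result = 6 * 7"), or looks the service up by name if you run Pyro's name server.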
Stackless had the ability to pickle and unpickle running code; unfortunately, the current implementation doesn't support this feature.