On Machine1, I have a Python 2.7 script that computes a big (up to 10 MB) binary string in RAM that I'd like to write to a disk file on Machine2, which is a remote machine.
Paramiko supports opening files on remote machines:
import paramiko

def put_file(machinename, username, dirname, filename, data):
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(machinename, username=username)
    sftp = ssh.open_sftp()
    try:
        sftp.mkdir(dirname)
    except IOError:
        pass
    f = sftp.open(dirname + '/' + filename, 'w')
    f.write(data)
    f.close()
    ssh.close()

data = 'This is arbitrary data\n'.encode('ascii')
put_file('v13', 'rob', '/tmp/dir', 'file.bin', data)
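If you would rather not manage the remote file object yourself, Paramiko's SFTPClient also offers putfo, which copies from any file-like object. A minimal sketch along the same lines, reusing the connection details from the example above (host, user, and path are still placeholders):

import io
import paramiko

def put_data(machinename, username, remote_path, data):
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(machinename, username=username)
    sftp = ssh.open_sftp()
    # putfo copies everything from the file-like object to the remote path;
    # it assumes the remote directory already exists (see mkdir above)
    sftp.putfo(io.BytesIO(data), remote_path)
    sftp.close()
    ssh.close()

put_data('v13', 'rob', '/tmp/dir/file.bin', 'This is arbitrary data\n'.encode('ascii'))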
If just calling a subprocess is all you want, sh.py might be the right tool.
from sh import ssh
remote_host = ssh.bake(<remote host>)
remote_host.dd(_in = <your binary string>, of=<output filename on remote host>)
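For a more concrete (untested) variant of that sketch, assuming key-based SSH authentication so no password prompt gets in the way, and with a made-up host and path:

from sh import ssh

data = b'\x00\x01\x02' * 1024            # stand-in for the in-memory binary string
remote_host = ssh.bake('rob@machine2')   # hypothetical user@host
# Pass dd's of= argument positionally so it reaches dd verbatim;
# _in feeds the bytes to the remote command's stdin.
remote_host.dd('of=/tmp/dir/file.bin', _in=data)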
We can write a string to a remote file in three simple steps:

1. Write the string to a temp file
2. Copy the temp file to the remote host
3. Remove the temp file
Here is my code (without any third-party libraries):
import os
content = 'sample text'
remote_host = 'your-remote-host'
remote_file = 'remote_file.txt'
# step 1
tmp_file = 'tmp_file.txt'
open(tmp_file, 'w').write(content)
# step 2
command = 'scp %s %s:%s' % (tmp_file, remote_host, remote_file)
os.system(command)
# step 3
os.remove(tmp_file)
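A roughly equivalent sketch that avoids the hard-coded temp file name and the shell string, using only the standard tempfile and subprocess modules (host and remote path are placeholders):

import os
import subprocess
import tempfile

content = b'sample text'
remote_host = 'your-remote-host'
remote_file = 'remote_file.txt'

# step 1: write the data to a named temp file
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(content)
    tmp_path = tmp.name

# step 2: copy it with scp; the argument list avoids shell quoting issues
subprocess.check_call(['scp', tmp_path, '%s:%s' % (remote_host, remote_file)])

# step 3: remove the local temp file
os.remove(tmp_path)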
A solution in which you don't explicitly send your data over a connection would be to use sshfs. You can use it to mount a directory from Machine2 somewhere on Machine1; writing to a file in that directory then automatically results in the data being written to Machine2.
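As a rough sketch of that approach (the mount point and host are made up, and sshfs has to be installed and the mount created beforehand):

# One-time setup on Machine1, outside Python (hypothetical mount point):
#   sshfs rob@machine2:/tmp/dir /mnt/machine2_dir
# After that, an ordinary local write ends up on Machine2:
data = b'This is arbitrary data\n'   # stand-in for the real in-memory string
with open('/mnt/machine2_dir/file.bin', 'wb') as f:
    f.write(data)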
You open a new SSH process to Machine2 using subprocess.Popen and then write your data to its stdin.
import subprocess

cmd = ['ssh', 'user@machine2',
       'mkdir -p output/dir; cat - > output/dir/file.dat']
p = subprocess.Popen(cmd, stdin=subprocess.PIPE)

your_inmem_data = 'foobarbaz\0' * 1024 * 1024
for chunk_ix in range(0, len(your_inmem_data), 1024):
    chunk = your_inmem_data[chunk_ix:chunk_ix + 1024]
    p.stdin.write(chunk)
p.stdin.close()  # send EOF so the remote cat finishes
p.wait()
I've just verified that it works as advertised and copies all of the 10485760 dummy bytes.
P.S. A potentially cleaner/more elegant solution would be to have the Python program write its output to sys.stdout instead and do the piping to ssh externally:
$ python process.py | ssh <the same ssh command>
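For completeness, process.py could be as small as the following sketch (Python 2, where sys.stdout accepts the byte string directly; in Python 3 you would write to sys.stdout.buffer instead):

# process.py -- emit the in-memory data on stdout so the shell pipes it to ssh
import sys

data = 'foobarbaz\0' * 1024 * 1024   # stand-in for the real binary string
sys.stdout.write(data)
sys.stdout.flush()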