Piping Binary Data from a Python GUI to C++ and Back Again


Question


I have been working on a project that pipes data back and forth between a QGIS plugin written in Python and some image-processing code I have written in C++. After some community help via the following two questions, I have a console-based sample working that simulates an image, pipes it over to my C++ code, does some nontrivial processing, and then pipes the result back and unpacks it correctly. I am having trouble porting this to GUI-based code.

Previous questions for reference:

Data corruption Piping between C++ and Python

piping binary data between python and c++

The current (and, I firmly believe, last) hurdle is that when I move from the PyDev environment to my QGIS plugin (which is GUI based), the code that puts stdin and stdout into binary mode raises an error.

My working Python code, run from PyDev, is:

import subprocess
import struct
import sys
import os, msvcrt
msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY)
msvcrt.setmode(sys.stdin.fileno(), os.O_BINARY)
import numpy as np
import datetime

#set up the variables needed 
bytesPerDouble = 8
sizeX = 2000
sizeY = 2000
offset = sizeX*sizeY
totalBytesPerArray = sizeX*sizeY*bytesPerDouble
totalBytes = totalBytesPerArray*2                   #the 2 is because we pass 2 different versions of the 2D array

#setup the testing data array 
a = np.zeros(sizeX*sizeY*2, dtype='d')
for i in range(sizeX):
    for j in range(sizeY):
        a[j+i*sizeY] = i
        a[j+i*sizeY+offset] = i
        if i % 10 == 0:
           a[j+i*sizeY+offset] = j

data = a.tobytes('C')      

strTotalBytes = str(totalBytes)
strLineBytes  = str(sizeY*bytesPerDouble)

#communicate with c++ code
print("starting C++ code")     
command =   "C:\Python27\PythonPipes.exe"
proc = subprocess.Popen([command, strTotalBytes, strLineBytes, str(sizeY), str(sizeX)], stdin=subprocess.PIPE,stderr=subprocess.PIPE,stdout=subprocess.PIPE)

ByteBuffer = (data)
proc.stdin.write(ByteBuffer)

starttime = datetime.datetime.now()

print("Getting Status from C++")
for i in range(sizeX-6):
    returnvalues = proc.stdout.read(4)
    a = buffer(returnvalues)
    b = struct.unpack_from('i', a)
    currenttime = datetime.datetime.now()
    deltatime = (currenttime-starttime).total_seconds()
    print "ETA for processing completion " + str( ((sizeX + 0.0) / (b[0] + 0.0) - 1.0)*deltatime) + " Seconds"


print("Reading results back from C++")
for i in range(sizeX):
    returnvalues = proc.stdout.read(sizeY*bytesPerDouble)
    a = buffer(returnvalues)
    b = struct.unpack_from(str(sizeY)+'d', a)
    print str(b) + " " + str(i)

print('done')
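As a side note, the read-back loop above could also unpack each row with numpy.frombuffer instead of struct.unpack_from. This is only a sketch of an alternative, reusing the same sizeY, bytesPerDouble, and proc from the script above; it is not part of the original program:

import numpy as np

#sketch: view one returned row of bytes as sizeY doubles without building a struct format string
returnvalues = proc.stdout.read(sizeY*bytesPerDouble)        #raw bytes for one row
row = np.frombuffer(returnvalues, dtype='d', count=sizeY)     #read-only array of sizeY doubles
print str(row[0]) + " ... " + str(row[-1])                    #same values struct.unpack_from returns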

When I move this code over to my QGIS module (which is built with Plugin Builder and QtCreator) using this code:

import resources
from qgis.core import *
import qgis.utils
import time
import struct
import subprocess
import os, msvcrt
import sys

msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY)
msvcrt.setmode(sys.stdin.fileno(), os.O_BINARY)

import datetime




class CreateRasterLayer:


    def __init__(self, ex_iface):
        self.iface = ex_iface


    def setupImageSpacing(self):
        #grab the layers in QGIS
        layers = self.iface.legendInterface().layers()

        #grab "gsd" per pixel
        self.deltaX = (layers[0].extent().xMaximum() -layers[0].extent().xMinimum())/layers[0].width()
        self.deltaY = (layers[0].extent().yMaximum() -layers[0].extent().yMinimum())/layers[0].height()

        #get overlap of layers
        self.AOI = layers[0].extent().intersect(layers[1].extent())

        self.width  = int((self.AOI.xMaximum() - self.AOI.xMinimum()) // self.deltaX)
        self.height = int((self.AOI.yMaximum() - self.AOI.yMinimum()) // self.deltaY)

        print(self.AOI.asWktCoordinates())
        print(self.width)
        print(self.height)
        entropyRadius = 7
        self.CalculateEntropy(layers, entropyRadius)  

    def CalculateEntropy(self, layers, radius):
        #variables used in this Python Script
        bytesPerNumber = 8                          # Python uses 8 bytes for all floating point numbers
        bytesPerColumn = self.height*bytesPerNumber # This is the number of bytes in one column of data. data will be passed to c++ program 1 column at a time over pipes
        bytesPerLayer = self.width * bytesPerColumn # This is the number of bytes in one of the layers that is going to be sent to the c++ program
        RedBytes = bytearray(bytesPerLayer*2)       # Byte array that will be passed to the C++ program. This holds the Red channel data for both layers, for testing purposes.

        #arguments that will be passed to the c++ program  
        strTotalBytes = str(bytesPerLayer*2)        #number of bytes that will be passed to the C++ program total
        strLineBytes  = str(bytesPerColumn)         #number of bytes that will be passed to the C++ program at a time
        strHeight       = str(self.height)               #number of doubles that are being passed to the C++ program at a time
        strWidth        = str(self.width)                #number of column passes needed to completely transfer one layer to the C++ program

        #start the C++ program 
        print("starting C++ code for processing layers")   
        command = ["C:\Python27\PythonPipes.exe", strTotalBytes , strLineBytes , strHeight, strWidth]

        proc = subprocess.Popen( command, stdin=subprocess.PIPE,stdout=subprocess.PIPE,stderr=subprocess.PIPE)

        print("Piping layer 1 -> C++")
        print(time.ctime(time.time()))
        currentlayer = 0
        value = bytearray(bytesPerColumn)

        #read the data in the first layer
        for i in range(self.width):
            for j in range(self.height):
                #calculate the index of the byte array where this data should be placed
                currentindex = bytesPerLayer*currentlayer + (j+i*self.height)*bytesPerNumber 
                #retrieve the value of the pixel at the current raster  layer location
                pixelData0 = layers[0].dataProvider().identify(QgsPoint(self.AOI.xMinimum()+i*self.deltaX, self.AOI.yMinimum()+j*self.deltaY),QgsRaster.IdentifyFormatValue)
                #convert the pixel data for the Red Channel into its 'C' byte ordering and assign it to the RedBytes array
                RedBytes[ currentindex: currentindex + bytesPerNumber] = struct.pack('d',pixelData0.results()[1])
            #copy bytes into the Buffer for piping    
            RedBuffer = buffer(RedBytes, bytesPerLayer*currentlayer + (i*self.height)*bytesPerNumber, bytesPerColumn)
            #pipe the data for the whole column over to the C++ program
            proc.stdin.write(RedBuffer)

        print("Piping layer 2 -> C++")
        print(time.ctime(time.time()))
        currentlayer += 1        

        #comments the same as the previous loops
        for i in range(self.width):
            for j in range(self.height):
                currentindex = bytesPerLayer*currentlayer + (j+i*self.height)*bytesPerNumber 
                pixelData1 = layers[1].dataProvider().identify(QgsPoint(self.AOI.xMinimum()+i*self.deltaX, self.AOI.yMinimum()+j*self.deltaY),QgsRaster.IdentifyFormatValue)
                RedBytes[ currentindex: currentindex + bytesPerNumber] = struct.pack('d',pixelData1.results()[1])
            RedBuffer = buffer(RedBytes, bytesPerLayer*currentlayer + (i*self.height)*bytesPerNumber, bytesPerColumn)
            proc.stdin.write(RedBuffer)

        print("Data piped")            
        print(time.ctime(time.time()))

        starttime = datetime.datetime.now()
        print("Getting Status from C++")
        for i in range(self.width-6):
            returnvalues = proc.stdout.read(4)
            a = buffer(returnvalues)
            b = struct.unpack_from('i', a)
            currenttime = datetime.datetime.now()
            deltatime = (currenttime-starttime).total_seconds()
            print "ETA for processing completion " + str( ((self.width + 0.0) / (b[0] + 0.0) - 1.0)*deltatime) + " Seconds"

        #receive the data and process it here                 

        print("Reading results back from C++")
        for i in range(self.width):
            returnvalues = proc.stdout.read(bytesPerColumn)
            a = buffer(returnvalues)
            b = struct.unpack_from(str(self.height)+'d', a)
            print str(b) + " " + str(i)

I get the following error message:

msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY)
AttributeError: writeOut instance has no attribute 'fileno'

The last time I had a stdin/stdout error with this project, the problem was with the Popen statement: Popen required that I pipe all three std streams when I called it from GUI code. Since this error also involves the std streams, and the code works from the console in PyDev, I am assuming it is again a GUI-related problem.
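One quick way to check that assumption (a hypothetical snippet, not from the plugin) is to look at what QGIS has installed in place of the real console streams before calling setmode:

import sys

#diagnostic sketch: QGIS replaces sys.stdout with a GUI console wrapper object
print(type(sys.stdout))                   #e.g. the console's writeOut class named in the error message
print(hasattr(sys.stdout, 'fileno'))      #False here would explain the AttributeError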

I would like to know the proper way to set the std streams to binary mode from GUI code, or any other ideas that could resolve the error.

Update: I found a workaround, but will leave the question open in case someone has a real fix. I removed the Python code that was trying to set the std streams to binary mode and instead made that the first thing my C++ code does. When the Python code hits the Popen line, the C++ process sets the std streams to binary mode for me. I would still like to know whether Python has a way to get around this error.
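A possible Python-side guard (a sketch only, untested inside QGIS) is to call msvcrt.setmode only when a stream actually exposes a file descriptor. As far as I can tell, the pipe objects that subprocess.Popen creates for the child are already opened in binary mode ('wb'/'rb'), so these calls only matter for streams backed by a real console handle:

import os, sys, msvcrt

#sketch: only switch a stream to binary mode if it is backed by a real file descriptor
def set_binary(stream):
    try:
        fd = stream.fileno()              #raises AttributeError on the GUI wrapper
    except (AttributeError, IOError):
        return False                      #no real console handle, nothing to switch
    msvcrt.setmode(fd, os.O_BINARY)
    return True

set_binary(sys.stdout)
set_binary(sys.stdin)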

Source: https://stackoverflow.com/questions/37007900/piping-binary-data-from-python-gui-to-c-and-back-again
