How to perform an On-the-Fly encoding of a stream of still-pictures (video) for sending these from C# to Python? [closed]


Question


I'm getting both Depth and Color frames from the Kinect 2 using the Kinect SDK (C#), and I'm sending them to Python clients over ZeroMQ.

this.shorts     = new ushort[ 217088]; //  512 *  424
this.depthBytes = new   Byte[ 434176]; //  512 *  424 * 2
this.colorBytes = new   Byte[4147200]; // 1920 * 1080 * 2 (raw YUY2, not BGRA)

public void SendDepthFrame(DepthFrame depthFrame)
{
    // 16-bit depth values -> raw bytes -> ZeroMQ publisher
    depthFrame.CopyFrameDataToArray(this.shorts);
    Buffer.BlockCopy(this.shorts, 0, this.depthBytes, 0, this.depthBytes.Length);
    this.depthPublisher.SendByteArray(this.depthBytes);
}

public void SendColorFrame(ColorFrame colorFrame, WriteableBitmap map)
{
    // raw YUY2 color bytes -> ZeroMQ publisher (map is not used here)
    colorFrame.CopyRawFrameDataToArray(this.colorBytes);
    this.colorPublisher.SendByteArray(this.colorBytes);
}

Since I'm sending uncompressed data, I'm overloading the network and I'd like to compress these frames.

Is this possible for continuous stream processing?

I know that I could compress each frame as PNG/JPEG, but I would like to maintain the notion of a video stream.

The goal is to compress the data in C# and then decode it in Python.

Are there any libraries that allow doing that?
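
For reference, a minimal sketch of the general-purpose-compression route (not a true video codec), assuming the same SendByteArray helper as above; Compress and SendDepthFrameCompressed are hypothetical names:

using System.IO;
using System.IO.Compression;

// Hypothetical helper: deflate a frame buffer before publishing.
private byte[] Compress(byte[] frame)
{
    using (var output = new MemoryStream())
    {
        using (var deflate = new DeflateStream(output, CompressionLevel.Fastest))
        {
            deflate.Write(frame, 0, frame.Length);
        }
        // ToArray() remains valid even after the MemoryStream has been closed
        return output.ToArray();
    }
}

public void SendDepthFrameCompressed(DepthFrame depthFrame)
{
    depthFrame.CopyFrameDataToArray(this.shorts);
    Buffer.BlockCopy(this.shorts, 0, this.depthBytes, 0, this.depthBytes.Length);
    this.depthPublisher.SendByteArray(Compress(this.depthBytes));
}

Since DeflateStream emits a raw DEFLATE stream (no zlib header), a Python client should be able to inflate each message with zlib.decompressobj(-15). This is lossless but frame-by-frame, so it does not exploit temporal redundancy the way a video codec would.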


Answer 1:


Forget about compression for the moment and downscale for a PoC

If your design indeed makes sense, try to focus on the core CV functionality first, at the cost of reduced (downscaled) FPS, color depth, and resolution (in this order of priority).
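
For the depth stream, a PoC-grade downscale can be as simple as pixel decimation. A minimal sketch, assuming nearest-neighbour decimation is acceptable (Downscale2x is a hypothetical helper; no filtering is applied):

// Keep every 2nd pixel in each axis (512 x 424 -> 256 x 212),
// cutting the depth payload to 1/4 before it is published.
private ushort[] Downscale2x(ushort[] src, int width, int height)
{
    var dst = new ushort[(width / 2) * (height / 2)];
    for (int y = 0; y < height; y += 2)
    {
        for (int x = 0; x < width; x += 2)
        {
            dst[(y / 2) * (width / 2) + (x / 2)] = src[y * width + x];
        }
    }
    return dst;
}

Skipping every other frame on top of that (publishing only when a frame counter is even) gives another factor of 2 with a one-line change.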

The data you describe produces an egress stream of about 1 Gbps: ( 434,176 B depth + 4,147,200 B color ) × 30 FPS ≈ 137 MB/s ≈ 1.1 Gbps. The forthcoming CV processing will choke on that anyway, given its remarkable processing delay/latency and the memory-management bottlenecks of the interim data representations.

This said, the PoC may benefit from 1/4 to 1/10 slower FPS acquisition / stream processing, and the fine-tuned solution will show you how many nanoseconds per frame of stream-processing margin your code has (to finally decide whether there is enough time and processing power to include any sort of CODEC processing in the otherwise working pipeline).
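
Measuring that margin is straightforward once the pipeline runs. A minimal sketch using System.Diagnostics.Stopwatch, assuming a 30 FPS target (SendDepthFrameTimed is a hypothetical wrapper around the existing SendDepthFrame):

using System;
using System.Diagnostics;

private readonly Stopwatch sw = new Stopwatch();

public void SendDepthFrameTimed(DepthFrame depthFrame)
{
    sw.Restart();
    SendDepthFrame(depthFrame);              // the existing pipeline step
    sw.Stop();
    double usedMs   = sw.Elapsed.TotalMilliseconds;
    double budgetMs = 1000.0 / 30.0;         // ~33.3 ms per frame at 30 FPS
    Console.WriteLine($"frame: {usedMs:F3} ms used, {budgetMs - usedMs:F3} ms margin");
}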

A screenshot (lower-left window, delays in [usec]) illustrated the scale / order of magnitude of a few actual OpenCV processing latencies, at about 1/4 of your one FullHD still image, in real-world processing at a much smaller FPS on a single-threaded i7 / 3.33 GHz device. There, the L3 cache can carry as much as 15 MB of imagery data with fastest latencies of less than 13 ns (core-local access) to about 40 ns (core-remote NUMA access), and the block nature of CV-orchestrated image processing benefits a lot from a minimal, if not zero, cache-miss rate. But this is not a universal deployment-hardware scenario to rely on: the cost (penalty) of each cache miss, with the need to ask for and perform an access to data in the main DDR RAM, is about +100 ns (see https://stackoverflow.com/a/33065382/3666197).

Without a working pipeline, there are no quantitative data about the sustained stream processing or its per-frame margin, so the CODEC dilemma cannot be decided a priori, ahead of the proposed PoC implementation.



Source: https://stackoverflow.com/questions/37391013/how-to-perform-an-on-the-fly-encoding-of-a-stream-of-still-pictures-video-for
