yuv

[Image Processing Basics] On Image Format Conversion

不问归期 submitted on 2019-11-30 00:39:35
Question 1: What are codes such as COLOR_YUV2BGR_YUY2? What do COLOR_YUV and BGR_YUY2 stand for, and where do they come from?
Question 2: "The video camera capture software is customized to work with the UVC driver. The capture is taken in YUY2 format, and may therefore require conversion." The frame grabbed from the camera is then processed with cvtColor(frame, frame1, COLOR_YUV2BGR_YUY2); What I don't quite understand: the documentation says the capture is in YUY2 format, so why does cvtColor appear to convert from a "COLOR_YUV" format into a "BGR_YUY2" format?
References: 1. the cvtColor function; 2. ColorConversionCodes.
Source: https://www.cnblogs.com/happyamyhope/p/11541838.html
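For what it's worth, OpenCV's conversion codes are named COLOR_<source>2<destination>_<layout>, so COLOR_YUV2BGR_YUY2 reads as "YUV data stored in the packed YUY2 layout, converted to BGR"; it is one code, not a COLOR_YUV source plus a BGR_YUY2 destination. A minimal sketch, assuming the capture has been configured to hand back the raw packed YUY2 frame as a two-channel Mat (the function name is mine, for illustration):

    #include <opencv2/imgproc.hpp>

    // A minimal sketch: `yuy2` holds one packed YUY2 frame (CV_8UC2, two bytes per
    // pixel, Y0 U Y1 V ordering) as delivered by the customized UVC capture.
    void yuy2ToBgr(const cv::Mat &yuy2, cv::Mat &bgr)
    {
        // Source = YUV packed in YUY2 order, destination = BGR.
        cv::cvtColor(yuy2, bgr, cv::COLOR_YUV2BGR_YUY2);
    }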

Converting YUV420SP to YUV420P

怎甘沉沦 submitted on 2019-11-29 23:54:27
Question: How do I convert YUV420SP to YUV420P using ffmpeg's sws_scale or another efficient method?
Answer 1: There is an enormous number of different YUV formats available; as a starting point see http://www.fourcc.org/yuv.php You need to be more specific in your question, since "semi-planar" and "planar" don't really tell how the data is formatted. But as general advice, it's just raw data placed contiguously. All you need to do is read the correct amount of data from the in-stream and write it to
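Since the answer is cut off above, here is a minimal sketch of the idea it describes, assuming the NV12 flavor of YUV420SP (full Y plane followed by interleaved U,V pairs) and an I420/YUV420P destination with no stride padding; for NV21, swap the two chroma writes. sws_scale with AV_PIX_FMT_NV12 in and AV_PIX_FMT_YUV420P out would do the same job.

    #include <cstring>
    #include <cstddef>

    // Deinterleave one YUV420SP (NV12) frame into YUV420P (I420).
    // src and dst are both width*height*3/2 bytes.
    void nv12_to_i420(const unsigned char *src, unsigned char *dst, int width, int height)
    {
        const std::size_t y_size  = (std::size_t)width * height;
        const std::size_t uv_size = y_size / 4;

        std::memcpy(dst, src, y_size);                  // Y plane is identical

        const unsigned char *src_uv = src + y_size;     // interleaved U,V,U,V,...
        unsigned char *dst_u = dst + y_size;
        unsigned char *dst_v = dst + y_size + uv_size;

        for (std::size_t i = 0; i < uv_size; ++i) {
            dst_u[i] = src_uv[2 * i];                   // NV12: U first; swap for NV21
            dst_v[i] = src_uv[2 * i + 1];
        }
    }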

iOS - How to draw a YUV image using openGL

回眸只為那壹抹淺笑 submitted on 2019-11-29 23:14:40
Currently, I am trying to draw an image using openGL (the image updates very often, and thus must be redrawn). Previously, I was converting my image from YUV to RGB, and then using this new image to draw with openGL. All worked fine, but the conversion process was not particularly fast. I am now attempting to change the code so that the conversion is taken care of in the openGL shaders. After looking around, I've found a couple code snippets (particularly the shaders and the bulk of my renderImage function) that have helped me get a baseline, but I can't seem to actually get the image to draw
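For reference, the fragment shader such snippets typically boil down to, shown here as a GLSL ES string inside C++; the texture and varying names (y_texture, u_texture, v_texture, v_texCoord) are placeholders of mine, and the constants are the full-range BT.601 ones, so adjust them to match the source.

    // A typical planar YUV -> RGB fragment shader (OpenGL ES 2.0), as a C++ raw string.
    // Assumes three single-channel textures: y_texture, u_texture, v_texture.
    static const char *kYuvFragmentShader = R"(
    varying highp vec2 v_texCoord;
    uniform sampler2D y_texture;
    uniform sampler2D u_texture;
    uniform sampler2D v_texture;

    void main()
    {
        highp float y = texture2D(y_texture, v_texCoord).r;
        highp float u = texture2D(u_texture, v_texCoord).r - 0.5;
        highp float v = texture2D(v_texture, v_texCoord).r - 0.5;

        // BT.601 full-range YUV -> RGB
        highp float r = y + 1.402 * v;
        highp float g = y - 0.344 * u - 0.714 * v;
        highp float b = y + 1.772 * u;

        gl_FragColor = vec4(r, g, b, 1.0);
    }
    )";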

YUV420 to YUV444, YUV420 to RGB

两盒软妹~` submitted on 2019-11-29 21:42:27
I think everyone already knows the principle behind the YUV color representation:
    Y = 0.299R + 0.587G + 0.114B
    U = -0.147R - 0.289G + 0.436B
    V = 0.615R - 0.515G - 0.100B
    R = Y + 1.14V
    G = Y - 0.39U - 0.58V
    B = Y + 2.03U
If you convert from RGB to YUV with the formulas above, one YUV frame comes out the same size as the RGB frame (if each pixel component is stored in 8 bits). The YUV sampling formats include YUV 4:4:4, YUV 4:2:2, YUV 4:2:0 and so on. Here I describe how to convert YUV 4:2:0 sampling into YUV 4:4:4.
YUV444 (4x4 image): every pixel of the 4x4 image carries all three YUV components, so it occupies 4*4*3 = 48 bytes.
YUV 4:2:0 sampling: the U and V components are sampled on alternate rows, and within a sampled row only every other column is sampled:
    Y00 Y01 Y02 Y03
    Y10 Y11 Y12 Y13
    Y20 Y21 Y22 Y23
    Y30 Y31 Y32 Y33

    U00  ?  U02  ?
     ?   ?   ?   ?
    U20  ?  U22  ?
     ?   ?   ?   ?

     ?   ?   ?   ?
    V10  ?  V12  ?
     ?   ?   ?   ?
    V30  ?  V32  ?
From the comparison above, the key to converting YUV420 into YUV444 is interpolating values into the gaps left by the subsampling.
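A minimal sketch of the simplest such conversion, assuming the data is stored as planar I420 (full Y plane, then quarter-size U and V planes), which is the most common in-memory layout for 4:2:0; it fills the gaps by replicating each U/V sample over its 2x2 block rather than interpolating.

    #include <cstring>
    #include <cstddef>

    // Expand planar YUV420 (I420) to planar YUV444 by nearest-neighbour replication.
    // width and height are assumed even, with no stride padding.
    void yuv420p_to_yuv444p(const unsigned char *src, unsigned char *dst, int width, int height)
    {
        const std::size_t y_size = (std::size_t)width * height;
        const unsigned char *src_u = src + y_size;
        const unsigned char *src_v = src + y_size + y_size / 4;
        unsigned char *dst_u = dst + y_size;
        unsigned char *dst_v = dst + 2 * y_size;

        std::memcpy(dst, src, y_size);                         // Y plane is unchanged

        for (int row = 0; row < height; ++row) {
            for (int col = 0; col < width; ++col) {
                std::size_t sub  = (std::size_t)(row / 2) * (width / 2) + (col / 2);
                std::size_t full = (std::size_t)row * width + col;
                dst_u[full] = src_u[sub];                      // replicate over the 2x2 block
                dst_v[full] = src_v[sub];
            }
        }
    }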

YUV to RGBA on Apple A4, should I use shaders or NEON?

最后都变了- submitted on 2019-11-29 20:08:43
I'm writing a media player framework for Apple TV, using OpenGL ES and ffmpeg. Conversion to RGBA is required for rendering on OpenGL ES; software conversion using swscale is unbearably slow, so using information from the internet I came up with two ideas: using NEON (like here) or using fragment shaders with GL_LUMINANCE and GL_LUMINANCE_ALPHA. As I know almost nothing about OpenGL, the second option still doesn't work :) Can you give me any pointers on how to proceed? Thank you in advance. It is most definitely worthwhile learning OpenGL ES 2.0 shaders: you can load-balance between the GPU and CPU (e.g.
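To make the GL_LUMINANCE / GL_LUMINANCE_ALPHA idea concrete, a minimal texture-upload sketch for OpenGL ES 2.0, assuming a planar Y plane plus an NV12-style interleaved UV plane from the decoder; the function and parameter names are invented for illustration.

    #include <OpenGLES/ES2/gl.h>   // iOS / Apple TV OpenGL ES 2.0 header
    #include <cstdint>

    // Upload one frame: Y as a single-channel (GL_LUMINANCE) texture, interleaved UV
    // as a two-channel (GL_LUMINANCE_ALPHA) texture at half resolution.
    // yTex/uvTex are texture ids created earlier with glGenTextures.
    void uploadYuvTextures(GLuint yTex, GLuint uvTex,
                           const uint8_t *yPlane, const uint8_t *uvPlane,
                           int width, int height)
    {
        // For widths that are not multiples of 4, call glPixelStorei(GL_UNPACK_ALIGNMENT, 1) first.
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, yTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0,
                     GL_LUMINANCE, GL_UNSIGNED_BYTE, yPlane);

        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, uvTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE_ALPHA, width / 2, height / 2, 0,
                     GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, uvPlane);

        // The fragment shader then samples Y from .r of the first texture, U/V from
        // .r/.a of the second, and applies the usual YUV -> RGB matrix.
    }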

Notes on handling 10-bit and 8-bit YUV with ffmpeg

不问归期 submitted on 2019-11-29 09:41:08
Part three of my series on processing YUV video with ffmpeg. I recently noticed that most of the YUV files in my dataset are 8-bit, but some are 10-bit or 16-bit; the usual YUV players cannot open those, which also gets in the way of building the dataset. So I decided to handle them with ffmpeg and write the method down here. 10-bit can hold more colors and gives a better dynamic range. ffmpeg defines a large number of YUV pixel formats, for example:
    PIX_FMT_YUV420P9BE,  ///< planar YUV 4:2:0, 13.5bpp, (1 Cr & Cb sample per 2x2 Y samples), big-endian
    PIX_FMT_YUV420P9LE,  ///< planar YUV 4:2:0, 13.5bpp, (1 Cr & Cb sample per 2x2 Y samples), little-endian
    PIX_FMT_YUV420P10BE, ///< planar YUV 4:2:0, 15bpp, (1 Cr & Cb sample per 2x2 Y samples), big-endian
    PIX_FMT_YUV420P10LE, ///< planar YUV 4:2:0, 15bpp, (1 Cr & Cb sample per 2x2 Y samples), little-endian
    PIX_FMT_YUV422P10BE, ///< planar
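The post is truncated here, but for completeness, a minimal libswscale sketch of the kind of 10-bit-to-8-bit conversion described, with the frame buffers supplied by the caller. From the command line, something along the lines of ffmpeg -f rawvideo -pix_fmt yuv420p10le -s 1920x1080 -i in.yuv -f rawvideo -pix_fmt yuv420p out.yuv (sizes are placeholders) does the same job.

    #include <libswscale/swscale.h>
    #include <libavutil/imgutils.h>

    /* Convert one yuv420p10le frame (10-bit, little-endian) to 8-bit yuv420p. */
    int convert_10bit_to_8bit(const uint8_t *const src_data[4], const int src_linesize[4],
                              uint8_t *dst_data[4], int dst_linesize[4],
                              int width, int height)
    {
        struct SwsContext *ctx = sws_getContext(width, height, AV_PIX_FMT_YUV420P10LE,
                                                width, height, AV_PIX_FMT_YUV420P,
                                                SWS_POINT, NULL, NULL, NULL);
        if (!ctx)
            return -1;
        /* sws_scale handles the bit-depth reduction as well as any resizing. */
        sws_scale(ctx, src_data, src_linesize, 0, height, dst_data, dst_linesize);
        sws_freeContext(ctx);
        return 0;
    }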

Yuv (NV21) image converting to bitmap [duplicate]

大兔子大兔子 submitted on 2019-11-29 08:31:49
This question already has an answer here: Convert NV21 byte array into bitmap readable format [duplicate] (2 answers). I am trying to capture images from the camera preview and do some drawing on them. The problem is that I get only about 3-4 fps of drawing, and half of the frame processing time is spent receiving and decoding the NV21 image from the camera preview and converting it to a bitmap. I have code for this task, which I found in another Stack Overflow question. It does not seem to be fast, but I do not know how to optimize it. It takes about 100-150 ms on a Samsung Note 3 for a 1920x1080 image. How can I make it work
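The linked answer is not reproduced here, but one common way to cut this cost is to move the per-pixel conversion into native code; a minimal fixed-point NV21-to-RGBA sketch of that inner loop (function name and constants are illustrative approximations of BT.601, not the code from the question):

    #include <cstdint>
    #include <cstddef>

    static inline uint8_t clamp8(int x) { return (uint8_t)(x < 0 ? 0 : (x > 255 ? 255 : x)); }

    // Fixed-point NV21 -> RGBA8888; rgba must hold width*height*4 bytes.
    void nv21ToRgba(const uint8_t *nv21, uint8_t *rgba, int width, int height)
    {
        const uint8_t *yPlane  = nv21;
        const uint8_t *vuPlane = nv21 + (std::size_t)width * height;   // NV21: V,U interleaved

        for (int row = 0; row < height; ++row) {
            for (int col = 0; col < width; ++col) {
                int y = yPlane[(std::size_t)row * width + col];
                std::size_t vu = (std::size_t)(row / 2) * width + (std::size_t)(col & ~1);
                int v = vuPlane[vu]     - 128;
                int u = vuPlane[vu + 1] - 128;

                uint8_t *out = rgba + 4 * ((std::size_t)row * width + col);
                out[0] = clamp8(y + ((351 * v) >> 8));              // R ~= Y + 1.37 V
                out[1] = clamp8(y - ((179 * v + 86 * u) >> 8));     // G ~= Y - 0.70 V - 0.34 U
                out[2] = clamp8(y + ((443 * u) >> 8));              // B ~= Y + 1.73 U
                out[3] = 255;                                       // opaque alpha
            }
        }
    }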

Converting from YUV colour space to RGB using OpenCV

泄露秘密 submitted on 2019-11-29 08:05:33
I am trying to convert a YUV image to RGB using OpenCV. I am a complete novice at this. I have created a function which takes a YUV image as source and converts it into RGB. It looks like this:

    void ConvertYUVtoRGBA(const unsigned char *src, unsigned char *dest, int width, int height)
    {
        cv::Mat myuv(height + height/2, width, CV_8UC1, &src);
        cv::Mat mrgb(height, width, CV_8UC4, &dest);
        cv::cvtColor(myuv, mrgb, CV_YCrCb2RGB);
        return;
    }

Should this work? Do I need to convert the Mat into char* again? I am at a loss and any help will be greatly appreciated. There is not enough detail in your
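Two things in the snippet above look suspect: &src and &dest pass the address of the pointer variables instead of the pixel buffers themselves, and CV_YCrCb2RGB expects a packed 3-channel YCrCb image rather than the single-channel height*3/2 layout built here. A corrected sketch, assuming the buffer is semi-planar NV21 data (use the matching conversion code for NV12 or I420):

    #include <opencv2/imgproc.hpp>

    // Converts an NV21 buffer (width x height*3/2 bytes) to RGBA.
    void ConvertYUVtoRGBA(const unsigned char *src, unsigned char *dest, int width, int height)
    {
        // Wrap the existing buffers; no copy is made. cv::Mat wants a non-const pointer.
        cv::Mat myuv(height + height / 2, width, CV_8UC1, const_cast<unsigned char *>(src));
        cv::Mat mrgb(height, width, CV_8UC4, dest);

        // One-channel YUV 4:2:0 (NV21) in, four-channel RGBA out.
        cv::cvtColor(myuv, mrgb, cv::COLOR_YUV2RGBA_NV21);
    }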

Analysis of the MPP module and sample_venc

寵の児 submitted on 2019-11-29 06:49:46
Overall architecture of the samples
1. The sample directory contains many example programs, so there are many main functions; common holds the shared helper functions. The one analyzed here is sample_venc.
2. The basic structure: main in venc calls the helper functions in venc, which call the helper functions in common, which call the MPP API, which finally drives the hardware units inside the HI3518E.
3. A few basic concepts first:
    H.264 / H.265 / MJPEG: video coding standards
    1080P, 720P, VGA (640x480), D1 (720x576): video resolutions (definition)
    fps (frames per second): frame rate
Image pixel formats in computers
RGB:
1. Representing color with RGB
    (1) RGB comes in several sub-formats such as RGB565, RGB888 and ARGB (see the packing sketch after this list).
    (2) The essence of RGB: split the color into its R, G and B parts and record the intensity of each.
    (3) Strength of RGB: easy to express digitally, so it is widely used in digital color displays, computer programming and so on.
    (4) Weakness of RGB: poor compatibility with traditional grayscale images, and it is not a very efficient way to express color.
2. rawRGB and the image capture pipeline
    (1) The capture pipeline: light hits the scene and is reflected -> the lens focuses it -> the sensor performs photoelectric conversion -> the ADC outputs rawRGB.
    (2) Each sensor pixel only captures the intensity of light of one particular color, so each sensor pixel can only be R, G or B.
    (3) rawRGB and RGB are both used to describe an image
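As a small illustration of the RGB565 sub-format mentioned in point 1, a packing helper (5 bits of red, 6 of green, 5 of blue in one 16-bit word):

    #include <cstdint>

    // Pack 8-bit R, G, B samples into one RGB565 pixel.
    static inline uint16_t packRgb565(uint8_t r, uint8_t g, uint8_t b)
    {
        return (uint16_t)(((r & 0xF8) << 8) |   // top 5 bits of R -> bits 15..11
                          ((g & 0xFC) << 3) |   // top 6 bits of G -> bits 10..5
                          ( b >> 3));           // top 5 bits of B -> bits 4..0
    }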

sws_scale YUV --> RGB distorted image

别来无恙 submitted on 2019-11-29 02:08:24
I want to convert a YUV420P image (received from an H.264 stream) to RGB, while also resizing it, using sws_scale. The size of the original image is 480 × 800. Converting with the same dimensions works fine, but when I try to change the dimensions I get a distorted image, with the following pattern:
    481 × 800 yields a distorted black-and-white image that looks as if it is cut in the middle
    482 × 800 is even more distorted
    483 × 800 is distorted but in color
    484 × 800 is OK (scaled correctly)
The pattern continues: scaling only works correctly when the difference between the two widths is divisible by 4.
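Symptoms like these usually point at stride (linesize) handling: sws_scale reads and writes whole linesizes, which are typically rounded up from the pixel width, so copying or displaying the output as if linesize equalled width shears the image for widths that are not multiples of 4. A minimal sketch that lets ffmpeg pick aligned destination strides and then honours them (a hypothetical setup, not the asker's code):

    #include <libswscale/swscale.h>
    #include <libavutil/imgutils.h>
    #include <libavutil/mem.h>

    /* Scale a YUV420P frame to dstW x dstH RGB24, honouring linesizes. */
    int scale_yuv_to_rgb(const uint8_t *const src_data[4], const int src_linesize[4],
                         int srcW, int srcH, int dstW, int dstH)
    {
        uint8_t *dst_data[4];
        int dst_linesize[4];

        /* Let ffmpeg pick padded, aligned linesizes for the destination. */
        if (av_image_alloc(dst_data, dst_linesize, dstW, dstH, AV_PIX_FMT_RGB24, 16) < 0)
            return -1;

        struct SwsContext *ctx = sws_getContext(srcW, srcH, AV_PIX_FMT_YUV420P,
                                                dstW, dstH, AV_PIX_FMT_RGB24,
                                                SWS_BILINEAR, NULL, NULL, NULL);
        if (!ctx) {
            av_freep(&dst_data[0]);
            return -1;
        }
        sws_scale(ctx, src_data, src_linesize, 0, srcH, dst_data, dst_linesize);

        /* When consuming dst_data[0], step through rows with dst_linesize[0],
           which may be larger than dstW * 3. */

        sws_freeContext(ctx);
        av_freep(&dst_data[0]);
        return 0;
    }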