yuv

How to convert YUV420P image to JPEG using ffmpeg's libraries?

帅比萌擦擦* submitted on 2019-12-02 06:31:55
I'm trying to convert a YUV420P image (`AV_PIX_FMT_YUV420P`) to a JPEG using ffmpeg's libavformat and libavcodec. This is my code so far:

```c
AVFormatContext* pFormatCtx;
AVOutputFormat* fmt;
AVStream* video_st;
AVCodecContext* pCodecCtx;
AVCodec* pCodec;
uint8_t* picture_buf;
AVFrame* picture;
AVPacket pkt;
int y_size;
int got_picture = 0;
int size;
int ret = 0;
FILE *in_file = NULL;                      // YUV source
int in_w = 720, in_h = 576;                // YUV width and height
const char* out_file = "encoded_pic.jpg";  // output file

in_file = fopen(argv[1], "rb");
av_register_all();
pFormatCtx = avformat_alloc_context();
fmt =
```

manipulating luma in YUV color space

耗尽温柔 submitted on 2019-12-02 05:33:36
I want to set contrast/brightness on an image that is in `byte[]` form. The image is in YCbCr_420 color space (Android camera). I am getting the luma value this way:

```java
for (int j = 0, yp = 0; j < height; j++) {
    for (int i = 0; i < width; i++, yp++) {
        int y = (0xff & (yuv420sp[yp])) - 16;
    }
}
```

How do I manipulate the y value to make the image lighter? I am also not sure whether this is a good way to write the value back:

```java
yuv420sp[yp] = (byte) ((0xff & y) + 16);
```

Thanks for any help. The little that I know from this API is that the values for the 3 channels are concatenated in a byte array. So, likewise in Windows working with RGB

QOMX_COLOR_FormatYUV420PackedSemiPlanar64x32Tile2m8ka converter

烂漫一生 submitted on 2019-12-02 04:44:37
I need to handle YUV data from the H/W decoding output on Android. I'm using a Nexus 4, and the decoding output format is the QOMX_COLOR_FormatYUV420PackedSemiPlanar64x32Tile2m8ka type. But I need YUV420 planar format data, so it needs to be converted. Could you share a conversion function or any way to do this? Source: https://stackoverflow.com/questions/21797923/qomx-color-formatyuv420packedsemiplanar64x32tile2m8ka-converter

YUV420 to BGR image from pixel pointers

拜拜、爱过 submitted on 2019-12-02 00:05:38
I am capturing raw output from a decoder which is YUV420. I have three pointers: Y (1920*1080), U (960*540) and V (960*540), each to a separate plane. I want to save the image as JPEG using OpenCV. I tried using OpenCV's cvtColor:

```cpp
cv::Mat i_image(cv::Size(columns, rows), CV_8UC3, dataBuffer);
cv::Mat i_image_BGR(cv::Size(columns, rows), CV_8UC3);
cvtColor(i_image, i_image_BGR, cv::COLOR_YCrCb2BGR);
cv::imwrite("/data/data/org.myproject.debug/files/pic1.jpg", i_image_BGR);
```

But the output image that gets saved is wrong. Can someone please suggest the proper way of saving the image? YUV Binary files for

iOS RGBA to YV12

可紊 submitted on 2019-12-01 22:02:57
Introduction. Because my project involves screen sharing, I needed to learn a bit about images, and the first thing to tackle is RGB-to-YUV conversion. The image-processing and compression work is handled by a colleague who specializes in it, so here I'm just writing down my own understanding; if anything is wrong, corrections are welcome, thanks. You can see a better-formatted version here.

Main text. Background knowledge:

RGB. The RGB color model, also called the red-green-blue color model, is an additive color model: red, green and blue primary light is added together in varying proportions to produce a broad range of colors.

RGB32. RGB32 uses 32 bits per pixel; the R, G and B components take 8 bits each, and the remaining 8 bits serve as an alpha channel or go unused. (ARGB32 is RGB24 with an alpha channel.) Note that in memory the components are laid out as BGRA BGRA BGRA…. A pixel can usually be manipulated through the RGBQUAD structure, defined as:

```c
typedef struct tagRGBQUAD {
    BYTE rgbBlue;      // blue component
    BYTE rgbGreen;     // green component
    BYTE rgbRed;       // red component
    BYTE rgbReserved;  // reserved (alpha channel, or ignored)
} RGBQUAD;
```

YUV. YUV is a color encoding method, commonly used in image-processing components. When YUV is used to encode photos or video

How to give YUV data input to Opencv?

☆樱花仙子☆ submitted on 2019-12-01 09:45:09
I am a beginner with OpenCV. In my new OpenCV project I have to capture video frames from a camera device and give them to OpenCV for processing. But right now my camera is not working (hardware issue), and I need to test the OpenCV application with a YUV file obtained from another camera device of the same type. So my questions are: How can I give YUV data to OpenCV? Does OpenCV support YUV data? From my investigation I learned that OpenCV converts captured frames to a Mat format. So is there any way to convert the YUV data directly to a Mat object and give it to OpenCV for

avcodec YUV to RGB

时光怂恿深爱的人放手 submitted on 2019-12-01 07:02:20
I'm trying to convert a YUV frame to RGB using libswscale. Here is my code:

```c
AVFrame *RGBFrame;
SwsContext *ConversionContext;

ConversionContext = sws_getCachedContext(NULL,
                                         FrameWidth, FrameHeight, AV_PIX_FMT_YUV420P,
                                         FrameWidth, FrameHeight, AV_PIX_FMT_RGB24,
                                         SWS_BILINEAR, 0, 0, 0);
RGBFrame = av_frame_alloc();
avpicture_fill((AVPicture *)RGBFrame, &FillVect[0], AV_PIX_FMT_RGB24,
               FrameWidth, FrameHeight);
sws_scale(ConversionContext, VideoFrame->data, VideoFrame->linesize,
          0, VideoFrame->height, RGBFrame->data, RGBFrame->linesize);
```

My program segfaults in the sws_scale function.

CVOpenGLESTextureCacheCreateTextureFromImage returns -6683 (kCVReturnPixelBufferNotOpenGLCompatible)

为君一笑 submitted on 2019-12-01 03:26:33
I extracted the Y, U and V data from a video frame separately and saved them in data[0], data[1], data[2]. The frame size is 640*480. Now I create the pixel buffer as below:

```c
void *pYUV[3] = {data[0], data[1], data[2]};
size_t planeWidth[3]       = {640, 320, 320};   /* arrays, one entry per plane */
size_t planeHeight[3]      = {480, 240, 240};
size_t planeBytesPerRow[3] = {640, 320, 320};

CVReturn ret = CVPixelBufferCreateWithPlanarBytes(
        kCFAllocatorDefault, 640, 480,
        kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
        nil, nil, 3, pYUV, planeWidth, planeHeight, planeBytesPerRow,
        nil, nil, nil, &_pixelBuffer);
CVPixelBufferLockBaseAddress(_pixelBuffer, 0);
```