yuv

camera

Anonymous (unverified), submitted on 2019-12-03 00:39:02
1. How a camera forms an image: Light from the scene passes through the lens (LENS) and projects an optical image onto the surface of the image sensor (Sensor). The sensor converts it into an analog electrical signal, which an A/D (analog-to-digital) converter turns into a digital image signal. That signal is then processed by a digital signal processing chip (DSP), transferred over an I/O interface to the CPU for further processing, and the image can finally be viewed on the LCD.

The image sensor (SENSOR) is a semiconductor chip whose surface carries hundreds of thousands to several million photodiodes. When light strikes a photodiode, it generates charge.

Two sensor types are currently in use:
1) CCD (Charge Coupled Device): currently the more mature imaging device among high-pixel-count sensors; it reads out a current signal one row at a time.
2) CMOS (Complementary Metal Oxide Semiconductor): CMOS reads out a charge signal per pixel; it is more sensitive, faster, and consumes less power.

ISP performance is the key factor in how smooth the video is, and JPEG encoder performance is another key metric. JPEG encoding comes in two variants: hardware JPEG compression and software RGB compression.

The DSP control chip's job is to move the data captured by the sensor chip into the baseband quickly and to refresh the sensor in time; the quality of this control chip therefore directly determines picture quality (e.g., color saturation, sharpness) and smoothness.

2. Common camera data output formats
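The excerpt cuts off before the format list, but as a hedged illustration of one common camera output, here is how the Y channel can be pulled out of a packed YUYV (YUV 4:2:2) buffer in C; the function name and the no-padding assumption are mine:

    #include <stddef.h>
    #include <stdint.h>

    /* Packed YUYV stores two pixels in four bytes: Y0 U Y1 V.
     * Every even byte is a luma sample. */
    void yuyv_extract_luma(const uint8_t *yuyv, uint8_t *luma, int w, int h)
    {
        size_t pixels = (size_t)w * h;
        for (size_t i = 0; i < pixels; i++)
            luma[i] = yuyv[i * 2];
    }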

Camera.PreviewCallback equivalent in Camera2 API

独自空忆成欢, submitted on 2019-12-03 00:17:08
Is there any equivalent of Camera.PreviewCallback in Camera2, from API 21, better than mapping to a SurfaceTexture and pulling a Bitmap? I need to be able to pull preview data off of the camera as YUV.

EmcLIFT: You can start from the Camera2Basic sample code from Google. You need to add the surface of the ImageReader as a target to the preview capture request:

    // Set up a CaptureRequest.Builder with the output Surface.
    mPreviewRequestBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
    mPreviewRequestBuilder.addTarget(surface);
    mPreviewRequestBuilder.addTarget(mImageReader.getSurface());

Summary of handling 10-bit and 8-bit YUV with ffmpeg

Anonymous (unverified), submitted on 2019-12-03 00:03:02
Part three of my series on processing YUV video with ffmpeg. I recently noticed that while most of the YUV files in my dataset are 8-bit, some are 10-bit or 16-bit; the usual YUV players cannot open those, which also gets in the way of dataset preparation. So I decided to process them with ffmpeg and record the method here. 10-bit can hold more colors and gives a better dynamic range. ffmpeg defines many YUV pixel formats, for example:

    PIX_FMT_YUV420P9BE,  ///< planar YUV 4:2:0, 13.5bpp, (1 Cr & Cb sample per 2x2 Y samples), big-endian
    PIX_FMT_YUV420P9LE,  ///< planar YUV 4:2:0, 13.5bpp, (1 Cr & Cb sample per 2x2 Y samples), little-endian
    PIX_FMT_YUV420P10BE, ///< planar YUV 4:2:0, 15bpp, (1 Cr & Cb sample per 2x2 Y samples), big-endian
    PIX_FMT_YUV420P10LE, ///< planar YUV 4:2:0, 15bpp, (1 Cr & Cb sample per 2x2 Y samples), little-endian
    PIX_FMT_YUV422P10BE, ///< planar
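A hedged example of such a conversion (the 1920x1080 resolution and file names are assumptions; a raw .yuv stream has no header, so the input size and pixel format must be stated explicitly):

    ffmpeg -f rawvideo -pix_fmt yuv420p10le -video_size 1920x1080 -i input_10bit.yuv -f rawvideo -pix_fmt yuv420p output_8bit.yuv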

YUV 422, YUV 420, YUV 444

亡梦爱人, submitted on 2019-12-02 19:35:48
I have, for example, a 4x4 image. I want to extract the Y, U and V components separately. How do I do that if the image is YUV 422, YUV 420 or YUV 444? I am interested in knowing the structure of the array, i.e. how Y, U and V are stored in 422, 420 and 444, so that they can be accessed. Sebastian Dressler: This site gives you a pretty good overview of the different YUV formats. There's also a pixel structure given. For clarification: these numbers specify the color component subsampling. For instance YUV 444 = 4:4:4 subsampling, meaning that each of the three components (Y, U and V) has the same
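As a hedged sketch (assuming plain planar, contiguous storage with no row padding, the I420-style convention; packed and semi-planar variants lay the bytes out differently), the plane offsets in C:

    #include <stddef.h>

    /* Offsets of the Y, U and V planes in one contiguous planar buffer.
     * 4:2:0 -> chroma planes are w/2 x h/2 (wdiv = 2, hdiv = 2)
     * 4:2:2 -> chroma planes are w/2 x h   (wdiv = 2, hdiv = 1)
     * 4:4:4 -> chroma planes are w   x h   (wdiv = 1, hdiv = 1) */
    void planar_offsets(size_t w, size_t h, size_t wdiv, size_t hdiv,
                        size_t *y_off, size_t *u_off, size_t *v_off)
    {
        size_t chroma = (w / wdiv) * (h / hdiv);
        *y_off = 0;
        *u_off = w * h;           /* U plane starts right after Y */
        *v_off = *u_off + chroma; /* V plane starts right after U */
    }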

FFMPEG: Dumping YUV data into AVFrame structure

前提是你, submitted on 2019-12-02 18:38:10
I'm trying to dump YUV420 data into the AVFrame structure of FFMPEG. From the link http://ffmpeg.org/doxygen/trunk/structAVFrame.html I can derive that I need to put my data into data[AV_NUM_DATA_POINTERS] using linesize[AV_NUM_DATA_POINTERS]. The YUV data I'm trying to dump is YUV420 and the picture size is 416x240. So how do I dump/map this YUV data onto the AVFrame structure's fields? I know that linesize represents the stride, i.e., I suppose, the width of my picture. I have tried some combinations but do not get the output. I kindly request you to help me map the buffer. Thanks in
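One answer-style sketch, hedged: assuming the 416x240 frame sits in a single contiguous YUV420P buffer, libavutil's av_image_fill_arrays() can compute the data[] pointers and linesize[] values for you (the wrapper function itself is illustrative):

    #include <libavutil/frame.h>
    #include <libavutil/imgutils.h>

    /* Point an AVFrame at an existing, contiguous YUV420P buffer. */
    AVFrame *wrap_yuv420(uint8_t *yuv, int w, int h)
    {
        AVFrame *frame = av_frame_alloc();
        if (!frame)
            return NULL;
        frame->format = AV_PIX_FMT_YUV420P;
        frame->width  = w;   /* 416 in the question */
        frame->height = h;   /* 240 in the question */
        /* Fills data[0..2] and linesize[0..2] from one buffer:
         * data[0] -> Y (w*h), data[1] -> U, data[2] -> V (w/2 * h/2 each). */
        av_image_fill_arrays(frame->data, frame->linesize, yuv,
                             AV_PIX_FMT_YUV420P, w, h, 1);
        return frame;
    }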

Display YUV in OpenGl

别来无恙, submitted on 2019-12-02 17:19:46
I am having trouble displaying a raw YUV file that is in NV12 format. I can display a selected frame; however, it is still mainly in black and white with certain shades of pink and green. Here is what my output looks like. Anyway, here is how my program works. (This is done in Cocoa/Objective-C, but I need your expert advice on the program algorithm, not on syntax.) Prior to program execution, the YUV file is stored in a binary file named "test.yuv". The file is in NV12 format, meaning the Y plane is stored first, then the UV plane is interleaved. My file extraction has no problem because I did a
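Pink and green patches usually mean the interleaved UV plane is being indexed or offset wrongly. A hedged CPU-side sketch of correct NV12 addressing with an integer BT.601 conversion (a shader version follows the same math; the function names are mine):

    #include <stddef.h>
    #include <stdint.h>

    static uint8_t clamp8(int v) { return v < 0 ? 0 : v > 255 ? 255 : (uint8_t)v; }

    /* NV12: Y plane (w*h bytes), then one interleaved UV plane (w*h/2 bytes). */
    void nv12_to_rgb(const uint8_t *nv12, uint8_t *rgb, int w, int h)
    {
        const uint8_t *yp  = nv12;
        const uint8_t *uvp = nv12 + (size_t)w * h;  /* UV starts after Y */
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int Y = yp[y * w + x] - 16;
                /* One UV pair covers a 2x2 block of Y samples. */
                int U = uvp[(y / 2) * w + (x / 2) * 2]     - 128;
                int V = uvp[(y / 2) * w + (x / 2) * 2 + 1] - 128;
                uint8_t *p = rgb + 3 * ((size_t)y * w + x);
                p[0] = clamp8((298 * Y + 409 * V + 128) >> 8);           /* R */
                p[1] = clamp8((298 * Y - 100 * U - 208 * V + 128) >> 8); /* G */
                p[2] = clamp8((298 * Y + 516 * U + 128) >> 8);           /* B */
            }
        }
    }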

Weekly summary 2013-08-14: Setting up and using the Android NDK; processing YUV420SP images

十年热恋, submitted on 2019-12-02 16:51:17
Setting up an Android NDK development environment on Windows

Update: fairly recent versions of the Android NDK ship with a basic GNU toolchain of their own, so there is no longer any need to install the bulky cygwin or MSYS; just unpack the NDK and configure the compiler in Eclipse.

————————————————————————————————————

The Android NDK needs Linux development tools such as make and gdb, so a simulated Linux environment has to be installed. The most common choice is cygwin. MSYS should work too, but I haven't tried it myself; I leave that to inquisitive coders who despise cygwin's bulk and slowness!

cygwin has its own installer, the equivalent of a package manager on a Linux distribution, used to manage software. Open it, choose to install from the network, pick a suitable mirror, and, if you are lazy, simply select everything under the Devel category; after you click Next, the package manager resolves all the messy dependencies on its own and downloads and installs several gigabytes of packages. If you are fastidious, or your network is slow, you can pick the packages yourself, which installs far less. As an aside, the user experience of cygwin's graphical package manager is terrible: near the end of the installation I misread an option and accidentally clicked Back, and when I clicked Next again it uninstalled everything and then reinstalled and reconfigured it all, costing me another ten-plus minutes... Is there no more convenient tool like the mighty aptitude?

After installing cygwin, a little configuration is still needed. First, download the latest Android NDK and unpack it
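Once the NDK is unpacked, a native module is described by a jni/Android.mk file; a minimal, hedged example (the module and source names are placeholders):

    LOCAL_PATH := $(call my-dir)

    include $(CLEAR_VARS)
    LOCAL_MODULE    := yuv420sp_proc
    LOCAL_SRC_FILES := yuv420sp_proc.c
    include $(BUILD_SHARED_LIBRARY)

Running ndk-build in the project root then compiles this into a shared library that the Java side can load.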

Save frame from TangoService_connectOnFrameAvailable

旧城冷巷雨未停, submitted on 2019-12-02 11:58:36
How can I save a frame via TangoService_connectOnFrameAvailable() and display it correctly on my computer? As this reference page mentions, the pixels are stored in the HAL_PIXEL_FORMAT_YV12 format. In my callback function for TangoService_connectOnFrameAvailable, I save the frame like this:

    static void onColorFrameAvailable(void* context, TangoCameraId id, const TangoImageBuffer* buffer)
    {
        ...
        std::ofstream fp;
        fp.open(imagefile, std::ios::out | std::ios::binary);
        int offset = 0;
        for (int i = 0; i < buffer->height * 2 + 1; i++) {
            fp.write((char*)(buffer->data + offset), buffer->width);
            offset +
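The excerpt cuts off, but a loop like this ignores the row stride and the YV12 plane order. A hedged, stride-aware sketch in C (it assumes the chroma stride is stride/2; Android's HAL_PIXEL_FORMAT_YV12 definition additionally 16-byte-aligns the chroma stride, so adjust if your buffer does):

    #include <stdint.h>
    #include <stdio.h>

    /* Write a YV12 frame (Y plane, then V, then U) row by row,
     * dropping any padding bytes between rows. */
    void dump_yv12(FILE *fp, const uint8_t *data,
                   uint32_t width, uint32_t height, uint32_t stride)
    {
        const uint8_t *p = data;
        for (uint32_t i = 0; i < height; i++) {   /* Y plane */
            fwrite(p, 1, width, fp);
            p += stride;
        }
        /* V then U, each width/2 x height/2: height rows in total. */
        for (uint32_t i = 0; i < height; i++) {
            fwrite(p, 1, width / 2, fp);
            p += stride / 2;
        }
    }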

Reading YUV images in C

拈花ヽ惹草, submitted on 2019-12-02 09:27:02
Question: How do I read any YUV image? How can the dimensions of a YUV image be passed for reading into a buffer?

Answer 1: Usually, when people talk about YUV they talk about YUV 4:2:0. Your reference to "any YUV image" is misleading, because there are a number of different formats, and each is handled differently. For example, raw YUV 4:2:0 (by convention, files with the extension .yuv) doesn't contain any dimension data, whereas y4m files typically do. So you really need to know what sort of image you're
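Since a raw .yuv file has no header, the dimensions have to be supplied out of band. A hedged sketch for reading one 4:2:0 frame in C (the function name and error handling are illustrative):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Read one raw YUV 4:2:0 frame; the caller must know width and height. */
    uint8_t *read_yuv420_frame(const char *path, int w, int h)
    {
        size_t size = (size_t)w * h * 3 / 2;  /* Y + U/4 + V/4 */
        uint8_t *buf = malloc(size);
        if (!buf)
            return NULL;
        FILE *fp = fopen(path, "rb");
        if (!fp || fread(buf, 1, size, fp) != size) {
            if (fp) fclose(fp);
            free(buf);
            return NULL;
        }
        fclose(fp);
        return buf;  /* Y at buf, U at buf + w*h, V at buf + w*h*5/4 */
    }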

Android Renderscript - Rotate YUV data in Renderscript

故事扮演, submitted on 2019-12-02 08:08:15
Based on the discussion I had at Camera2 api Imageformat.yuv_420_888 results on rotated image, I wanted to know how to adjust the lookup done via the rsGetElementAt_uchar methods so that the YUV data is rotated by 90 degrees. I also have a project like the HdrViewfinder provided by Google. The problem is that the output is in landscape, because the output surface used as the target surface is connected to the YUV allocation, which does not care whether the device is in landscape or portrait mode. But I want to adjust the code so that it is in portrait mode. Therefore, I took a custom YUVToRGBA renderscript
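The index remapping itself is independent of Renderscript; as a hedged CPU-side C sketch for a planar 4:2:0 frame rotated 90 degrees clockwise (the same (x, y) remapping would go into the rsGetElementAt_uchar lookups):

    #include <stddef.h>
    #include <stdint.h>

    /* Rotate one w x h plane 90 degrees clockwise into an h x w plane. */
    static void rotate_plane_cw(const uint8_t *src, uint8_t *dst, int w, int h)
    {
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                /* (x, y) maps to (h - 1 - y, x) in the rotated image. */
                dst[(size_t)x * h + (h - 1 - y)] = src[(size_t)y * w + x];
    }

    /* Planar YUV420: rotate the Y, U and V planes independently. */
    void yuv420_rotate_cw(const uint8_t *src, uint8_t *dst, int w, int h)
    {
        size_t luma = (size_t)w * h, chroma = luma / 4;
        rotate_plane_cw(src, dst, w, h);
        rotate_plane_cw(src + luma, dst + luma, w / 2, h / 2);
        rotate_plane_cw(src + luma + chroma, dst + luma + chroma, w / 2, h / 2);
    }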