Question
I've been struggling to figure out what the problem is with my code. I'm creating a planar CVPixelBufferRef to write to an AVAssetWriter. This pixel buffer is created manually through some other process (i.e., I'm not getting these samples from the camera or anything like that). On the iOS Simulator, it has no problem appending the samples and creating a valid output movie.
But on the device, it immediately fails at the first sample and provides less than useless error information:
AVAssetWriterError: Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSUnderlyingError=0x12fd2c670 {Error Domain=NSOSStatusErrorDomain Code=-12780 "(null)"}, NSLocalizedFailureReason=An unknown error occurred (-12780), NSLocalizedDescription=The operation could not be completed}
I'm very new to pixel formats, and I wouldn't be surprised if I've somehow created invalid pixel buffers, but the fact that it works just fine on the Simulator (i.e., OS X) leaves me confused.
Here's my code:
const int pixelBufferWidth = img->get_width();
const int pixelBufferHeight = img->get_height();

size_t planeWidths[3];
size_t planeHeights[3];
size_t planeBytesPerRow[3];
void* planeBaseAddresses[3];

for (int c = 0; c < 3; c++) {
    int stride;
    const uint8_t* p = de265_get_image_plane(img, c, &stride);

    int width = de265_get_image_width(img, c);
    int height = de265_get_image_height(img, c);

    planeWidths[c] = width;
    planeHeights[c] = height;
    planeBytesPerRow[c] = stride;
    planeBaseAddresses[c] = const_cast<uint8_t*>(p);
}

void* descriptor = calloc(1, sizeof(CVPlanarPixelBufferInfo_YCbCrPlanar));

CVPixelBufferRef pixelBufferRef;
CVReturn result = CVPixelBufferCreateWithPlanarBytes(NULL,
                                                     pixelBufferWidth,
                                                     pixelBufferHeight,
                                                     kCVPixelFormatType_420YpCbCr8Planar,
                                                     NULL,
                                                     0,
                                                     3,
                                                     planeBaseAddresses,
                                                     planeWidths,
                                                     planeHeights,
                                                     planeBytesPerRow,
                                                     &pixelBufferReleaseCallback,
                                                     NULL,
                                                     NULL,
                                                     &pixelBufferRef);

CMFormatDescriptionRef formatDescription = NULL;
CMVideoFormatDescriptionCreateForImageBuffer(NULL, pixelBufferRef, &formatDescription);
if (assetWriter == nil) {
    // ... create output file path in Caches directory

    assetWriter = [AVAssetWriter assetWriterWithURL:fileOutputURL
                                           fileType:AVFileTypeMPEG4
                                              error:nil];

    NSDictionary *videoSettings = @{AVVideoCodecKey : AVVideoCodecH264,
                                    AVVideoWidthKey : @(pixelBufferWidth),
                                    AVVideoHeightKey : @(pixelBufferHeight),
                                    AVVideoCompressionPropertiesKey : @{AVVideoMaxKeyFrameIntervalKey : @1}};

    assetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                          outputSettings:videoSettings
                                                        sourceFormatHint:formatDescription];
    [assetWriter addInput:assetWriterInput];

    NSDictionary *pixelBufferAttributes = @{(id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_420YpCbCr8Planar),
                                            (id)kCVPixelBufferWidthKey : @(pixelBufferWidth),
                                            (id)kCVPixelBufferHeightKey : @(pixelBufferHeight)};

    pixelBufferAdaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:assetWriterInput
                                                                                          sourcePixelBufferAttributes:pixelBufferAttributes];

    [assetWriter startWriting];
    [assetWriter startSessionAtSourceTime:kCMTimeZero];
}

samplePresentationTime = CMTimeMake(frameIndex++, framesPerSecond);
BOOL success = [pixelBufferAdaptor appendPixelBuffer:pixelBufferRef
                                withPresentationTime:samplePresentationTime];
success is always NO, and the error from the asset writer is what I pasted above.
I also tried creating the sample buffers manually instead of using AVAssetWriterInputPixelBufferAdaptor, just to eliminate that as a possible problem, but the results are the same.
Again, this does work on the Simulator, so I know my pixel buffers do contain the right data.
Also, I verified that I can write to the file destination. I tried creating a dummy file at that location, and it succeeded.
I would like to avoid converting my buffer to RGB since I shouldn't have to. I have Y'CbCr buffers to begin with, and I want to just encode them into an H.264 video, which supports Y'CbCr.
The source that is creating these buffers states the following:
The image is currently always 3-channel YCbCr, with 4:2:0 chroma.
I confirmed that it always enters its loop logic that deals with 8-bit YUV channels.
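Since the format is pinned to 8-bit 4:2:0, each chroma plane should be half the luma dimensions (rounded up when the luma dimension is odd), and the per-plane geometry handed to CVPixelBufferCreateWithPlanarBytes should follow that rule. A minimal sketch in plain C of the subsampling arithmetic (the helper names are illustrative, not from the code above):

```c
#include <stddef.h>

/* For 8-bit 4:2:0 YCbCr, plane 0 (Y) is full size, and planes 1 and 2
 * (Cb, Cr) are subsampled by 2 in both dimensions, rounding up when
 * the luma dimension is odd. */
static size_t plane_width(size_t lumaWidth, int plane)
{
    return plane == 0 ? lumaWidth : (lumaWidth + 1) / 2;
}

static size_t plane_height(size_t lumaHeight, int plane)
{
    return plane == 0 ? lumaHeight : (lumaHeight + 1) / 2;
}
```

If the widths/heights reported by de265_get_image_width/de265_get_image_height for the chroma planes don't match this, the pixel buffer's declared geometry and its actual data would disagree.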
What am I doing wrong?
Answer 1:
So, I can't confirm this officially, but it appears that AVAssetWriter doesn't like 3-plane pixel formats (i.e., kCVPixelFormatType_420YpCbCr8Planar) on iOS. On OS X, it appears to work with pretty much anything. When I converted my 3-plane buffers to a bi-planar pixel buffer format, this worked on iOS. This is unsurprising, since the camera natively captures in the kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange pixel format, so AV Foundation would likely also work with that format.
Still, it'd be nice if I didn't have to do this explicit conversion step myself, though vImageConvert_PlanarToChunky8 helps to interleave the Cb and Cr planes into a single plane.
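For anyone doing the same conversion, the Cb/Cr interleave that vImageConvert_PlanarToChunky8 performs amounts to the following. A minimal plain-C sketch (the function name is illustrative); the luma plane is unaffected, since in a bi-planar buffer it is simply copied as-is:

```c
#include <stddef.h>
#include <stdint.h>

/* Interleave separate Cb and Cr planes (each chromaWidth x chromaHeight,
 * each with its own row stride) into the single chunky CbCrCbCr... plane
 * that bi-planar 4:2:0 formats such as NV12
 * (kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange) expect. */
static void interleave_cbcr(const uint8_t *cb, size_t cbStride,
                            const uint8_t *cr, size_t crStride,
                            uint8_t *dst, size_t dstStride,
                            size_t chromaWidth, size_t chromaHeight)
{
    for (size_t y = 0; y < chromaHeight; y++) {
        const uint8_t *cbRow = cb + y * cbStride;
        const uint8_t *crRow = cr + y * crStride;
        uint8_t *dstRow = dst + y * dstStride;
        for (size_t x = 0; x < chromaWidth; x++) {
            dstRow[2 * x]     = cbRow[x];  /* Cb comes first in NV12 */
            dstRow[2 * x + 1] = crRow[x];
        }
    }
}
```

In practice you would run this (or the vImage equivalent) into plane 1 of a bi-planar CVPixelBuffer obtained from the adaptor's pixel buffer pool, after locking its base address.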
Source: https://stackoverflow.com/questions/34099842/why-wont-avfoundation-accept-my-planar-pixel-buffers-on-an-ios-device