iOS 11 Objective-C - Processing image buffers from ReplayKit with AVAssetWriterInputPixelBufferAdaptor

I am trying to record my app's screen with ReplayKit, cropping out some parts of it while recording the video, and I am not having much luck.
ReplayKit captures the whole screen, so I decided to receive each frame from ReplayKit (as a CMSampleBuffer via startCaptureWithHandler), crop it there, and feed it to the video writer through an AVAssetWriterInputPixelBufferAdaptor. But I am having trouble just copying the image buffer, before even getting to the cropping.
Here is the code I am using, which records the entire screen correctly:
// Starts recording with a completion/error handler
-(void)startRecordingWithHandler: (RPHandler)handler
{
    // Sets up AVAssetWriter that will generate a video file from the recording.
    self.writer = [AVAssetWriter assetWriterWithURL:self.outputFileURL
                                           fileType:AVFileTypeQuickTimeMovie
                                              error:nil];

    NSDictionary* outputSettings =
    @{
      AVVideoWidthKey  : @(screen.size.width),   // The whole width of the entire screen.
      AVVideoHeightKey : @(screen.size.height),  // The whole height of the entire screen.
      AVVideoCodecKey  : AVVideoCodecTypeH264,
      };

    // Sets up AVAssetWriterInput that will feed ReplayKit's frame buffers to the writer.
    self.videoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                         outputSettings:outputSettings];

    // Lets it know that the input will be realtime using ReplayKit.
    [self.videoInput setExpectsMediaDataInRealTime:YES];

    NSDictionary* sourcePixelBufferAttributes =
    @{
      (NSString*) kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA),
      (NSString*) kCVPixelBufferWidthKey          : @(screen.size.width),
      (NSString*) kCVPixelBufferHeightKey         : @(screen.size.height),
      };

    // Adds the video input to the writer.
    [self.writer addInput:self.videoInput];

    // Sets up ReplayKit itself.
    self.recorder = [RPScreenRecorder sharedRecorder];

    // Arranges the pipeline from ReplayKit to the input.
    RPBufferHandler bufferHandler = ^(CMSampleBufferRef sampleBuffer, RPSampleBufferType bufferType, NSError* error) {
        [self captureSampleBuffer:sampleBuffer withBufferType:bufferType];
    };

    RPHandler errorHandler = ^(NSError* error) {
        if (error) handler(error);
    };

    // Starts ReplayKit's recording session. 
    // Sample buffers will be sent to `captureSampleBuffer` method.
    [self.recorder startCaptureWithHandler:bufferHandler completionHandler:errorHandler];
}

// Receives a sample buffer from ReplayKit every frame.
-(void)captureSampleBuffer:(CMSampleBufferRef)sampleBuffer withBufferType:(RPSampleBufferType)bufferType
{
    // Uses a queue in sync so that the writer-starting logic won't be invoked twice.
    dispatch_sync(dispatch_get_main_queue(), ^{
        // Starts the writer if not started yet. We do this here in order to get the proper source time later.
        if (self.writer.status == AVAssetWriterStatusUnknown) {
            [self.writer startWriting];
            return;
        }

        // Receives a sample buffer from ReplayKit.
        switch (bufferType) {
            case RPSampleBufferTypeVideo:{
                // Initializes the source time when a video frame buffer is received the first time.
                // This prevents the output video from starting with blank frames.
                if (!self.startedWriting) {
                    NSLog(@"self.writer startSessionAtSourceTime");
                    [self.writer startSessionAtSourceTime:CMSampleBufferGetPresentationTimeStamp(sampleBuffer)];
                    self.startedWriting = YES;
                }

                // Appends a received video frame buffer to the writer.
                [self.videoInput appendSampleBuffer:sampleBuffer];
                break;
            }
        }
    });
}

// Stops the current recording session, and saves the output file to the user photo album.
-(void)stopRecordingWithHandler:(RPHandler)handler
{
    // Closes the input.
    [self.videoInput markAsFinished];

    // Finishes up the writer. 
    [self.writer finishWritingWithCompletionHandler:^{
        handler(self.writer.error);

        // Saves the output video to the user photo album.
        [[PHPhotoLibrary sharedPhotoLibrary] performChanges: ^{ [PHAssetChangeRequest creationRequestForAssetFromVideoAtFileURL: self.outputFileURL]; }
                                          completionHandler: ^(BOOL s, NSError* e) { }];
    }];

    // Stops ReplayKit's recording.
    [self.recorder stopCaptureWithHandler:nil];
}

Each sample buffer from ReplayKit is fed directly to the writer (in the captureSampleBuffer method), so this records the entire screen.

Then I replaced that part with an AVAssetWriterInputPixelBufferAdaptor, using exactly the same logic, and it works fine:

...
case RPSampleBufferTypeVideo:{
    ... // Initializes source time.

    // Gets the timestamp of the sample buffer.
    CMTime time = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);

    // Extracts the pixel image buffer from the sample buffer.
    CVPixelBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Appends a received sample buffer as an image buffer to the writer via the adaptor.
    [self.videoAdaptor appendPixelBuffer:imageBuffer withPresentationTime:time];
    break;
}
...

The adaptor is set up like this:
NSDictionary* sourcePixelBufferAttributes =
@{
  (NSString*) kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA),
  (NSString*) kCVPixelBufferWidthKey          : @(screen.size.width),
  (NSString*) kCVPixelBufferHeightKey         : @(screen.size.height),
  };

self.videoAdaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:self.videoInput
                                                                                     sourcePixelBufferAttributes:sourcePixelBufferAttributes];

So the pipeline works. Then I made a hard copy of the image buffer in main memory and fed that to the adaptor:
...
case RPSampleBufferTypeVideo:{
    ... // Initializes source time.

    // Gets the timestamp of the sample buffer.
    CMTime time = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);

    // Extracts the pixel image buffer from the sample buffer.
    CVPixelBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Hard-copies the image buffer.
    CVPixelBufferRef copiedImageBuffer = [self copy:imageBuffer];

    // Appends a received video frame buffer to the writer via the adaptor.
    [self.videoAdaptor appendPixelBuffer:copiedImageBuffer withPresentationTime:time];
    break;
}
...

// Hard-copies the pixel buffer.
-(CVPixelBufferRef)copy:(CVPixelBufferRef)inputBuffer
{
    // Locks the base address of the buffer
    // so that GPU won't change the data until unlocked later.
    CVPixelBufferLockBaseAddress(inputBuffer, 0); //-------------------------------

    char* baseAddress = (char*)CVPixelBufferGetBaseAddress(inputBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(inputBuffer);
    size_t width = CVPixelBufferGetWidth(inputBuffer);
    size_t height = CVPixelBufferGetHeight(inputBuffer);
    size_t length = bytesPerRow * height;

    // Mallocs the same length as the input buffer for copying.
    char* outputAddress = (char*)malloc(length);

    // Copies the input buffer's data to the malloced space.
    for (int i = 0; i < length; i++) {
        outputAddress[i] = baseAddress[i];
    }

    // Create a new image buffer using the copied data.
    CVPixelBufferRef outputBuffer;
    CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                 width,
                                 height,
                                 kCVPixelFormatType_32BGRA,
                                 outputAddress,
                                 bytesPerRow,
                                 &releaseCallback, // Releases the malloced space.
                                 NULL,
                                 NULL,
                                 &outputBuffer);

    // Unlocks the base address of the input buffer
    // So that GPU can restart using the data.
    CVPixelBufferUnlockBaseAddress(inputBuffer, 0); //-------------------------------

    return outputBuffer;
}

// Releases the malloced space.
void releaseCallback(void *releaseRefCon, const void *baseAddress)
{
    free((void *)baseAddress);
}

This does not work. The saved video looks like the following screenshot:

[screenshot: broken output]

It looks as if the bytes-per-row and the color format are wrong. I have researched and experimented with the following, none of which worked:
  • Hard-coding the bytes-per-row as 4 * width -> "bad access" errors.
  • Using int or double instead of char -> some strange debugger-terminating exceptions.
  • Using other pixel formats -> either "not supported" or access errors.
Also, releaseCallback is never called, and memory runs out after about 10 seconds of recording.
Judging from the output, what could be the cause?

What if you don't copy the PixelBufferRef? - Talha Ahmad Khan
2 Answers

You can save the video in its raw form first, and then crop it with the AVMutableComposition class by adding instructions and layer instructions.
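
For reference, the cropping itself is usually expressed through an AVMutableVideoComposition with a layer instruction, applied in an export pass. A minimal sketch, assuming the raw recording was saved to rawURL; the method name, crop rectangle, and output URL are placeholders:

#import <AVFoundation/AVFoundation.h>

// Crops a saved video by rendering it into a smaller canvas.
- (void)cropVideoAtURL:(NSURL *)rawURL toRect:(CGRect)cropRect outputURL:(NSURL *)outputURL
{
    AVAsset *asset = [AVAsset assetWithURL:rawURL];
    AVAssetTrack *track = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];

    // The render size of the video composition defines the crop size.
    AVMutableVideoComposition *composition = [AVMutableVideoComposition videoComposition];
    composition.renderSize = cropRect.size;
    composition.frameDuration = CMTimeMake(1, 30);

    // A single instruction spanning the whole asset.
    AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
    instruction.timeRange = CMTimeRangeMake(kCMTimeZero, asset.duration);

    // Shifts each frame so that the crop origin lands at (0, 0);
    // anything outside the render size is discarded.
    AVMutableVideoCompositionLayerInstruction *layerInstruction =
        [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:track];
    [layerInstruction setTransform:CGAffineTransformMakeTranslation(-cropRect.origin.x, -cropRect.origin.y)
                            atTime:kCMTimeZero];

    instruction.layerInstructions = @[layerInstruction];
    composition.instructions = @[instruction];

    // Exports the cropped copy.
    AVAssetExportSession *export = [[AVAssetExportSession alloc] initWithAsset:asset
                                                                    presetName:AVAssetExportPresetHighestQuality];
    export.videoComposition = composition;
    export.outputURL = outputURL;
    export.outputFileType = AVFileTypeQuickTimeMovie;
    [export exportAsynchronouslyWithCompletionHandler:^{
        NSLog(@"Export finished with status: %ld", (long)export.status);
    }];
}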

In my case, ReplayKit delivered the sample buffers in the 420YpCbCr8BiPlanarFullRange format, not RGBA. You need to handle two planes. Your screenshot shows the Y plane on top and the UV plane below; the UV plane is half the size of the Y plane. To get the base addresses of the two planes:
CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0)
CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 1)

You also need the width, height, and bytes-per-row of each plane, via these APIs:

CVPixelBufferGetWidthOfPlane(imageBuffer, 0 or 1)
CVPixelBufferGetHeightOfPlane(imageBuffer, 0 or 1)
CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0 or 1)
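
Following that advice, a hard copy would have to duplicate each plane separately. Below is a minimal sketch, assuming the buffer really is biplanar; the method name is illustrative. Note that appendPixelBuffer: does not take ownership of the buffer, so the +1 reference returned by the Create call must be balanced with CVPixelBufferRelease by the caller, which would also explain why releaseCallback never fired in the question's version. The adaptor's sourcePixelBufferAttributes would likewise need to declare kCVPixelFormatType_420YpCbCr8BiPlanarFullRange instead of 32BGRA.

#import <CoreVideo/CoreVideo.h>

// Hard-copies a biplanar (e.g. 420YpCbCr8BiPlanarFullRange) pixel buffer.
- (CVPixelBufferRef)copyBiplanarBuffer:(CVPixelBufferRef)inputBuffer
{
    CVPixelBufferLockBaseAddress(inputBuffer, kCVPixelBufferLock_ReadOnly);

    // Creates an output buffer with the same dimensions and pixel format.
    CVPixelBufferRef outputBuffer = NULL;
    CVPixelBufferCreate(kCFAllocatorDefault,
                        CVPixelBufferGetWidth(inputBuffer),
                        CVPixelBufferGetHeight(inputBuffer),
                        CVPixelBufferGetPixelFormatType(inputBuffer),
                        NULL,
                        &outputBuffer);
    CVPixelBufferLockBaseAddress(outputBuffer, 0);

    // Copies every plane row by row, since the two buffers
    // may have different bytes-per-row due to padding.
    for (size_t plane = 0; plane < CVPixelBufferGetPlaneCount(inputBuffer); plane++) {
        char *src = (char *)CVPixelBufferGetBaseAddressOfPlane(inputBuffer, plane);
        char *dst = (char *)CVPixelBufferGetBaseAddressOfPlane(outputBuffer, plane);
        size_t srcBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(inputBuffer, plane);
        size_t dstBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(outputBuffer, plane);
        size_t height = CVPixelBufferGetHeightOfPlane(inputBuffer, plane);
        size_t rowLength = MIN(srcBytesPerRow, dstBytesPerRow);
        for (size_t row = 0; row < height; row++) {
            memcpy(dst + row * dstBytesPerRow, src + row * srcBytesPerRow, rowLength);
        }
    }

    CVPixelBufferUnlockBaseAddress(outputBuffer, 0);
    CVPixelBufferUnlockBaseAddress(inputBuffer, kCVPixelBufferLock_ReadOnly);

    // The returned buffer carries a +1 reference; the caller must
    // CVPixelBufferRelease it after appending it to the adaptor.
    return outputBuffer;
}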
