Mixing images and video using AVFoundation

I am trying to splice images into an existing video on a Mac using AVFoundation, in order to create a new video file.
So far I have read the Apple documentation examples, along with ASSETWriterInput for making Video from UIImages on Iphone Issues, Mix video with static image in CALayer using AVVideoCompositionCoreAnimationTool, AVFoundation Tutorial: Adding Overlays and Animations to Videos, and a number of other SO links. These have been very useful at times, but my problem is that I'm not trying to create a static watermark or overlay; I want to insert images in between sections of the video.
So far I have managed to get the video, create the blank sections for these images to be inserted into, and export it.
My problem is getting the images to insert themselves into those blank sections. The only way I can see to do it feasibly is to create a series of layers whose opacity is animated at the correct times, but I can't seem to get the animation to work.
The code below is what I'm using to create the video segments and the layer animations.
    //https://developer.apple.com/library/ios/documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/03_Editing.html#//apple_ref/doc/uid/TP40010188-CH8-SW7
    
    // let's start by making our video composition
    AVMutableComposition* mutableComposition = [AVMutableComposition composition];
    AVMutableCompositionTrack* mutableCompositionTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
    
    AVMutableVideoComposition* mutableVideoComposition = [AVMutableVideoComposition videoCompositionWithPropertiesOfAsset:gVideoAsset];
    
    // if the first point's frame doesn't start on 0
    if (gFrames[0].startTime.value != 0)
    {
        DebugLog("Inserting vid at 0");
        // then add the video track to the composition track with a time range from 0 to the first point's startTime
        [mutableCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, gFrames[0].startTime) ofTrack:gVideoTrack atTime:kCMTimeZero error:&gError];
        
    }
    
    if(gError)
    {
        DebugLog("Error inserting original video segment");
        GetError();
    }
    
    // create our parent layer and video layer
    CALayer* parentLayer = [CALayer layer];
    CALayer* videoLayer = [CALayer layer];
    
    parentLayer.frame = CGRectMake(0, 0, 1280, 720);
    videoLayer.frame = CGRectMake(0, 0, 1280, 720);
    
    [parentLayer addSublayer:videoLayer];
    
    // create an offset value that should be added to each point where a new video segment should go
    CMTime timeOffset = CMTimeMake(0, 600);
    
    // loop through each additional frame
    for(int i = 0; i < gFrames.size(); i++)
    {
        // create an animation layer and assign its contents to the CGImage of the frame
        CALayer* Frame = [CALayer layer];
        Frame.contents = (__bridge id)gFrames[i].frameImage;
        Frame.frame = CGRectMake(0, 720, 1280, -720);
        
        DebugLog("inserting empty time range");
        // add frame point to the composition track starting at the point's start time
        // insert an empty time range for the duration of the frame animation
        [mutableCompositionTrack insertEmptyTimeRange:CMTimeRangeMake(CMTimeAdd(gFrames[i].startTime, timeOffset), gFrames[i].duration)];
        
        // update the time offset by the duration
        timeOffset = CMTimeAdd(timeOffset, gFrames[i].duration);
        
        // make the layer completely transparent
        Frame.opacity = 0.0f;
        
        // create an animation for setting opacity to 0 on start
        CABasicAnimation* frameAnim = [CABasicAnimation animationWithKeyPath:@"opacity"];
        frameAnim.duration = 1.0f;
        frameAnim.repeatCount = 0;
        frameAnim.autoreverses = NO;
        
        frameAnim.fromValue = [NSNumber numberWithFloat:0.0];
        frameAnim.toValue = [NSNumber numberWithFloat:0.0];
        
        frameAnim.beginTime = AVCoreAnimationBeginTimeAtZero;
        frameAnim.speed = 1.0f;
        
        [Frame addAnimation:frameAnim forKey:@"animateOpacity"];
        
        // create an animation for setting opacity to 1
        frameAnim = [CABasicAnimation animationWithKeyPath:@"opacity"];
        frameAnim.duration = 1.0f;
        frameAnim.repeatCount = 0;
        frameAnim.autoreverses = NO;
        
        frameAnim.fromValue = [NSNumber numberWithFloat:1.0];
        frameAnim.toValue = [NSNumber numberWithFloat:1.0];
        
        frameAnim.beginTime = AVCoreAnimationBeginTimeAtZero + CMTimeGetSeconds(gFrames[i].startTime);
        frameAnim.speed = 1.0f;
        
        [Frame addAnimation:frameAnim forKey:@"animateOpacity"];
        
        // create an animation for setting opacity to 0
        frameAnim = [CABasicAnimation animationWithKeyPath:@"opacity"];
        frameAnim.duration = 1.0f;
        frameAnim.repeatCount = 0;
        frameAnim.autoreverses = NO;
        
        frameAnim.fromValue = [NSNumber numberWithFloat:0.0];
        frameAnim.toValue = [NSNumber numberWithFloat:0.0];
        
        frameAnim.beginTime = AVCoreAnimationBeginTimeAtZero + CMTimeGetSeconds(gFrames[i].endTime);
        frameAnim.speed = 1.0f;
        
        [Frame addAnimation:frameAnim forKey:@"animateOpacity"];
        
        // add the frame layer to our parent layer
        [parentLayer addSublayer:Frame];
        
        gError = nil;
        
        // if there's another point after this one
        if( i < gFrames.size()-1)
        {
            // add our video file to the composition with a range of this point's end and the next point's start
            [mutableCompositionTrack insertTimeRange:CMTimeRangeMake(gFrames[i].startTime,
                            CMTimeMake(gFrames[i+1].startTime.value - gFrames[i].startTime.value, 600))
                            ofTrack:gVideoTrack
                            atTime:CMTimeAdd(gFrames[i].startTime, timeOffset) error:&gError];
            
        }
        // else just add our video file with a range of this points end point and the videos duration
        else
        {
            [mutableCompositionTrack insertTimeRange:CMTimeRangeMake(gFrames[i].startTime, CMTimeSubtract(gVideoAsset.duration, gFrames[i].startTime)) ofTrack:gVideoTrack atTime:CMTimeAdd(gFrames[i].startTime, timeOffset) error:&gError];
        }
        
        if(gError)
        {
            char errorMsg[256];
            sprintf(errorMsg, "Error inserting original video segment at: %d", i);
            DebugLog(errorMsg);
            GetError();
        }
    }

Now in that segment the opacity of the frames is set to 0.0f, but when I set it to 1.0f all it does is place the last of these frames on top of the video for the entire duration.

After that, the video is exported using an AVAssetExportSession, as shown below:

    mutableVideoComposition.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];
    
    // create a layer instruction for our newly created animation tool
    AVMutableVideoCompositionLayerInstruction *layerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:gVideoTrack];
    
    AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
    [instruction setTimeRange:CMTimeRangeMake(kCMTimeZero, [mutableComposition duration])];
    [layerInstruction setOpacity:1.0f atTime:kCMTimeZero];
    [layerInstruction setOpacity:0.0f atTime:mutableComposition.duration];
    instruction.layerInstructions = [NSArray arrayWithObject:layerInstruction];
    
    // set the instructions on our videoComposition
    mutableVideoComposition.instructions = [NSArray arrayWithObject:instruction];
    
    // export final composition to a video file
    
    // convert the videopath into a url for our AVAssetWriter to create a file at
    NSString* vidPath = CreateNSString(outputVideoPath);
    NSURL* vidURL = [NSURL fileURLWithPath:vidPath];
    
    AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:mutableComposition presetName:AVAssetExportPreset1280x720];
    
    exporter.outputFileType = AVFileTypeMPEG4;
    
    exporter.outputURL = vidURL;
    exporter.videoComposition = mutableVideoComposition;
    exporter.timeRange = CMTimeRangeMake(kCMTimeZero, mutableComposition.duration);
    
    // Asynchronously export the composition to a video file and save this file to the camera roll once export completes.
    [exporter exportAsynchronouslyWithCompletionHandler:^{
        dispatch_async(dispatch_get_main_queue(), ^{
            if (exporter.status == AVAssetExportSessionStatusCompleted)
            {
                DebugLog("!!!file created!!!");
                _Close();
            }
            else if(exporter.status == AVAssetExportSessionStatusFailed)
            {
                DebugLog("failed damn");
                DebugLog(cStringCopy([[[exporter error] localizedDescription] UTF8String]));
                DebugLog(cStringCopy([[[exporter error] description] UTF8String]));
                _Close();
            }
            else
            {
                DebugLog("NoIdea");
                _Close();
            }
        });
    }];
    
    
}
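
CreateNSString and cStringCopy are just small helpers for converting between C strings and NSString (and copying strings for the log callback); minimal stand-ins would look roughly like this:

    // rough stand-ins for the string helpers referenced above (assumed implementations)
    static NSString* CreateNSString(const char* string)
    {
        // wrap a C string in an NSString, falling back to an empty string for NULL
        return [NSString stringWithUTF8String:(string ? string : "")];
    }

    static char* cStringCopy(const char* string)
    {
        // heap-copy a C string so it can be handed to the DebugLog callback
        if (string == NULL) return NULL;
        char* copy = (char*)malloc(strlen(string) + 1);
        strcpy(copy, string);
        return copy;
    }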

I get the feeling the animations are not being started, but I don't know for sure. Am I going about splicing image data into a video the right way?
Any assistance would be greatly appreciated.
1 Answer

Well, I solved my problem another way. The animation route was not working, so my solution was to compile all of my insertable images into a temporary video file and to use that video to insert the images into my final output video.
Starting with the first link I originally posted, ASSETWriterInput for making Video from UIImages on Iphone Issues, I created the following function to create my temporary video.
void CreateFrameImageVideo(NSString* path)
{
    NSLog(@"Creating writer at path %@", path);
    NSError *error = nil;
    AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:
                                  [NSURL fileURLWithPath:path] fileType:AVFileTypeMPEG4
                                                              error:&error];

    NSLog(@"Creating video codec settings");
    NSDictionary *codecSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                   [NSNumber numberWithInt:gVideoTrack.estimatedDataRate/*128000*/], AVVideoAverageBitRateKey,
                                   [NSNumber numberWithInt:gVideoTrack.nominalFrameRate],AVVideoMaxKeyFrameIntervalKey,
                                   AVVideoProfileLevelH264MainAutoLevel, AVVideoProfileLevelKey,
                                   nil];

    NSLog(@"Creating video settings");
    NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                   AVVideoCodecH264, AVVideoCodecKey,
                                   codecSettings,AVVideoCompressionPropertiesKey,
                                   [NSNumber numberWithInt:1280], AVVideoWidthKey,
                                   [NSNumber numberWithInt:720], AVVideoHeightKey,
                                   nil];

    NSLog(@"Creating writer input");
    AVAssetWriterInput* writerInput = [[AVAssetWriterInput
                                        assetWriterInputWithMediaType:AVMediaTypeVideo
                                        outputSettings:videoSettings] retain];

    NSLog(@"Creating adaptor");
    AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
                                                     assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                                     sourcePixelBufferAttributes:nil];

    [videoWriter addInput:writerInput];

    NSLog(@"Starting session");
    //Start a session:
    [videoWriter startWriting];
    [videoWriter startSessionAtSourceTime:kCMTimeZero];


    CMTime timeOffset = kCMTimeZero;//CMTimeMake(0, 600);

    NSLog(@"Video Width %d, Height: %d, writing frame video to file", gWidth, gHeight);

    CVPixelBufferRef buffer;

    for(int i = 0; i< gAnalysisFrames.size(); i++)
    {
        while (adaptor.assetWriterInput.readyForMoreMediaData == FALSE) {
            NSLog(@"Waiting inside a loop");
            NSDate *maxDate = [NSDate dateWithTimeIntervalSinceNow:0.1];
            [[NSRunLoop currentRunLoop] runUntilDate:maxDate];
        }

        //Write samples:
        buffer = pixelBufferFromCGImage(gAnalysisFrames[i].frameImage, gWidth, gHeight);

        [adaptor appendPixelBuffer:buffer withPresentationTime:timeOffset];

        // release the pixel buffer once the adaptor has consumed it, so one buffer per frame isn't leaked
        CVPixelBufferRelease(buffer);

        timeOffset = CMTimeAdd(timeOffset, gAnalysisFrames[i].duration);
    }

    while (adaptor.assetWriterInput.readyForMoreMediaData == FALSE) {
        NSLog(@"Waiting outside a loop");
        NSDate *maxDate = [NSDate dateWithTimeIntervalSinceNow:0.1];
        [[NSRunLoop currentRunLoop] runUntilDate:maxDate];
    }

    buffer = pixelBufferFromCGImage(gAnalysisFrames[gAnalysisFrames.size()-1].frameImage, gWidth, gHeight);
    [adaptor appendPixelBuffer:buffer withPresentationTime:timeOffset];
    CVPixelBufferRelease(buffer);

    NSLog(@"Finishing session");
    //Finish the session:
    [writerInput markAsFinished];
    [videoWriter endSessionAtSourceTime:timeOffset];
    BOOL successfulWrite = [videoWriter finishWriting];

    // if we failed to write the video
    if(!successfulWrite)
    {

        NSLog(@"Session failed with error: %@", [[videoWriter error] description]);

        // delete the temporary file created
        NSFileManager *fileManager = [NSFileManager defaultManager];
        if ([fileManager fileExistsAtPath:path]) {
            NSError *error;
            if ([fileManager removeItemAtPath:path error:&error] == NO) {
                NSLog(@"removeItemAtPath %@ error:%@", path, error);
            }
        }
    }
    else
    {
        NSLog(@"Session complete");
    }

    [writerInput release];

}
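
The pixelBufferFromCGImage helper used above isn't shown here; a minimal sketch along the lines of the one in the linked ASSETWriterInput answer (assuming 32-bit ARGB pixel buffers) would look like this:

// sketch of pixelBufferFromCGImage: draws a CGImage into a newly created CVPixelBuffer
static CVPixelBufferRef pixelBufferFromCGImage(CGImageRef image, int width, int height)
{
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];

    CVPixelBufferRef pxbuffer = NULL;
    CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32ARGB,
                        (CFDictionaryRef)options, &pxbuffer);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, width, height, 8,
                                                 CVPixelBufferGetBytesPerRow(pxbuffer),
                                                 rgbColorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaNoneSkipFirst);

    // draw the frame image into the pixel buffer's backing memory
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);

    CGContextRelease(context);
    CGColorSpaceRelease(rgbColorSpace);
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

    // the caller is responsible for CVPixelBufferRelease()ing the returned buffer
    return pxbuffer;
}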

Once the video is created, it is loaded as an AVAsset and its track extracted. The video is then inserted by replacing the following line (from the first code block in the original post):
[mutableCompositionTrack insertEmptyTimeRange:CMTimeRangeMake(CMTimeAdd(gFrames[i].startTime, timeOffset), gFrames[i].duration)];

with:

[mutableCompositionTrack insertTimeRange:CMTimeRangeMake(timeOffset,gAnalysisFrames[i].duration)
                                     ofTrack:gFramesTrack
                                     atTime:CMTimeAdd(gAnalysisFrames[i].startTime, timeOffset) error:&gError];

Where gFramesTrack is the AVAssetTrack created from the temporary frame video.
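
That track is obtained by loading the temporary file back in and pulling out its video track; a rough sketch (tempFramesPath here is just a stand-in for whatever path was passed to CreateFrameImageVideo):

// load the temporary frame video written by CreateFrameImageVideo
AVURLAsset* framesAsset = [AVURLAsset URLAssetWithURL:[NSURL fileURLWithPath:tempFramesPath] options:nil];

// grab its single video track so ranges of it can be inserted into the composition
gFramesTrack = [[framesAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];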

All of the code relating to the CALayer and CABasicAnimation objects has been removed, since it just did not work.

Not the most elegant solution, I don't think, but it at least works. I hope someone finds this useful.

This code also works on iOS devices (tested on an iPad 3).

Side note: the DebugLog function from the first post is just a callback to a function that prints out log messages; it can be replaced with NSLog() calls if need be.
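
For example, a trivial stand-in that simply forwards to NSLog is enough to compile the snippets above:

// trivial stand-in for the DebugLog callback used in the snippets above
static void DebugLog(const char* message)
{
    NSLog(@"%s", message);
}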

