How can I animate multiple video layers at the same time with AVMutableComposition?

I am writing code that generates a slideshow video from multiple images and multiple videos on an iOS device. I can already do this with a single video plus multiple images, but I cannot figure out how to extend it to multiple videos.
Here is a sample video that I was able to generate with one video and two images.
Here is the main routine that prepares the exporter.
// Prepare the temporary location to store generated video
NSURL * urlAsset = [NSURL fileURLWithPath:[StoryMaker tempFilePath:@"mov"]];

// Prepare composition and _exporter
AVMutableComposition *composition = [AVMutableComposition composition];
AVAssetExportSession* exporter = [[AVAssetExportSession alloc] initWithAsset:composition presetName:AVAssetExportPresetHighestQuality];
exporter.outputURL = urlAsset;
exporter.outputFileType = AVFileTypeQuickTimeMovie;
exporter.shouldOptimizeForNetworkUse = YES;
exporter.videoComposition = [self _addVideo:composition time:timeVideo];

Here is the _addVideo:time: method, which creates the videoLayer.
-(AVVideoComposition*) _addVideo:(AVMutableComposition*)composition time:(CMTime)timeVideo {
    AVMutableVideoComposition* videoComposition = [AVMutableVideoComposition videoComposition];
    videoComposition.renderSize = _sizeVideo;
    videoComposition.frameDuration = CMTimeMake(1,30); // 30fps
    AVMutableCompositionTrack *compositionVideoTrack = [composition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];

    [compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero,timeVideo) ofTrack:_baseVideoTrack atTime:kCMTimeZero error:nil];

    // Prepare the parent layer
    CALayer *parentLayer = [CALayer layer];
    parentLayer.backgroundColor = [UIColor blackColor].CGColor;
    parentLayer.frame = CGRectMake(0, 0, _sizeVideo.width, _sizeVideo.height);

    // Prepare images parent layer
    CALayer *imageParentLayer = [CALayer layer];
    imageParentLayer.frame = CGRectMake(0, 0, _sizeVideo.width, _sizeVideo.height);
    [parentLayer addSublayer:imageParentLayer];

    // Specify the perspective view
    CATransform3D perspective = CATransform3DIdentity;
    perspective.m34 = -1.0 / imageParentLayer.frame.size.height;
    imageParentLayer.sublayerTransform = perspective;

    // Animations
    _beginTime = 1E-10;
    _endTime = CMTimeGetSeconds(timeVideo);

    CALayer* videoLayer = [self _addVideoLayer:imageParentLayer];
    [self _addAnimations:imageParentLayer time:timeVideo];

    videoComposition.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];

    // Prepare the instruction
    AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
    {
        instruction.timeRange = CMTimeRangeMake(kCMTimeZero, timeVideo);
        AVAssetTrack *videoTrack = [[composition tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
        AVMutableVideoCompositionLayerInstruction* layerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoTrack];
        [layerInstruction setTransform:_baseVideoTrack.preferredTransform atTime:kCMTimeZero];
        instruction.layerInstructions = @[layerInstruction];
    }
    videoComposition.instructions = @[instruction];
    return videoComposition;
} 

The _addAnimations:time: method adds the image layers and schedules the animations for all of the layers, including _videoLayer.
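
That method is not reproduced here; roughly, the kind of animation it schedules looks like this simplified Swift sketch (the function name and values are placeholders, not the actual implementation). The beginTime and isRemovedOnCompletion settings are the usual requirements for animations rendered through AVVideoCompositionCoreAnimationTool.

import AVFoundation
import QuartzCore

// Hypothetical sketch, not the actual _addAnimations:time: code:
// slide a sublayer horizontally across the duration of the video.
func addSlideAnimation(to layer: CALayer, duration: CFTimeInterval) {
    let slide = CABasicAnimation(keyPath: "position.x")
    slide.fromValue = layer.position.x
    slide.toValue = layer.position.x + 200
    slide.duration = duration
    slide.beginTime = AVCoreAnimationBeginTimeAtZero  // time zero of the video, not CACurrentMediaTime
    slide.isRemovedOnCompletion = false               // keep the animation alive for the whole export
    layer.add(slide, forKey: "slide")
}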

Everything works fine up to this point.

However, I cannot figure out how to add a second video to this slideshow.

The example in the AVFoundation Programming Guide combines two videos by using multiple video composition instructions (AVMutableVideoCompositionInstruction), but it still renders them into a single CALayer, the one specified in videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:inLayer: of AVVideoCompositionCoreAnimationTool.
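
For reference, that standard pattern looks roughly like the following Swift sketch (the function name, cross-fade, and timing are placeholders): several layer instructions describe the tracks, but the animation tool still exposes only a single video layer to Core Animation.

import AVFoundation
import QuartzCore

// Sketch of the standard pattern: two composition tracks combined via layer
// instructions (here a simple cross-fade), rendered into one shared video layer.
func makeCombinedComposition(firstTrack: AVCompositionTrack,
                             secondTrack: AVCompositionTrack,
                             duration: CMTime,
                             renderSize: CGSize) -> AVMutableVideoComposition {
    let videoComposition = AVMutableVideoComposition()
    videoComposition.frameDuration = CMTimeMake(value: 1, timescale: 30)
    videoComposition.renderSize = renderSize

    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRange(start: .zero, duration: duration)

    let firstInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: firstTrack)
    // Fade the first track out over the whole range (placeholder timing).
    firstInstruction.setOpacityRamp(fromStartOpacity: 1.0, toEndOpacity: 0.0,
                                    timeRange: CMTimeRange(start: .zero, duration: duration))
    let secondInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: secondTrack)
    instruction.layerInstructions = [firstInstruction, secondInstruction]
    videoComposition.instructions = [instruction]

    // Only this one CALayer ever receives the rendered video frames.
    let videoLayer = CALayer()
    videoLayer.frame = CGRect(origin: .zero, size: renderSize)
    let parentLayer = CALayer()
    parentLayer.frame = CGRect(origin: .zero, size: renderSize)
    parentLayer.addSublayer(videoLayer)
    videoComposition.animationTool = AVVideoCompositionCoreAnimationTool(
        postProcessingAsVideoLayer: videoLayer, in: parentLayer)
    return videoComposition
}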

What I want is to render the two video tracks into two separate layers (layer1 and layer2) and animate each of them independently, just as I already do with the layers associated with the images.


Did you ever solve this? - crgt
No, I believe this is a limitation of AVFoundation. - Satoshi Nakajima
Where do you actually add the images to the image layers? - Crashalot
Could you share your code? - Mayank Purwar
I have the same requirement, but at the moment the Core Animation tool accepts only one video layer, and I want to play videos in separate layers. Did you manage to get this working? If so, please let us know how you did it. Thanks. - Pankaj Kulkarni
1 Answer

I ran into this problem too, wanting to play several videos at the same time. I found that AVVideoCompositionCoreAnimationTool can accept multiple video layers, but they all show the same video. So my workaround was to build one large video with the two videos side by side, and then use masking so that each layer reveals only its own video.
Here is an example image of the two video layers: [image: videos1]
Here is roughly what the masks look like that reveal each video separately: [image: videos2]
Here is the code; you only need to supply two videos of your own to run it:
import UIKit
import AVFoundation
import Photos

class ViewController: UIViewController {

var myurl: URL?

override func viewDidLoad() {
    super.viewDidLoad()


}


@IBAction func newMerge(_ sender: Any) {

    print("making vid")

    let path = Bundle.main.path(forResource: "sample_video", ofType:"mp4")
    let fileURL = NSURL(fileURLWithPath: path!)

    let vid = AVURLAsset(url: fileURL as URL)

    let path2 = Bundle.main.path(forResource: "example2", ofType:"mp4")
    let fileURL2 = NSURL(fileURLWithPath: path2!)

    let vid2 = AVURLAsset(url: fileURL2 as URL)

    newoverlay(video: vid, withSecondVideo: vid2)

}


func newoverlay(video firstAsset: AVURLAsset, withSecondVideo secondAsset: AVURLAsset) {


    // 1 - Create AVMutableComposition object. This object will hold your AVMutableCompositionTrack instances.
    let mixComposition = AVMutableComposition()

    // 2 - Create two video tracks
    guard let firstTrack = mixComposition.addMutableTrack(withMediaType: .video,
                                                          preferredTrackID: Int32(kCMPersistentTrackID_Invalid)) else { return }
    do {
        try firstTrack.insertTimeRange(CMTimeRangeMake(start: CMTime.zero, duration: firstAsset.duration),
                                       of: firstAsset.tracks(withMediaType: .video)[0],
                                       at: CMTime.zero)
    } catch {
        print("Failed to load first track")
        return
    }

    guard let secondTrack = mixComposition.addMutableTrack(withMediaType: .video,
                                                           preferredTrackID: Int32(kCMPersistentTrackID_Invalid)) else { return }
    do {
        try secondTrack.insertTimeRange(CMTimeRangeMake(start: CMTime.zero, duration: secondAsset.duration),
                                        of: secondAsset.tracks(withMediaType: .video)[0],
                                        at: CMTime.zero)
    } catch {
        print("Failed to load second track")
        return
    }

    // Combined render size: the two videos will sit side by side
    let width: CGFloat = firstTrack.naturalSize.width + secondTrack.naturalSize.width
    let height: CGFloat = CGFloat.maximum(firstTrack.naturalSize.height, secondTrack.naturalSize.height)


    //bg layer
    let bglayer = CALayer()
    bglayer.frame = CGRect(x: 0, y: 0, width: width, height: height)
    bglayer.backgroundColor = UIColor.blue.cgColor

    let box1 = CALayer()
    box1.frame = CGRect(x: 0, y: 0, width: firstTrack.naturalSize.width, height: firstTrack.naturalSize.height - 1)
    box1.backgroundColor = UIColor.red.cgColor
    box1.masksToBounds = true

    let timeInterval: CFTimeInterval = 1
    let scaleAnimation = CABasicAnimation(keyPath: "transform.scale")
    scaleAnimation.fromValue = 1.0
    scaleAnimation.toValue = 1.1
    scaleAnimation.autoreverses = true
    scaleAnimation.isRemovedOnCompletion = false
    scaleAnimation.duration = timeInterval
    scaleAnimation.repeatCount=Float.infinity
    scaleAnimation.beginTime = AVCoreAnimationBeginTimeAtZero
    box1.add(scaleAnimation, forKey: nil)

    let box2 = CALayer()
    box2.frame = CGRect(x: firstTrack.naturalSize.width + 100, y: 0, width: secondTrack.naturalSize.width, height: secondTrack.naturalSize.height)
    box2.backgroundColor = UIColor.green.cgColor
    box2.masksToBounds = true

    let videolayer = CALayer()
    videolayer.frame = CGRect(x: 0, y: -(height - firstTrack.naturalSize.height), width: width + 2, height: height + 2)
    videolayer.backgroundColor = UIColor.clear.cgColor

    let videolayer2 = CALayer()
    videolayer2.frame = CGRect(x: -firstTrack.naturalSize.width, y: 0, width: width, height: height)
    videolayer2.backgroundColor = UIColor.clear.cgColor

    let parentlayer = CALayer()
    parentlayer.frame = CGRect(x: 0, y: 0, width: width, height: height)
    parentlayer.addSublayer(bglayer)
    parentlayer.addSublayer(box1)
    parentlayer.addSublayer(box2)
    box1.addSublayer(videolayer)
    box2.addSublayer(videolayer2)

    let layercomposition = AVMutableVideoComposition()
    layercomposition.frameDuration = CMTimeMake(value: 1, timescale: 30)
    layercomposition.renderSize = CGSize(width: width, height: height)
    layercomposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayers: [videolayer, videolayer2], in: parentlayer)

    // 2.1
    let mainInstruction = AVMutableVideoCompositionInstruction()
    mainInstruction.timeRange = CMTimeRangeMake(start: CMTime.zero, duration: CMTimeAdd(firstAsset.duration, secondAsset.duration))

    // 2.2 - this is where the 2 videos get combined into one large one.
    let firstInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: firstTrack)
    let move = CGAffineTransform(translationX: 0, y: 0)
    firstInstruction.setTransform(move, at: CMTime.zero)

    let secondInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: secondTrack)
    let move2 = CGAffineTransform(translationX: firstTrack.naturalSize.width, y: 0)
    secondInstruction.setTransform(move2, at: CMTime.zero)

    // 2.3
    mainInstruction.layerInstructions = [firstInstruction, secondInstruction]
    mainInstruction.backgroundColor = UIColor.clear.cgColor
    layercomposition.instructions = [mainInstruction]


    //  create new file to receive data
    let dirPaths = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)
    let docsDir = dirPaths[0] as NSString
    let movieFilePath = docsDir.appendingPathComponent("result.mov")
    let movieDestinationUrl = NSURL(fileURLWithPath: movieFilePath)

    // use AVAssetExportSession to export video
    let assetExport = AVAssetExportSession(asset: mixComposition, presetName:AVAssetExportPresetHighestQuality)
    assetExport?.outputFileType = AVFileType.mov
    assetExport?.videoComposition = layercomposition

    // Remove any existing file at the destination
    FileManager.default.removeItemIfExisted(movieDestinationUrl as URL)

    assetExport?.outputURL = movieDestinationUrl as URL
    assetExport?.exportAsynchronously(completionHandler: {
        switch assetExport!.status {
        case AVAssetExportSession.Status.failed:
            print("failed")
            print(assetExport?.error ?? "unknown error")
        case AVAssetExportSession.Status.cancelled:
            print("cancelled")
            print(assetExport?.error ?? "unknown error")
        default:
            print("Movie complete")

            self.myurl = movieDestinationUrl as URL

            PHPhotoLibrary.shared().performChanges({
                PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: movieDestinationUrl as URL)
            }) { saved, error in
                if saved {
                    print("Saved")
                }
            }

            self.playVideo()

        }
    })
}



func playVideo() {
    let player = AVPlayer(url: myurl!)
    let playerLayer = AVPlayerLayer(player: player)
    playerLayer.frame = self.view.bounds
    self.view.layer.addSublayer(playerLayer)
    player.play()
    print("playing...")
}



}


extension FileManager {
func removeItemIfExisted(_ url:URL) -> Void {
    if FileManager.default.fileExists(atPath: url.path) {
        do {
            try FileManager.default.removeItem(atPath: url.path)
        }
        catch {
            print("Failed to delete file")
        }
    }
}
}
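
One practical note that is not part of the original answer: saving with PHAssetChangeRequest requires photo library authorization (plus an NSPhotoLibraryAddUsageDescription entry in Info.plist), so you may want to request access before exporting. A minimal sketch, assuming iOS 14 or later:

import Photos

// Sketch: request add-only photo library access before saving the exported movie.
func ensurePhotoAddAccess(_ completion: @escaping (Bool) -> Void) {
    PHPhotoLibrary.requestAuthorization(for: .addOnly) { status in
        completion(status == .authorized)
    }
}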

Your answer helped me a lot; I would love to connect with you on LinkedIn. - Tancrede Chazallet
Wow, who would have thought that adding an 's' to postProcessingAsVideoLayer would solve all my problems. - yspreen
