AVFoundation export has the wrong orientation

11
I am trying to merge an image and a video. They merge and export fine, but the result comes out rotated sideways. Apologies for the big code dump. I have seen some answers about applying `compositionVideoTrack.preferredTransform`, but that does nothing. Adding a transform to the `AVMutableVideoCompositionInstruction` does nothing either. I feel this is the area where things start to go wrong, here:
// I feel like this loading here is the problem
        let videoTrack = videoAsset.tracksWithMediaType(AVMediaTypeVideo)[0]

        // because it makes our parentLayer and videoLayer sizes wrong
        let videoSize       = videoTrack.naturalSize

        // this is returning 1920x1080, so it is rotating the video
        print("\(videoSize.width) , \(videoSize.height)")

So from this point on, our frame sizes are wrong for the rest of the method. Now when we try to create the overlay image layer, its frame is incorrect:

    let aLayer = CALayer()
    aLayer.contents = UIImage(named: "OverlayTestImageOverlay")?.CGImage
    aLayer.frame = CGRectMake(0, 0, videoSize.width, videoSize.height)
    aLayer.opacity = 1

Here is my full method:

  func combineImageVid() {

        let path = NSBundle.mainBundle().pathForResource("SampleMovie", ofType:"MOV")
        let fileURL = NSURL(fileURLWithPath: path!)

        let videoAsset = AVURLAsset(URL: fileURL)
        let mixComposition = AVMutableComposition()

        let compositionVideoTrack = mixComposition.addMutableTrackWithMediaType(AVMediaTypeVideo, preferredTrackID: kCMPersistentTrackID_Invalid)

        var clipVideoTrack = videoAsset.tracksWithMediaType(AVMediaTypeVideo)

        do {
            try compositionVideoTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoAsset.duration), ofTrack: clipVideoTrack[0], atTime: kCMTimeZero)
        }
        catch _ {
            print("failed to insertTimeRange")
        }


        compositionVideoTrack.preferredTransform = videoAsset.preferredTransform

        // I feel like this loading here is the problem
        let videoTrack = videoAsset.tracksWithMediaType(AVMediaTypeVideo)[0]

        // because it makes our parentLayer and videoLayer sizes wrong
        let videoSize       = videoTrack.naturalSize

        // this is returning 1920x1080, so it is rotating the video
        print("\(videoSize.width) , \(videoSize.height)")

        let aLayer = CALayer()
        aLayer.contents = UIImage(named: "OverlayTestImageOverlay")?.CGImage
        aLayer.frame = CGRectMake(0, 0, videoSize.width, videoSize.height)
        aLayer.opacity = 1


        let parentLayer     = CALayer()
        let videoLayer      = CALayer()

        parentLayer.frame   = CGRectMake(0, 0, videoSize.width, videoSize.height)
        videoLayer.frame    = CGRectMake(0, 0, videoSize.width, videoSize.height)

        parentLayer.addSublayer(videoLayer)
        parentLayer.addSublayer(aLayer)


        let videoComp = AVMutableVideoComposition()
        videoComp.renderSize = videoSize
        videoComp.frameDuration = CMTimeMake(1, 30)
        videoComp.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer, inLayer: parentLayer)

        let instruction = AVMutableVideoCompositionInstruction()

        instruction.timeRange = CMTimeRangeMake(kCMTimeZero, mixComposition.duration)

        let mixVideoTrack = mixComposition.tracksWithMediaType(AVMediaTypeVideo)[0]
        mixVideoTrack.preferredTransform = CGAffineTransformMakeRotation(CGFloat(M_PI * 90.0 / 180))

        let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: mixVideoTrack)
        instruction.layerInstructions = [layerInstruction]
        videoComp.instructions = [instruction]


        //  create new file to receive data
        let dirPaths = NSSearchPathForDirectoriesInDomains(.DocumentDirectory, .UserDomainMask, true)
        let docsDir: AnyObject = dirPaths[0]
        let movieFilePath = docsDir.stringByAppendingPathComponent("result.mov")
        let movieDestinationUrl = NSURL(fileURLWithPath: movieFilePath)

        do {
            try NSFileManager.defaultManager().removeItemAtPath(movieFilePath)
        }
        catch _ {}


        // use AVAssetExportSession to export video
        let assetExport = AVAssetExportSession(asset: mixComposition, presetName:AVAssetExportPresetHighestQuality)
        assetExport?.videoComposition = videoComp
        assetExport!.outputFileType = AVFileTypeQuickTimeMovie
        assetExport!.outputURL = movieDestinationUrl
        assetExport!.exportAsynchronouslyWithCompletionHandler({
            switch assetExport!.status{
            case  AVAssetExportSessionStatus.Failed:
                print("failed \(assetExport!.error)")
            case AVAssetExportSessionStatus.Cancelled:
                print("cancelled \(assetExport!.error)")
            default:
                print("Movie complete")


                // play video
                NSOperationQueue.mainQueue().addOperationWithBlock({ () -> Void in
                    print(movieDestinationUrl)
                })
            }
        })
    }
This is what gets exported: [image]
I tried adding the following two methods in order to rotate the video:
class func videoCompositionInstructionForTrack(track: AVCompositionTrack, asset: AVAsset) -> AVMutableVideoCompositionLayerInstruction {

    let instruction = AVMutableVideoCompositionLayerInstruction(assetTrack: track)

    let assetTrack = asset.tracksWithMediaType(AVMediaTypeVideo)[0]

    let transform = assetTrack.preferredTransform
    let assetInfo = orientationFromTransform(transform)
    var scaleToFitRatio = UIScreen.mainScreen().bounds.width / assetTrack.naturalSize.width

    if assetInfo.isPortrait {

        scaleToFitRatio = UIScreen.mainScreen().bounds.width / assetTrack.naturalSize.height
        let scaleFactor = CGAffineTransformMakeScale(scaleToFitRatio, scaleToFitRatio)
        instruction.setTransform(CGAffineTransformConcat(assetTrack.preferredTransform, scaleFactor),
            atTime: kCMTimeZero)
    } else {

        let scaleFactor = CGAffineTransformMakeScale(scaleToFitRatio, scaleToFitRatio)
        var concat = CGAffineTransformConcat(CGAffineTransformConcat(assetTrack.preferredTransform, scaleFactor), CGAffineTransformMakeTranslation(0, UIScreen.mainScreen().bounds.width / 2))
        if assetInfo.orientation == .Down {
            let fixUpsideDown = CGAffineTransformMakeRotation(CGFloat(M_PI))
            let windowBounds = UIScreen.mainScreen().bounds
            let yFix = assetTrack.naturalSize.height + windowBounds.height
            let centerFix = CGAffineTransformMakeTranslation(assetTrack.naturalSize.width, yFix)
            concat = CGAffineTransformConcat(CGAffineTransformConcat(fixUpsideDown, centerFix), scaleFactor)
        }
        instruction.setTransform(concat, atTime: kCMTimeZero)
    }

    return instruction
}

class func orientationFromTransform(transform: CGAffineTransform) -> (orientation: UIImageOrientation, isPortrait: Bool) {
    var assetOrientation = UIImageOrientation.Up
    var isPortrait = false
    if transform.a == 0 && transform.b == 1.0 && transform.c == -1.0 && transform.d == 0 {
        assetOrientation = .Right
        isPortrait = true
    } else if transform.a == 0 && transform.b == -1.0 && transform.c == 1.0 && transform.d == 0 {
        assetOrientation = .Left
        isPortrait = true
    } else if transform.a == 1.0 && transform.b == 0 && transform.c == 0 && transform.d == 1.0 {
        assetOrientation = .Up
    } else if transform.a == -1.0 && transform.b == 0 && transform.c == 0 && transform.d == -1.0 {
        assetOrientation = .Down
    }
    return (assetOrientation, isPortrait)
}
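
For reference, a portrait iPhone clip typically reports a landscape `naturalSize` together with a 90° `preferredTransform`; this is why the print above shows 1920x1080. A minimal sketch (in the question's Swift 2 era syntax; names are illustrative, not from the original post) of deriving the upright render size from that pair:

```swift
// Sketch: derive an upright render size from naturalSize + preferredTransform.
// `videoTrack` is an AVAssetTrack, as in the question.
let t = videoTrack.preferredTransform

// A 90°/270° rotation shows up as a == 0 && d == 0 (b and c carry the rotation).
let isPortrait = (t.a == 0 && t.d == 0)

let naturalSize = videoTrack.naturalSize   // e.g. 1920x1080 for a portrait clip
let renderSize = isPortrait
    ? CGSizeMake(naturalSize.height, naturalSize.width)  // swap to 1080x1920
    : naturalSize
```

This is the same check `orientationFromTransform` performs, reduced to the two matrix entries that matter for the render size.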

I updated my combineImageVid() method, adding this:

let instruction = AVMutableVideoCompositionInstruction()

instruction.timeRange = CMTimeRangeMake(kCMTimeZero, mixComposition.duration)

let mixVideoTrack = mixComposition.tracksWithMediaType(AVMediaTypeVideo)[0]

//let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: mixVideoTrack)
//layerInstruction.setTransform(videoAsset.preferredTransform, atTime: kCMTimeZero)

let layerInstruction = videoCompositionInstructionForTrack(compositionVideoTrack, asset: videoAsset)
That gives me the following output: [image] So I am getting closer, but I feel that since the track is loaded incorrectly to begin with, I need to fix the problem there. Also, I don't know why there is a huge black box now. I thought maybe it was my image layer taking the bounds of the loaded video asset here:
aLayer.frame = CGRectMake(0, 0, videoSize.width, videoSize.height)

However, changing that to some small width/height made no difference. I then thought about adding a crop rect to get rid of the black square, but that didn't work either :(


Following Allen's suggestion of not using these two methods:

class func videoCompositionInstructionForTrack(track: AVCompositionTrack, asset: AVAsset) -> AVMutableVideoCompositionLayerInstruction

class func orientationFromTransform(transform: CGAffineTransform) -> (orientation: UIImageOrientation, isPortrait: Bool) 

but updating my original method to look like this:

videoLayer.frame    = CGRectMake(0, 0, videoSize.height, videoSize.width) //notice the switched width and height
...
videoComp.renderSize = CGSizeMake(videoSize.height,videoSize.width) //this make the final video in portrait
...
layerInstruction.setTransform(videoTrack.preferredTransform, atTime: kCMTimeZero) //important piece of information let composition know you want to rotate the original video in output

We are very close to solving this, but now there seems to be a problem with editing the renderSize. If I change it to anything other than the landscape size, I get this:

[image]


This link might help you: https://dev59.com/5WLVa4cB1Zd3GeqPvmHo - super handsum
I have tried that approach as well :( Thanks for the suggestion though. - random
Can you try changing this in your combineImageVid() method --- compositionVideoTrack.preferredTransform = CGAffineTransformMakeRotation(M_PI_2); - Evol Gate
@AnkitGupta that didn't help at all :( It never seems to respect that setting. - random
2 Answers

21

Here is Apple's documentation on handling orientation:

https://developer.apple.com/library/ios/qa/qa1744/_index.html

If your original video was shot in portrait mode on iOS, its natural size is still landscape, but the mov file carries rotation metadata. To rotate your video, you need to make the following changes to your first code snippet:

videoLayer.frame    = CGRectMake(0, 0, videoSize.height, videoSize.width) //notice the switched width and height
...
videoComp.renderSize = CGSizeMake(videoSize.height,videoSize.width) //this make the final video in portrait
...
layerInstruction.setTransform(videoTrack.preferredTransform, atTime: kCMTimeZero) //important piece of information let composition know you want to rotate the original video in output

Yes, you are really close!
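
Putting the three changes together, the relevant portion of combineImageVid() would look roughly like this (a sketch in the question's Swift 2 era syntax, not a verified drop-in replacement):

```swift
let videoTrack = videoAsset.tracksWithMediaType(AVMediaTypeVideo)[0]
let videoSize  = videoTrack.naturalSize   // still landscape, e.g. 1920x1080

// Swap width and height everywhere a frame is laid out...
parentLayer.frame = CGRectMake(0, 0, videoSize.height, videoSize.width)
videoLayer.frame  = CGRectMake(0, 0, videoSize.height, videoSize.width)
aLayer.frame      = CGRectMake(0, 0, videoSize.height, videoSize.width)

// ...and in the composition's render size.
videoComp.renderSize = CGSizeMake(videoSize.height, videoSize.width)

// Let the layer instruction apply the rotation baked into the source track,
// instead of inventing a rotation with CGAffineTransformMakeRotation.
let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: mixVideoTrack)
layerInstruction.setTransform(videoTrack.preferredTransform, atTime: kCMTimeZero)
```

The key design point: the render size and layer frames describe the portrait output, while `preferredTransform` tells the composition how to rotate the landscape pixel buffers into it.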


Thank you so much for the help! We are very close, but there seems to be a problem when changing the `renderSize`. I have updated my question with a .gif to illustrate. - random
Yes, running it in the simulator was the problem! You are awesome, thanks so much for the help!! - random
2
I tried this code and it works with the back camera, but I still get the same result with the front camera. The video orientation is correct but it gets cut in half. - Alvin John
Please check the following link regarding the front camera: https://dev59.com/PqHia4cB1Zd3GeqPUoNa?noredirect=1&lq=1 - Dania Delbani
if self.currentCamera == .front { movieFileOutputConnection?.isVideoMirrored = false } - Dania Delbani

2
Maybe you should check the videoTrack's preferredTransform so you can give it the exact render size and transform:
CGAffineTransform transform = assetVideoTrack.preferredTransform;
CGFloat rotation = [self rotationWithTransform:transform];

// if the track has been rotated
if (rotation != 0)
{
    // skip a full 360° rotation (compared within the tolerance `valueOfError`)
    if (fabs(rotation - M_PI * 2) >= valueOfError) {

        CGFloat m = rotation / M_PI;
        CGAffineTransform t1 = CGAffineTransformIdentity;
        // rotation is 90° or 270°: swap the render dimensions
        if (fabs(m - 1/2.0) < valueOfError || fabs(m - 3/2.0) < valueOfError) {
            self.mutableVideoComposition.renderSize = CGSizeMake(assetVideoTrack.naturalSize.height, assetVideoTrack.naturalSize.width);
            t1 = CGAffineTransformMakeTranslation(assetVideoTrack.naturalSize.height, 0);
        }
        // rotation is 180°
        if (fabs(m - 1.0) < valueOfError) {
            t1 = CGAffineTransformMakeTranslation(assetVideoTrack.naturalSize.width, assetVideoTrack.naturalSize.height);
        }
        CGAffineTransform t2 = CGAffineTransformRotate(t1, rotation);
        [passThroughLayer setTransform:t2 atTime:kCMTimeZero];
    }
}

// extract the rotation angle (in radians) from the transform
- (CGFloat)rotationWithTransform:(CGAffineTransform)t
{
    return atan2f(t.b, t.a);
}
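
The same rotation check translates to Swift (matching the question's Swift 2 era syntax; a sketch, with `videoComp` and `assetVideoTrack` standing in for the answer's surrounding variables):

```swift
// Extract the rotation angle (in radians) encoded in a track's preferredTransform.
func rotationWithTransform(t: CGAffineTransform) -> CGFloat {
    return atan2(t.b, t.a)   // 0, ±π/2 or π for the common capture orientations
}

let rotation = rotationWithTransform(assetVideoTrack.preferredTransform)
let valueOfError: CGFloat = 0.01   // same tolerance idea as the Objective-C version

// 90° rotation: swap the natural dimensions for the render size
if abs(rotation - CGFloat(M_PI_2)) < valueOfError {
    videoComp.renderSize = CGSizeMake(assetVideoTrack.naturalSize.height,
                                      assetVideoTrack.naturalSize.width)
}
```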

Sorry, I have translated the explanation now. @luk2302 - JonphyChen
