iOS camera live video preview does not match the captured photo


I am working on a camera feature.

The camera shows the user a live video feed; when they tap the capture button, an image is created and handed to them.

The problem is that the resulting image is framed from the very top of the scene, which is higher than what the live preview shows.

Does anyone know how to adjust the camera's frame so that the top of the live video feed matches the top of the image the user is about to capture?

I thought the code below would do this, but it doesn't. Here is my current camera frame code:

 //Add the device to the session, get the video feed it produces and add it to the video feed layer
    func initSessionFeed()
    {
        _session = AVCaptureSession()
        _session.sessionPreset = AVCaptureSessionPresetPhoto
        updateVideoFeed()

        _videoPreviewLayer = AVCaptureVideoPreviewLayer(session: _session)
        _videoPreviewLayer.frame = CGRectMake(0,0, self.frame.width, self.frame.width) //the live footage IN the video feed view
        _videoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
        self.layer.addSublayer(_videoPreviewLayer)//add the footage from the device to the video feed layer
    }

    func initOutputCapture()
    {
        //set up output settings
        _stillImageOutput = AVCaptureStillImageOutput()
        var outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG] //codec key maps to the JPEG codec value
        _stillImageOutput.outputSettings = outputSettings
        _session.addOutput(_stillImageOutput)
        _session.startRunning()
    }

    func configureDevice()
    {
        if _currentDevice != nil
        {
            _currentDevice.lockForConfiguration(nil)
            _currentDevice.focusMode = .Locked
            _currentDevice.unlockForConfiguration()
        }
    }

    func captureImage(callback:(iImage)->Void)
    {
        if(_captureInProcess == true)
        {
            return
        }
        _captureInProcess = true

        var videoConnection:AVCaptureConnection!
        findConnection: for connection in _stillImageOutput.connections
        {
            for port in (connection as AVCaptureConnection).inputPorts
            {
                if (port as AVCaptureInputPort).mediaType == AVMediaTypeVideo
                {
                    videoConnection = connection as AVCaptureConnection
                    break findConnection //exit both loops once the video connection is found
                }
            }
        }

        if videoConnection  != nil
        {
            _stillImageOutput.captureStillImageAsynchronouslyFromConnection(videoConnection)
            {
                (imageSampleBuffer : CMSampleBuffer!, _) in
                let imageDataJpeg = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(imageSampleBuffer)
                var pickedImage = UIImage(data: imageDataJpeg, scale: 1)
                UIGraphicsBeginImageContextWithOptions(pickedImage.size, false, pickedImage.scale)
                pickedImage.drawInRect(CGRectMake(0, 0, pickedImage.size.width, pickedImage.size.height))
                pickedImage = UIGraphicsGetImageFromCurrentImageContext() //this returns a normalized image
                if(self._currentDevice == self._frontCamera)
                {
                    pickedImage = UIImage(CGImage: pickedImage.CGImage, scale: 1.0, orientation: .UpMirrored)
                    pickedImage.drawInRect(CGRectMake(0, 0, pickedImage.size.width, pickedImage.size.height))
                    pickedImage = UIGraphicsGetImageFromCurrentImageContext()
                }
                UIGraphicsEndImageContext()
                var image:iImage = iImage(uiimage: pickedImage)
                self._captureInProcess = false
                callback(image)
            }
        }
    }

If I adjust the frame of the AVCaptureVideoPreviewLayer, for example by raising the y value, I just get a black bar showing the offset. I am very curious why the top of the video frame does not match my output image.
I do "crop" the camera feed into a perfect square, but why isn't the top of the live feed the actual top (the captured image shows content from higher up that the camera feed never displays)?
Update:
Here are before and after screenshots of what I mean:
Before: [Before image] This is what the live feed displays
After: [After image] When the user taps capture, this is the resulting image

So you want the resulting image to show the top of the scene, just like you see in the live video? - gabbler
I want the camera's live feed (before) to show what the user will get after taking the photo (after). - Aggressor
Magical AV camera sample code: http://drivecurrent.com/devops/using-swift-and-avfoundation-to-create-a-custom-camera-view-for-an-ios-app/#comment-4686 - Fattie
2 Answers

Instead of

_videoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill

you could try:

_videoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspect

In general, the width and height of the preview and of the captured image must match. You may need to do more "cropping", either on the preview, on the final image, or on both.
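To make that cropping concrete: with AVLayerVideoGravityResizeAspectFill, the preview layer shows only the centered sub-rectangle of the capture frame whose aspect ratio matches the layer, so the captured photo contains extra content beyond what was previewed. Below is a minimal sketch of computing that visible region (the function `aspectFillVisibleRect` and its parameter names are my own, not AVFoundation API); the resulting rect could then be used with `CGImageCreateWithImageInRect` to crop the photo so it matches the preview:

```swift
import Foundation

// For AVLayerVideoGravityResizeAspectFill, the layer displays only the centered
// sub-rect of the source image whose aspect ratio matches the layer's bounds.
// `imageSize` is the full captured image's size and `layerSize` the preview
// layer's size; both names are illustrative, not AVFoundation API.
func aspectFillVisibleRect(imageSize: CGSize, layerSize: CGSize) -> CGRect {
    // Aspect-fill scales by the larger of the two axis ratios, so the image
    // covers the whole layer.
    let scale = max(layerSize.width / imageSize.width,
                    layerSize.height / imageSize.height)
    // The part of the image that actually fits inside the layer at that scale.
    let visibleSize = CGSize(width: layerSize.width / scale,
                             height: layerSize.height / scale)
    // Centered within the full image; everything outside is cropped by the layer.
    return CGRect(x: (imageSize.width - visibleSize.width) / 2,
                  y: (imageSize.height - visibleSize.height) / 2,
                  width: visibleSize.width,
                  height: visibleSize.height)
}
```

For example, a 3264×2448 photo previewed in a 320×320 square layer yields a centered 2448×2448 rect starting at x = 408, y = 0; cropping the captured image to that rect (after normalizing its orientation) should make it line up with what the aspect-fill preview displayed.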


I ran into the same problem, and this code worked for me:

previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
var bounds = UIScreen.mainScreen().bounds

previewLayer?.bounds = bounds
previewLayer?.videoGravity = AVLayerVideoGravityResize
previewLayer?.position = CGPointMake(CGRectGetMidX(bounds), CGRectGetMidY(bounds))
self.cameraPreview.layer.addSublayer(previewLayer)
captureSession.startRunning()
