Creating a UIImage from a CMSampleBuffer

20

This is not the same as the countless questions about converting a CMSampleBuffer to a UIImage. I simply want to know why I can't convert it like this:

CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);
CIImage * imageFromCoreImageLibrary = [CIImage imageWithCVPixelBuffer: pixelBuffer];
UIImage * imageForUI = [UIImage imageWithCIImage: imageFromCoreImageLibrary];

It looks much simpler because it works for the YCbCr color space as well as RGBA and others. Is there something wrong with that code?


I know this question is old, but it may serve as a reference for others. Just a note on your first line: you are taking the CVImageBufferRef returned by CMSampleBufferGetImageBuffer and casting it to a CVPixelBufferRef, which are two different things... - Duck
This answer will help: https://dev59.com/olgQ5IYBdhLWcg3wSiB1#43470666 - Rubaiyat Jahan Mumu
8 Answers

27

For JPEG images:

Swift 4:

let buff: CMSampleBuffer ...            // Here you have your CMSampleBuffer
if let imageData = AVCapturePhotoOutput.jpegPhotoDataRepresentation(forJPEGSampleBuffer: buff, previewPhotoSampleBuffer: nil) {
    let image = UIImage(data: imageData) //  Here you have UIImage
}

4
You are correct. But if someone googles "UIImage from a CMSampleBuffer", this post comes up first, as it did for me. This answer correctly answers the question in the title. - Alexander Volkov
6
Because according to the documentation this only works when the CMSampleBuffer's source is a JPEG image, which the answer does not mention and the question does not ask for. For the many use cases that use .H264 as the source, this will fail. https://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVCaptureStillImageOutput_Class/#//apple_ref/occ/clm/AVCaptureStillImageOutput/jpegStillImageNSDataRepresentation: - Luke Van In
Correct, +[AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:] also fails when it is not a JPEG sample buffer. It crashed in my case. - Chanchal Warde
I got a crash: uncaughtExceptionHandler: *** +[AVCapturePhotoOutput JPEGPhotoDataRepresentationForJPEGSampleBuffer:previewPhotoSampleBuffer:] Not a jpeg sample buffer 2018-12-19 19:52:13.498514+0530 Stream[17414:2467572] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** +[AVCapturePhotoOutput JPEGPhotoDataRepresentationForJPEGSampleBuffer:previewPhotoSampleBuffer:] Not a jpeg sample buffer' - Mubin Mall
@Luke Van In, it is mentioned right there: jpegPhotoDataRepresentation - Alexander Volkov
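
Several of the comments above boil down to the same point: this API throws unless the sample buffer actually contains JPEG data. A minimal sketch of requesting a JPEG-encoded capture so the delegate receives a buffer this call accepts (assumes an AVCapturePhotoOutput already attached to a running session and iOS 11's AVVideoCodecType; the helper name is hypothetical):

import AVFoundation

// Hypothetical helper: photoOutput must already be added to a running AVCaptureSession.
func captureJPEGPhoto(from photoOutput: AVCapturePhotoOutput,
                      delegate: AVCapturePhotoCaptureDelegate) {
    // Explicitly request JPEG so the delegate's sample buffer is one that
    // jpegPhotoDataRepresentation(forJPEGSampleBuffer:previewPhotoSampleBuffer:) accepts.
    let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])
    photoOutput.capturePhoto(with: settings, delegate: delegate)
}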

26

Using Swift 3 and iOS 10's AVCapturePhotoOutput. Imports:

import UIKit
import CoreData
import CoreMotion
import AVFoundation

Create a UIView for the preview and link it to the main class:
  @IBOutlet var preview: UIView!

Create this to set up the camera session (kCVPixelFormatType_32BGRA is very important!!):
  lazy var cameraSession: AVCaptureSession = {
    let s = AVCaptureSession()
    s.sessionPreset = AVCaptureSessionPresetHigh
    return s
  }()

  lazy var previewLayer: AVCaptureVideoPreviewLayer = {
    let previewl:AVCaptureVideoPreviewLayer =  AVCaptureVideoPreviewLayer(session: self.cameraSession)
    previewl.frame = self.preview.bounds
    return previewl
  }()

  func setupCameraSession() {
    let captureDevice = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo) as AVCaptureDevice

    do {
      let deviceInput = try AVCaptureDeviceInput(device: captureDevice)

      cameraSession.beginConfiguration()

      if (cameraSession.canAddInput(deviceInput) == true) {
        cameraSession.addInput(deviceInput)
      }

      let dataOutput = AVCaptureVideoDataOutput()
      dataOutput.videoSettings = [(kCVPixelBufferPixelFormatTypeKey as NSString) : NSNumber(value: kCVPixelFormatType_32BGRA as UInt32)]
      dataOutput.alwaysDiscardsLateVideoFrames = true

      if (cameraSession.canAddOutput(dataOutput) == true) {
        cameraSession.addOutput(dataOutput)
      }

      cameraSession.commitConfiguration()

      let queue = DispatchQueue(label: "fr.popigny.videoQueue", attributes: [])
      dataOutput.setSampleBufferDelegate(self, queue: queue)

    }
    catch let error as NSError {
      NSLog("\(error), \(error.localizedDescription)")
    }
  }

In viewWillAppear:

  override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    setupCameraSession()
  }

In viewDidAppear:

  override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    preview.layer.addSublayer(previewLayer)
    cameraSession.startRunning()
  }

Create a function to capture the output:
  func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {

    // Here you collect each frame and process it
    let ts:CMTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
    self.mycapturedimage = imageFromSampleBuffer(sampleBuffer: sampleBuffer)
}

Here is the code that converts a kCVPixelFormatType_32BGRA CMSampleBuffer into a UIImage. The key is that the bitmapInfo must correspond to 32BGRA: 32-bit little-endian byte order with premultiplied-first alpha info:

  func imageFromSampleBuffer(sampleBuffer : CMSampleBuffer) -> UIImage
  {
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    let  imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer!, CVPixelBufferLockFlags.readOnly);


    // Get the base address of the pixel buffer
    let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer!);

    // Get the number of bytes per row for the pixel buffer
    let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer!);
    // Get the pixel buffer width and height
    let width = CVPixelBufferGetWidth(imageBuffer!);
    let height = CVPixelBufferGetHeight(imageBuffer!);

    // Create a device-dependent RGB color space
    let colorSpace = CGColorSpaceCreateDeviceRGB();

    // Create a bitmap graphics context with the sample buffer data
    var bitmapInfo: UInt32 = CGBitmapInfo.byteOrder32Little.rawValue
    bitmapInfo |= CGImageAlphaInfo.premultipliedFirst.rawValue & CGBitmapInfo.alphaInfoMask.rawValue
    //let bitmapInfo: UInt32 = CGBitmapInfo.alphaInfoMask.rawValue
    let context = CGContext.init(data: baseAddress, width: width, height: height, bitsPerComponent: 8, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo)
    // Create a Quartz image from the pixel data in the bitmap graphics context
    let quartzImage = context?.makeImage();
    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer!, CVPixelBufferLockFlags.readOnly);

    // Create an image object from the Quartz image
    let image = UIImage.init(cgImage: quartzImage!);

    return (image);
  }
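
Not shown above: the class that owns these methods must adopt AVCaptureVideoDataOutputSampleBufferDelegate, or setSampleBufferDelegate(self, queue: queue) will not compile, and the mycapturedimage property referenced in captureOutput needs to be declared somewhere. A minimal sketch (the class name is assumed):

class CameraViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {
    @IBOutlet var preview: UIView!
    var mycapturedimage: UIImage?
    // cameraSession, previewLayer, setupCameraSession(),
    // captureOutput(_:didOutputSampleBuffer:from:) and
    // imageFromSampleBuffer(sampleBuffer:) from the snippets above go here
}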

How do you convert RGBA to CMYK? Not by adjusting each pixel, but quickly, using some API. - Alexander Volkov
What about orientation? - Durdu
2
let image = UIImage.init(cgImage: quartzImage!); doesn't work, I get this error: Thread 14: Fatal error: Unexpectedly found nil while unwrapping an Optional value - user924
By default it comes in a bi-planar format such as kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange, because that is more efficient and needs fewer bytes to represent the image. - xaphod
If you're just creating images, use a non-planar format such as _32BGRA. If you're stuck with a planar format, look at the Accelerate/vImage framework... it is harder to use but very efficient: https://developer.apple.com/documentation/accelerate/vimage - xaphod
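
If you're not sure which pixel format your frames are actually arriving in, you can check the buffer directly; a small sketch (assumes imageBuffer is the non-nil result of CMSampleBufferGetImageBuffer):

// Inspect the pixel format the capture output delivered (sketch).
let format = CVPixelBufferGetPixelFormatType(imageBuffer!)
switch format {
case kCVPixelFormatType_32BGRA:
    print("32BGRA: fine for the CGContext-based conversion above")
case kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
     kCVPixelFormatType_420YpCbCr8BiPlanarFullRange:
    print("bi-planar YCbCr: convert via CIImage or vImage instead")
default:
    print("other pixel format: \(format)")
}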

14

Use the following code to convert the image from a pixel buffer. Option 1:

CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];

CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef myImage = [context
                         createCGImage:ciImage
                         fromRect:CGRectMake(0, 0,
                                             CVPixelBufferGetWidth(pixelBuffer),
                                             CVPixelBufferGetHeight(pixelBuffer))];

UIImage *uiImage = [UIImage imageWithCGImage:myImage];
CGImageRelease(myImage); // createCGImage returns a +1 reference that ARC does not manage

Option 2:

int w = (int)CVPixelBufferGetWidth(pixelBuffer);
int h = (int)CVPixelBufferGetHeight(pixelBuffer);
int r = (int)CVPixelBufferGetBytesPerRow(pixelBuffer);
int bytesPerPixel = r/w;

// Lock the pixel buffer before reading its memory
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
unsigned char *buffer = CVPixelBufferGetBaseAddress(pixelBuffer);

UIGraphicsBeginImageContext(CGSizeMake(w, h));

CGContextRef c = UIGraphicsGetCurrentContext();

unsigned char* data = CGBitmapContextGetData(c);
if (data != NULL) {
    int maxY = h;
    for(int y = 0; y<maxY; y++) {
        for(int x = 0; x<w; x++) {
            int offset = bytesPerPixel*((w*y)+x);
            data[offset] = buffer[offset];     // R
            data[offset+1] = buffer[offset+1]; // G
            data[offset+2] = buffer[offset+2]; // B
            data[offset+3] = buffer[offset+3]; // A
        }
    }
}
CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly); // done reading the pixel buffer
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();

UIGraphicsEndImageContext();

I'm looking for the reason why I can't use the three lines in my post. Also, does your approach work for both YCbCr 420f and RGBA? Don't you need to access the separate planes independently? - mrplants
Because you need a context to capture the image. The context supports different types that you can change as needed, but it only supports the RGBA and CMYK color spaces. - Dipen Panchasara
How do I change the context's color space from YCbCr to RGB? - mrplants
Basically there are three color spaces available in iOS: CGColorSpaceCreateDeviceCMYK(); CGColorSpaceCreateDeviceGray(); CGColorSpaceCreateDeviceRGB(); - Dipen Panchasara
Note that since iOS 6.0 there is a +[UIImage imageWithCIImage:scale:orientation:] method that can convert directly from your first line to a UIImage. - natevw

12

I wrote a simple extension for Swift 4.x/3.x that produces a UIImage from a CMSampleBuffer.

It also handles scaling and orientation, though you can just accept the default values if they work for you.

import UIKit
import AVFoundation

extension CMSampleBuffer {
    func image(orientation: UIImageOrientation = .up, 
               scale: CGFloat = 1.0) -> UIImage? {
        if let buffer = CMSampleBufferGetImageBuffer(self) {
            let ciImage = CIImage(cvPixelBuffer: buffer)

            return UIImage(ciImage: ciImage, 
                           scale: scale,
                           orientation: orientation)
        }

        return nil
    }
}
  1. If it can get buffer data from the sample, it proceeds; otherwise it returns nil.
  2. A CIImage is initialized with the buffer.
  3. It returns a UIImage initialized with the ciImage value along with the scale and orientation values. If none are provided, the defaults of up and 1.0 respectively are used (see the usage sketch below).
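
A minimal usage sketch (assuming sampleBuffer arrives in an AVCaptureVideoDataOutputSampleBufferDelegate callback; the orientation value here is just an example):

// Convert the incoming frame with the extension above (sketch).
guard let frame = sampleBuffer.image(orientation: .right, scale: UIScreen.main.scale) else { return }
// ... use `frame` (a UIImage) for display or further processing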

1
Great extension, no idea why it doesn't have any upvotes. Works flawlessly for me. - Flupp
@Flupp Thanks, I prefer to use extensions whenever possible since they fit nicely into the Swift paradigm. - CodeBender
Thanks for the answer. Do you have code for converting a UIImage to a CMSampleBuffer? - Crashalot

5

Swift 5.0

if let cvImageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
   let ciimage = CIImage(cvImageBuffer: cvImageBuffer)
   let context = CIContext()

   if let cgImage = context.createCGImage(ciimage, from: ciimage.extent) {
      let uiImage = UIImage(cgImage: cgImage)
   }
}
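
If you call this once per frame, one cheap improvement is to create the CIContext a single time and reuse it rather than allocating a new one per sample buffer (the next answer argues the Core Image path is still comparatively CPU-heavy for per-frame work). A minimal sketch, class name assumed:

// Sketch: hold one CIContext and reuse it for every conversion.
final class SampleBufferConverter {
    private let context = CIContext()

    func uiImage(from sampleBuffer: CMSampleBuffer) -> UIImage? {
        guard let cvImageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
        let ciImage = CIImage(cvImageBuffer: cvImageBuffer)
        guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return nil }
        return UIImage(cgImage: cgImage)
    }
}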

Although it looks a bit redundant, in our case going through context.createCGImage (rather than simply calling UIImage(ciImage:)) correctly preserved the buffer's dimensions. Thanks! - agurtovoy
Why create a CIContext() every time you get a sample buffer? - famfamfam

4

Attention everyone: don't use methods like the following:

    private let context = CIContext()

    private func imageFromSampleBuffer2(_ sampleBuffer: CMSampleBuffer) -> UIImage? {
        guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
        let ciImage = CIImage(cvPixelBuffer: imageBuffer)
        guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return nil }
        return UIImage(cgImage: cgImage)
    }

They use more CPU and take longer to convert.

Use the solution from https://dev59.com/rmUo5IYBdhLWcg3w2CWJ#40193359 instead.

Don't forget to set the following settings on your AVCaptureVideoDataOutput:

    videoOutput = AVCaptureVideoDataOutput()

    videoOutput.videoSettings = [(kCVPixelBufferPixelFormatTypeKey as String) : NSNumber(value: kCVPixelFormatType_32BGRA as UInt32)]
    //videoOutput.alwaysDiscardsLateVideoFrames = true

    videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "MyQueue"))

Conversion method:

    func imageFromSampleBuffer(_ sampleBuffer : CMSampleBuffer) -> UIImage {
        // Get a CMSampleBuffer's Core Video image buffer for the media data
        let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        // Lock the base address of the pixel buffer
        CVPixelBufferLockBaseAddress(imageBuffer!, CVPixelBufferLockFlags.readOnly);

        // Get the base address of the pixel buffer
        let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer!);

        // Get the number of bytes per row for the pixel buffer
        let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer!);
        // Get the pixel buffer width and height
        let width = CVPixelBufferGetWidth(imageBuffer!);
        let height = CVPixelBufferGetHeight(imageBuffer!);

        // Create a device-dependent RGB color space
        let colorSpace = CGColorSpaceCreateDeviceRGB();

        // Create a bitmap graphics context with the sample buffer data
        var bitmapInfo: UInt32 = CGBitmapInfo.byteOrder32Little.rawValue
        bitmapInfo |= CGImageAlphaInfo.premultipliedFirst.rawValue & CGBitmapInfo.alphaInfoMask.rawValue
        //let bitmapInfo: UInt32 = CGBitmapInfo.alphaInfoMask.rawValue
        let context = CGContext.init(data: baseAddress, width: width, height: height, bitsPerComponent: 8, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo)
        // Create a Quartz image from the pixel data in the bitmap graphics context
        let quartzImage = context?.makeImage();
        // Unlock the pixel buffer
        CVPixelBufferUnlockBaseAddress(imageBuffer!, CVPixelBufferLockFlags.readOnly);

        // Create an image object from the Quartz image
        let image = UIImage.init(cgImage: quartzImage!);

        return (image);
    }
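
For completeness, here is a minimal sketch of where this gets called from, assuming the Swift 4 AVCaptureVideoDataOutputSampleBufferDelegate callback and that self was set as videoOutput's sample buffer delegate above (imageView is an assumed outlet):

    // Swift 4 video-data delegate callback (sketch); runs on the "MyQueue" dispatch queue.
    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        let image = imageFromSampleBuffer(sampleBuffer)
        DispatchQueue.main.async { [weak self] in
            // imageView is an assumed outlet; update UIKit on the main thread only
            self?.imageView.image = image
        }
    }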

3

This is going to come up a lot in connection with the iOS 10 AVCapturePhotoOutput class. Suppose the user wants to snap a photo, you call capturePhoto(with:delegate:), and your settings include a request for a preview image. That is a very efficient way to get a preview image, but how are you going to display it in your interface? The preview image arrives as a CMSampleBuffer in your implementation of the delegate method:

func capture(_ output: AVCapturePhotoOutput, 
    didFinishProcessingPhotoSampleBuffer buff: CMSampleBuffer?, 
    previewPhotoSampleBuffer: CMSampleBuffer?, 
    resolvedSettings: AVCaptureResolvedPhotoSettings, 
    bracketSettings: AVCaptureBracketedStillImageSettings?, 
    error: Error?) {

You need to convert a CMSampleBuffer, previewPhotoSampleBuffer, to a UIImage. How are you going to do that? Like this:
if let prev = previewPhotoSampleBuffer {
    if let buff = CMSampleBufferGetImageBuffer(prev) {
        let cim = CIImage(cvPixelBuffer: buff)
        let im = UIImage(ciImage: cim)
        // and now you have a UIImage! do something with it ...
    }
}

I've tried this, but there's a problem: the im has no EXIF information (e.g. orientation). - cleexiang
The EXIF information arrives separately. This is just the preview, nothing else. - matt
Thanks for sharing; do you have a code example for converting a UIImage to a CMSampleBuffer? - Crashalot

-1

A Swift 4 / iOS 11 version of Popigny's answer:

import Foundation
import AVFoundation
import UIKit

class ViewController : UIViewController {
    let captureSession = AVCaptureSession()
    let photoOutput = AVCapturePhotoOutput()
    let cameraPreview = UIView(frame: .zero)
    let progressIndicator = ProgressIndicator()

    override func viewDidLoad() {
        super.viewDidLoad()

        setupVideoPreview()

        do {
            try setupCaptureSession()
        } catch {
            let errorMessage = String(describing:error)
            print("[--ERROR--]: \(#file):\(#function):\(#line): " + errorMessage)
            alert(title: "Error", message: errorMessage)
        }
    }

    private func setupCaptureSession() throws {
        let deviceDiscovery = AVCaptureDevice.DiscoverySession(deviceTypes: [AVCaptureDevice.DeviceType.builtInWideAngleCamera], mediaType: AVMediaType.video, position: AVCaptureDevice.Position.back)
        let devices = deviceDiscovery.devices

        guard let captureDevice = devices.first else {
            let errorMessage = "No camera available"
            print("[--ERROR--]: \(#file):\(#function):\(#line): " + errorMessage)
            alert(title: "Error", message: errorMessage)
            return
        }

        let captureDeviceInput = try AVCaptureDeviceInput(device: captureDevice)
        captureSession.addInput(captureDeviceInput)
        captureSession.sessionPreset = AVCaptureSession.Preset.photo
        captureSession.startRunning()

        if captureSession.canAddOutput(photoOutput) {
            captureSession.addOutput(photoOutput)
        }
    }

    private func setupVideoPreview() {

        let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        previewLayer.bounds = view.bounds
        previewLayer.position = CGPoint(x:view.bounds.midX, y:view.bounds.midY)
        previewLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill

        cameraPreview.layer.addSublayer(previewLayer)
        cameraPreview.addGestureRecognizer(UITapGestureRecognizer(target: self, action:#selector(capturePhoto)))

        cameraPreview.translatesAutoresizingMaskIntoConstraints = false

        view.addSubview(cameraPreview)

        let viewsDict = ["cameraPreview":cameraPreview]
        view.addConstraints(NSLayoutConstraint.constraints(withVisualFormat: "V:|-0-[cameraPreview]-0-|", options: [], metrics: nil, views: viewsDict))
        view.addConstraints(NSLayoutConstraint.constraints(withVisualFormat: "H:|-0-[cameraPreview]-0-|", options: [], metrics: nil, views: viewsDict))

    }

    @objc func capturePhoto(_ sender: UITapGestureRecognizer) {
        progressIndicator.add(toView: view)
        let photoOutputSettings = AVCapturePhotoSettings(format: [AVVideoCodecKey:AVVideoCodecType.jpeg])
        photoOutput.capturePhoto(with: photoOutputSettings, delegate: self)
    }

    func saveToPhotosAlbum(_ image: UIImage) {
        UIImageWriteToSavedPhotosAlbum(image, self, #selector(photoWasSavedToAlbum), nil)
    }

    @objc func photoWasSavedToAlbum(_ image: UIImage, _ error: Error?, _ context: Any?) {
        alert(message: "Photo saved to device photo album")
    }

    func alert(title: String?=nil, message:String?=nil) {
        let alert = UIAlertController(title: title, message: message, preferredStyle: .alert)
        alert.addAction(UIAlertAction(title: "OK", style: .default, handler: nil))
        present(alert, animated:true)
    }

}

extension ViewController : AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {

        guard  let photoData = photo.fileDataRepresentation() else {
            let errorMessage = "Photo capture did not provide output data"
            print("[--ERROR--]: \(#file):\(#function):\(#line): " + errorMessage)
            alert(title: "Error", message: errorMessage)
            return
        }

        guard let image = UIImage(data: photoData) else {
            let errorMessage = "could not create image to save"
            print("[--ERROR--]: \(#file):\(#function):\(#line): " + errorMessage)
            alert(title: "Error", message: errorMessage)
            return
        }

        saveToPhotosAlbum(image)

        progressIndicator.hide()
    }
}

A complete example project to see everything in context: https://github.com/cruinh/CameraCapture

1
Where is the CMSampleBuffer? - user924
1
You didn't understand the question. - user924
