I'm using CoreML in a sample iOS app with my own custom-trained object detection model. When the model runs on video frames it performs well, returning the correct class detections and bounding boxes.
However, when it runs on a still image, the bounding boxes are all wrong and every prediction is assigned to a single class.
The model is set up identically in both cases.
The model's prediction call is handled like this:
func processClassifications(for request: VNRequest, error: Error?) -> [Prediction]? {
    // The model outputs raw feature values, so the results arrive as
    // VNCoreMLFeatureValueObservation rather than VNRecognizedObjectObservation.
    guard let observations = request.results as? [VNCoreMLFeatureValueObservation],
          let outputArray = observations.first?.featureValue.multiArrayValue else {
        return nil
    }
    // "postprocess" is a second CoreML model that decodes the raw output tensor.
    guard let decoded = try? postprocess().prediction(output: outputArray) else {
        return nil
    }
    // Some processing from decoded -> predictions
    return predictions
}
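One diagnostic I plan to add inside processClassifications, right after the cast, is logging the raw tensor so the video and still-image paths can be compared on the same input (a sketch of my own, not part of the app):

    // Diagnostic only: log the raw output tensor's shape so the two code
    // paths can be compared before any postprocessing happens.
    if let raw = observations.first?.featureValue.multiArrayValue {
        print("raw output shape: \(raw.shape), count: \(raw.count)")
    }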
For video:
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    // self.visionModel wraps the same "MODEL_TF2keras_OutConv12().model" used below.
    guard let visionModel = self.visionModel else { return }

    var requestOptions: [VNImageOption: Any] = [:]
    if let cameraIntrinsicData = CMGetAttachment(sampleBuffer, key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix, attachmentModeOut: nil) {
        requestOptions = [.cameraIntrinsics: cameraIntrinsicData]
    }

    // EXIFOrientation.rightTop has raw value 6, i.e. CGImagePropertyOrientation.right.
    guard let orientation = CGImagePropertyOrientation(rawValue: UInt32(EXIFOrientation.rightTop.rawValue)) else { return }

    let trackingRequest = VNCoreMLRequest(model: visionModel) { request, error in
        // processClassifications runs the CoreML output through postprocessing
        // and returns the predictions for this frame.
        guard let predictions = self.processClassifications(for: request, error: error) else { return }
    }
    trackingRequest.imageCropAndScaleOption = .centerCrop

    let imageRequestHandler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: orientation, options: requestOptions)
    try? imageRequestHandler.perform([trackingRequest])
}
For a single image, the prediction is handled like this:
lazy var classificationRequest: VNCoreMLRequest = {
    do {
        let model = try VNCoreMLModel(for: MODEL_TF2keras_OutConv12().model)
        let request = VNCoreMLRequest(model: model) { [weak self] request, error in
            let predictions = self?.processClassifications(for: request, error: error)
        }
        request.imageCropAndScaleOption = .centerCrop
        return request
    } catch {
        fatalError("Failed to load Vision ML model: \(error)")
    }
}()
func updateClassifications(for image: UIImage) {
    let orientation = CGImagePropertyOrientation(image.imageOrientation)
    guard let ciImage = CIImage(image: image) else { return }
    let handler = VNImageRequestHandler(ciImage: ciImage, orientation: orientation)
    try? handler.perform([self.classificationRequest])
}
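For reference, CGImagePropertyOrientation(image.imageOrientation) above is not an SDK initializer; it is the standard conversion extension from Apple's Vision sample code:

    import UIKit
    import ImageIO

    // Maps UIKit's image orientation onto the EXIF-style orientation Vision expects.
    extension CGImagePropertyOrientation {
        init(_ uiOrientation: UIImage.Orientation) {
            switch uiOrientation {
            case .up: self = .up
            case .upMirrored: self = .upMirrored
            case .down: self = .down
            case .downMirrored: self = .downMirrored
            case .left: self = .left
            case .leftMirrored: self = .leftMirrored
            case .right: self = .right
            case .rightMirrored: self = .rightMirrored
            @unknown default: self = .up
            }
        }
    }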
My guess is that the difference comes from the input type: the video path hands Vision a CVPixelBuffer, while the still-image path hands it a CIImage. My questions are: why would the results differ this much when the request, the model, and the processing function are identical, and how can I fix it? Any help is appreciated.
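To rule the input type out, one experiment I'm considering is converting the UIImage to a CVPixelBuffer so both paths feed Vision exactly the same buffer type. A minimal sketch (pixelBuffer(from:) is my own helper, not part of the project):

    import UIKit
    import CoreVideo

    // Renders a UIImage into a 32BGRA CVPixelBuffer so the still-image path can
    // use VNImageRequestHandler(cvPixelBuffer:...) just like the video path.
    func pixelBuffer(from image: UIImage) -> CVPixelBuffer? {
        guard let cgImage = image.cgImage else { return nil }
        let width = cgImage.width
        let height = cgImage.height

        let attrs: [CFString: Any] = [
            kCVPixelBufferCGImageCompatibilityKey: true,
            kCVPixelBufferCGBitmapContextCompatibilityKey: true
        ]
        var buffer: CVPixelBuffer?
        let status = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                         kCVPixelFormatType_32BGRA,
                                         attrs as CFDictionary, &buffer)
        guard status == kCVReturnSuccess, let pixelBuffer = buffer else { return nil }

        CVPixelBufferLockBaseAddress(pixelBuffer, [])
        defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

        // Draw the image into the buffer's memory as BGRA.
        guard let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                                      width: width, height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue
                                          | CGBitmapInfo.byteOrder32Little.rawValue)
        else { return nil }
        context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
        return pixelBuffer
    }

Usage would then mirror the video path:

    if let pb = pixelBuffer(from: image) {
        let handler = VNImageRequestHandler(cvPixelBuffer: pb, orientation: orientation, options: [:])
        try? handler.perform([self.classificationRequest])
    }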