SwiftUI: Recognize elements with Firebase ML Kit and draw rectangles over them

I am currently trying to draw the bounding boxes of text recognized by Firebase ML Kit on top of an image. So far I have not been successful: all of the boxes are rendered off screen, so none of them are visible. I used this article as a reference: https://medium.com/swlh/how-to-draw-bounding-boxes-with-swiftui-d93d1414eb00 and this project: https://github.com/firebase/quickstart-ios/blob/master/mlvision/MLVisionExample/ViewController.swift. This is the view in which the boxes should be displayed:
struct ImageScanned: View {
    var image: UIImage
    @Binding var rectangles: [CGRect]
    @State var viewSize: CGSize = .zero

    var body: some View {
        // TODO: fix scaling
        ZStack {
            Image(uiImage: image)
                .resizable()
                .scaledToFit()
                .overlay(
                    GeometryReader { geometry in
                        ZStack {
                            ForEach(self.transformRectangles(geometry: geometry)) { rect in
                                Rectangle()
                                    .path(in: CGRect(
                                        x: rect.x,
                                        y: rect.y,
                                        width: rect.width,
                                        height: rect.height))
                                    .stroke(Color.red, lineWidth: 2.0)
                            }
                        }
                    }
                )
        }
    }

    private func transformRectangles(geometry: GeometryProxy) -> [DetectedRectangle] {
        var rectangles: [DetectedRectangle] = []

        let imageViewWidth = geometry.frame(in: .global).size.width
        let imageViewHeight = geometry.frame(in: .global).size.height
        let imageWidth = image.size.width
        let imageHeight = image.size.height

        let imageViewAspectRatio = imageViewWidth / imageViewHeight
        let imageAspectRatio = imageWidth / imageHeight
        let scale = (imageViewAspectRatio > imageAspectRatio)
            ? imageViewHeight / imageHeight : imageViewWidth / imageWidth

        let scaledImageWidth = imageWidth * scale
        let scaledImageHeight = imageHeight * scale
        let xValue = (imageViewWidth - scaledImageWidth) / CGFloat(2.0)
        let yValue = (imageViewHeight - scaledImageHeight) / CGFloat(2.0)

        var transform = CGAffineTransform.identity.translatedBy(x: xValue, y: yValue)
        transform = transform.scaledBy(x: scale, y: scale)

        for rect in self.rectangles {
            let rectangle = rect.applying(transform)
            rectangles.append(DetectedRectangle(width: rectangle.width, height: rectangle.height, x: rectangle.minX, y: rectangle.minY))
        }
        return rectangles
    }
}

struct DetectedRectangle: Identifiable {
    var id = UUID()
    var width: CGFloat = 0
    var height: CGFloat = 0
    var x: CGFloat = 0
    var y: CGFloat = 0
}

And this is the parent view that contains it:

struct StartScanView: View {
    @State var showCaptureImageView: Bool = false
    @State var image: UIImage? = nil
    @State var rectangles: [CGRect] = []

    var body: some View {
        ZStack {
            if showCaptureImageView {
                CaptureImageView(isShown: $showCaptureImageView, image: $image)
            } else {
                VStack {
                    Button(action: {
                        self.showCaptureImageView.toggle()
                    }) {
                        Text("Start Scanning")
                    }

                    // show here View with rectangles on top of image
                    if self.image != nil {
                        ImageScanned(image: self.image ?? UIImage(), rectangles: $rectangles)
                    }

                    Button(action: {
                        self.processImage()
                    }) {
                        Text("Process Image")
                    }
                }
            }
        }
    }

    func processImage() {
        let scaledImageProcessor = ScaledElementProcessor()
        if image != nil {
            scaledImageProcessor.process(in: image!) { text in
                for block in text.blocks {
                    for line in block.lines {
                        for element in line.elements {
                            self.rectangles.append(element.frame)
                        }
                    }
                }
            }
        }
    }
}

The tutorial's calculation makes the rectangles too big, while the sample project's makes them too small (and the same applies to the height). Unfortunately, I can't find any documentation on which dimensions Firebase uses to determine the element frames. This is what it looks like:

(screenshot: the drawn rectangles do not match the recognized text)

If I don't scale the width and height at all, the rectangles come close to the size they should have (though not exactly), so I assume ML Kit's frame sizes are not simply proportional to the image's size/width.
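As a sanity check, a minimal sketch like the following (the `logFrames` helper is a hypothetical name, not part of ML Kit) can confirm that the raw frames lie within the image's pixel bounds, i.e. that the frames are in the original image's coordinate space and only the aspect-fit transform is missing:

import UIKit

// Hypothetical diagnostic helper: ML Kit reports element frames in the
// original image's coordinate space, so every frame should fit inside
// the image bounds before any view-space transform is applied.
func logFrames(_ frames: [CGRect], in image: UIImage) {
    let imageBounds = CGRect(origin: .zero, size: image.size)
    for frame in frames {
        // `contained: true` means the values are plausible and only the
        // aspect-fit translate + scale is missing, not a coordinate mismatch.
        print("frame: \(frame), image size: \(image.size), contained: \(imageBounds.contains(frame))")
    }
}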

2 Answers


This is how I changed the ForEach loop:

Image(uiImage: uiimage!)
    .resizable()
    .scaledToFit()
    .overlay(
        GeometryReader { (geometry: GeometryProxy) in
            ForEach(self.blocks, id: \.self) { (block: VisionTextBlock) in
                Rectangle()
                    .path(in: block.frame.applying(self.transformMatrix(geometry: geometry, image: self.uiimage!)))
                    .stroke(Color.purple, lineWidth: 2.0)
            }
        }
    )


Instead of passing x, y, width, and height separately, I apply the return value of the transformMatrix function to the frame that is passed to the path function.

My transformMatrix function is:

private func transformMatrix(geometry: GeometryProxy, image: UIImage) -> CGAffineTransform {
    let imageViewWidth = geometry.size.width
    let imageViewHeight = geometry.size.height
    let imageWidth = image.size.width
    let imageHeight = image.size.height

    let imageViewAspectRatio = imageViewWidth / imageViewHeight
    let imageAspectRatio = imageWidth / imageHeight
    let scale = (imageViewAspectRatio > imageAspectRatio) ?
        imageViewHeight / imageHeight :
        imageViewWidth / imageWidth

    // The image view's `contentMode` is `scaleAspectFit`, which scales the image
    // to fit the image view while maintaining the aspect ratio. Multiply by
    // `scale` to get the image's displayed size.
    let scaledImageWidth = imageWidth * scale
    let scaledImageHeight = imageHeight * scale
    let xValue = (imageViewWidth - scaledImageWidth) / CGFloat(2.0)
    let yValue = (imageViewHeight - scaledImageHeight) / CGFloat(2.0)

    var transform = CGAffineTransform.identity.translatedBy(x: xValue, y: yValue)
    transform = transform.scaledBy(x: scale, y: scale)
    return transform
}
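To see what this transform does with concrete values, here is a small worked sketch; the image and view sizes are illustrative assumptions, not taken from the answer:

import CoreGraphics

// Illustrative numbers only: a 3000x4000 image displayed aspect-fit in a
// 300x500 overlay. scale = min(300/3000, 500/4000) = 0.1, so the displayed
// image is 300x400, centered with a 50pt vertical letterbox offset.
let transform = CGAffineTransform.identity
    .translatedBy(x: 0, y: 50)   // (300 - 300)/2 = 0, (500 - 400)/2 = 50
    .scaledBy(x: 0.1, y: 0.1)

// An ML Kit element frame in image coordinates...
let elementFrame = CGRect(x: 600, y: 1000, width: 900, height: 200)

// ...lands here in view coordinates, ready to be drawn:
let viewFrame = elementFrame.applying(transform)
print(viewFrame)   // (60.0, 150.0, 90.0, 20.0)

Note that the order matters: because scaledBy prepends the scale to the transform, the frame is scaled into view points first and only then shifted by the letterbox offset, which is exactly what aspect-fit requires.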

The output looks like this:

(screenshot of the code running in the simulator)



If I try their approach, I get the same result as with my modified version: the rectangles end up far too small (roughly 1/6 of the size they should be). Besides, they are using UIKit rather than SwiftUI... - Mystic Muffin
I just edited my question so you can see which modifications I made. - Mystic Muffin
