Projecting an ARKit face-tracking 3D mesh to 2D image coordinates

I am using ARKit to collect the 3D vertices of the face mesh. I have already read: Mapping image onto 3D face mesh and Tracking and Visualizing Faces.
I have the following struct:
struct CaptureData {
    var vertices: [SIMD3<Float>]
    var verticesformatted: String {
        let verticesDescribed = vertices.map({ "\($0.x):\($0.y):\($0.z)" }).joined(separator: "~")
        return "<\(verticesDescribed)>"
    }
}
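
For illustration, here is what a single captured frame serializes to with the struct above (a tiny usage sketch; the sample values are made up):

// Given the CaptureData struct above, one frame with a single vertex:
let sampleFrame = CaptureData(vertices: [SIMD3<Float>(0.01, -0.02, 0.05)])
print(sampleFrame.verticesformatted) // prints "<0.01:-0.02:0.05>"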

I have a Start button to capture the vertices:

@IBAction private func startPressed() {
    captureData = [] // clear previously captured data
    currentCaptureFrame = 0 // reset the capture frame counter
    fpsTimer = Timer.scheduledTimer(withTimeInterval: 1 / fps, repeats: true) { [weak self] _ in
        self?.recordData() // capture self weakly so the timer doesn't retain the view controller
    }
}

private var fpsTimer = Timer()
private var captureData: [CaptureData] = []
private var currentCaptureFrame = 0

I also need a Stop button that stops the capture and saves the data:

@IBAction private func stopPressed() {
    fpsTimer.invalidate() // stop the capture timer
    do {
        let capturedData = captureData.map { $0.verticesformatted }.joined(separator: "")
        let dir = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).last!
        let url = dir.appendingPathComponent("facedata.txt")
        try capturedData.appendLineToURL(fileURL: url) // appendLineToURL is a custom String extension (not shown)
    } catch {
        print("Could not write to file")
    }
}

The function that records the data:

private func recordData() {
    guard let data = getFrameData() else { return }
    captureData.append(data)
    currentCaptureFrame += 1
}

The function that gets the frame data:

private func getFrameData() -> CaptureData? {
    // Avoid force-unwrapping currentFrame, and find the face anchor safely
    // instead of assuming it is the first anchor in the array.
    guard let arFrame = sceneView?.session.currentFrame,
          let anchor = arFrame.anchors.compactMap({ $0 as? ARFaceAnchor }).first
    else { return nil }
    return CaptureData(vertices: anchor.geometry.vertices)
}

The ARSCNViewDelegate extension:

extension ViewController: ARSCNViewDelegate {
    
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let faceAnchor = anchor as? ARFaceAnchor else { return }
        currentFaceAnchor = faceAnchor
        if node.childNodes.isEmpty, let contentNode = selectedContentController.renderer(renderer, nodeFor: faceAnchor) {
            node.addChildNode(contentNode)
        }
        selectedContentController.session = sceneView?.session
        selectedContentController.sceneView = sceneView
    }
    
    /// - Tag: ARFaceGeometryUpdate
    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard anchor == currentFaceAnchor,
            let contentNode = selectedContentController.contentNode,
            contentNode.parent == node
            else { return }
        selectedContentController.session = sceneView?.session
        selectedContentController.sceneView = sceneView
        selectedContentController.renderer(renderer, didUpdate: contentNode, for: anchor)
    }
}

I am trying to use the example shader code from Tracking and Visualizing Faces:


// Transform the vertex to the camera coordinate system.
float4 vertexCamera = scn_node.modelViewTransform * _geometry.position;

// Camera projection and perspective divide to get normalized viewport coordinates (clip space).
float4 vertexClipSpace = scn_frame.projectionTransform * vertexCamera;
vertexClipSpace /= vertexClipSpace.w;

// XY in clip space is [-1,1]x[-1,1], so adjust to UV texture coordinates: [0,1]x[0,1].
// Image coordinates are Y-flipped (upper-left origin).
float4 vertexImageSpace = float4(vertexClipSpace.xy * 0.5 + 0.5, 0.0, 1.0);
vertexImageSpace.y = 1.0 - vertexImageSpace.y;

// Apply ARKit's display transform (device orientation * front-facing camera flip).
float4 transformedVertex = displayTransform * vertexImageSpace;

// Output as texture coordinates for use in later rendering stages.
_geometry.texcoords[0] = transformedVertex.xy;
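
That snippet runs inside a shader modifier. If you want to do the same thing on the CPU for vertices you have already collected, one option (a sketch, not from the Apple sample) is ARCamera's projectPoint(_:orientation:viewportSize:), which applies the camera projection, perspective divide, and orientation-dependent display transform in one call. The portrait orientation and the viewport size parameter here are assumptions:

import ARKit

// A minimal sketch: convert one face-mesh vertex (in face-anchor local space)
// to 2D view coordinates, assuming a portrait viewport of the given size.
func projectVertex(_ vertex: SIMD3<Float>,
                   faceAnchor: ARFaceAnchor,
                   frame: ARFrame,
                   viewportSize: CGSize) -> CGPoint {
    // Move the vertex from the anchor's local space into world space.
    let local = SIMD4<Float>(vertex.x, vertex.y, vertex.z, 1)
    let world = faceAnchor.transform * local
    // ARCamera.projectPoint performs the projection and display transform
    // that the shader code above does by hand.
    return frame.camera.projectPoint(SIMD3<Float>(world.x, world.y, world.z),
                                     orientation: .portrait,
                                     viewportSize: viewportSize)
}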

I have also read about projectPoint (but I'm not sure which approach is more applicable):

func projectPoint(_ point: SCNVector3) -> SCNVector3

My question is: how can I use the example code above to convert the collected 3D face-mesh vertices into 2D image coordinates?

I would like to get the 3D mesh vertices together with their corresponding 2D coordinates.

Currently, I can capture the face mesh points like this: <mesh_x:mesh_y:mesh_z:...>

I want to convert my mesh points to image coordinates and store them together, like this:

Desired result: <mesh_x:mesh_y:mesh_z:img_x:img_y...>

Any suggestions? Thanks in advance!

1 Answer

Maybe you can use the SCNSceneRenderer projectPoint function.
extension ARFaceAnchor {
    // Stores a 3D vertex together with its 2D projection.
    struct VerticesAndProjection {
        var vertex: SIMD3<Float>
        var projected: CGPoint
    }

    // Returns every vertex of the face geometry paired with its 2D projection in the given view.
    func verticeAndProjection(to view: ARSCNView) -> [VerticesAndProjection] {

        let points = geometry.vertices.compactMap({ (vertex) -> VerticesAndProjection? in

            let col = SIMD4<Float>(SCNVector4())
            let pos = SIMD4<Float>(SCNVector4(vertex.x, vertex.y, vertex.z, 1))

            // Build a matrix whose translation column is the vertex, so that
            // multiplying by the anchor's transform yields the vertex in world space.
            let pworld = transform * simd_float4x4(col, col, col, pos)

            // Project the world-space position into the view's 2D coordinate space.
            let vect = view.projectPoint(SCNVector3(pworld.position.x, pworld.position.y, pworld.position.z))

            let p = CGPoint(x: CGFloat(vect.x), y: CGFloat(vect.y))
            return VerticesAndProjection(vertex: vertex, projected: p)
        })

        return points
    }
}

Here is a convenience method to get the position:
extension matrix_float4x4 {
    
    /// Get the position of the transform matrix.
    public var position: SCNVector3 {
        get{
            return SCNVector3(self[3][0], self[3][1], self[3][2])
        }
    }
}
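
With this extension in place, pworld.position in the projection code above simply reads the translation column of the world-space matrix. For example (a one-line usage sketch):

let anchorWorldPosition: SCNVector3 = faceAnchor.transform.position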

If you want to check that the projection is correct, add a debug subview to the ARSCNView instance and then use a couple of other extensions to draw the 2D points on that view, for example:
extension UIView{
    
    private struct drawCircleProperty{
        static let circleFillColor = UIColor.green
        static let circleStrokeColor = UIColor.black
        static let circleRadius: CGFloat = 3.0
    }
    
    func drawCircle(point: CGPoint) {
    
        let circlePath = UIBezierPath(arcCenter: point, radius: drawCircleProperty.circleRadius, startAngle: CGFloat(0), endAngle: CGFloat(Double.pi * 2.0), clockwise: true)
        let shapeLayer = CAShapeLayer()
        shapeLayer.path = circlePath.cgPath
        shapeLayer.fillColor = drawCircleProperty.circleFillColor.cgColor
        shapeLayer.strokeColor = drawCircleProperty.circleStrokeColor.cgColor
        
        self.layer.addSublayer(shapeLayer)
    }
    
    func drawCircles(points: [CGPoint]){
        
        self.clearLayers()
        
        for point in points{
            self.drawCircle(point: point)
        }
    }
    
    func clearLayers(){
        if let subLayers = self.layer.sublayers {
            for subLayer in subLayers {
                subLayer.removeFromSuperlayer()
            }
        }
    }
}

You can then compute the projection and draw the points like this:
let points: [ARFaceAnchor.VerticesAndProjection] = faceAnchor.verticeAndProjection(to: sceneView)

// Keep only the projected points...
let projected = points.map { $0.projected }
// ...and draw them.
self.debugView?.drawCircles(points: projected)
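
To produce the <mesh_x:mesh_y:mesh_z:img_x:img_y...> format asked for in the question, you could serialize each vertex together with its projection. A hypothetical sketch building on verticeAndProjection (the name formatVerticesWithProjection is mine, not from the original project):

// Hypothetical serializer: pairs each 3D vertex with its 2D projection,
// following the "<x:y:z~...>" style of the question's CaptureData.
func formatVerticesWithProjection(for faceAnchor: ARFaceAnchor,
                                  in view: ARSCNView) -> String {
    let described = faceAnchor.verticeAndProjection(to: view)
        .map { "\($0.vertex.x):\($0.vertex.y):\($0.vertex.z):\($0.projected.x):\($0.projected.y)" }
        .joined(separator: "~")
    return "<\(described)>"
}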

I can see all the 3D vertices projected onto the 2D screen (the picture was generated by https://thispersondoesnotexist.com).

[Image: the face-mesh vertices projected onto the 2D view]

I added this code to Apple's demo project; it is available at https://github.com/hugoliv/projectvertices.git.


Thanks for your reply! For the line let vect = view.projectPoint(SCNVector3(pworld.position.x, pworld.position.y, pworld.position.z)), I get the error "Value of type 'float4x4' (aka 'simd_float4x4') has no member 'position'". - swiftlearneer
Ah, as I said in the post, I gave you a convenience method to extract the position from a simd_float4x4. Don't forget to copy the matrix_float4x4 extension above (public var position: SCNVector3 { SCNVector3(self[3][0], self[3][1], self[3][2]) }) somewhere into your project. - oliver
Thanks! I'm still wondering about projectPoint and the 2D coordinates. How can I verify that the 2D data is correct and valid? - swiftlearneer
What do you mean by "correct and valid"? - oliver
My question is about using projectPoint: what space is the 2D output actually in? Pixel coordinates? I can get the points, but I'm not sure whether they are valid and whether they correspond to the 3D vertices: https://stackoverflow.com/questions/67305259/projectpoint-for-getting-2d-image-coordinates-in-arkit - swiftlearneer
Yes, the 2D points returned by the projectPoint function are defined in image space. - oliver
