ARKit: Moving an object with a pan gesture (the right way)

I've been reading through the answers here on StackOverflow on how to move an object by dragging it across the screen. Some use hit tests against .featurePoints, some use the gesture translation or just track the lastPosition of the object. But honestly, none of them works the way everyone expects it to work.
Hit testing against .featurePoints just makes the object jump around, because you don't always hit a feature point when dragging your finger. I don't understand why everyone keeps suggesting this.
Solutions like this one work: Dragging SCNNode in ARKit Using SceneKit. But the object doesn't really follow your finger, and the moment you take a few steps or change the angle of the object or the camera and then try to move the object, the x and z axes are all inverted, which makes total sense.
I really want to move objects as well as the Apple Demo does, but looking at Apple's code... it is so weird and overly complicated that I can't understand it at all. Their technique for moving the object so beautifully is not even close to anything anyone proposes online. https://developer.apple.com/documentation/arkit/handling_3d_interaction_and_ui_controls_in_augmented_reality There must be an easier way to do it.

Hi, did you find any solution, or which approach did you end up using? I'm facing the same issue with the pan gesture. Any help or guidance would be appreciated. - The iCoder
4 Answers

Short answer: To get such a smooth dragging effect as in the Apple demo project, you will have to do it as in the Apple demo project (Handling 3D Interaction). On the other hand, I agree with you that the code may be confusing if you look at it for the first time. Calculating the correct movement for an object placed on a floor plane is anything but easy, considering that it must work from every position and viewing angle. It is a complex code construct that produces this superb dragging effect. Apple did a great job achieving it, but did not make it too easy for us to adopt.
Full answer: Stripping the AR Interaction template down for your needs becomes a nightmare, but it should be doable if you invest enough time. If you prefer to start from scratch, basically begin with the common Swift ARKit/SceneKit Xcode template (the one containing the spaceship).

You will also need the entire AR Interaction template project from Apple (the link is included in the SO question).

In the end you should be able to drag something called VirtualObject, which is really just a special SCNNode. In addition, you will have a nice focus square that can be useful for any purpose, like initially placing objects or adding a floor or a wall. (Some of the code for the dragging effect and the focus square is merged or linked together; doing it without the focus square would actually be more complicated.)

Let's start:

Copy the following files from the AR Interaction template into your empty project:

  • Utilities.swift (I usually name this file Extensions.swift; it contains some basic extensions that are required)
  • FocusSquare.swift
  • FocusSquareSegment.swift
  • ThresholdPanGesture.swift
  • VirtualObject.swift
  • VirtualObjectLoader.swift
  • VirtualObjectARView.swift

Add UIGestureRecognizerDelegate to the ViewController class definition, like this:

class ViewController: UIViewController, ARSCNViewDelegate, UIGestureRecognizerDelegate {

Add the following code to the definitions section of your ViewController.swift, before viewDidLoad:
// MARK: for the Focus Square
// SUPER IMPORTANT: the screenCenter must be defined this way
var focusSquare = FocusSquare()
var screenCenter: CGPoint {
    let bounds = sceneView.bounds
    return CGPoint(x: bounds.midX, y: bounds.midY)
}
var isFocusSquareEnabled : Bool = true


// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
/// The tracked screen position used to update the `trackedObject`'s position in `updateObjectToCurrentTrackingPosition()`.
private var currentTrackingPosition: CGPoint?

/**
 The object that has been most recently intereacted with.
 The `selectedObject` can be moved at any time with the tap gesture.
 */
var selectedObject: VirtualObject?

/// The object that is tracked for use by the pan and rotation gestures.
private var trackedObject: VirtualObject? {
    didSet {
        guard trackedObject != nil else { return }
        selectedObject = trackedObject
    }
}

/// Developer setting to translate assuming the detected plane extends infinitely.
let translateAssumingInfinitePlane = true
// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***

In viewDidLoad, before you set up the scene, add this code:
// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
let panGesture = ThresholdPanGesture(target: self, action: #selector(didPan(_:)))
panGesture.delegate = self

// Add gestures to the `sceneView`.
sceneView.addGestureRecognizer(panGesture)
// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***

At the very end of your ViewController.swift, add the following code:
// MARK: - Pan Gesture Block
// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
@objc
func didPan(_ gesture: ThresholdPanGesture) {
    switch gesture.state {
    case .began:
        // Check for interaction with a new object.
        if let object = objectInteracting(with: gesture, in: sceneView) {
            trackedObject = object // as? VirtualObject
        }

    case .changed where gesture.isThresholdExceeded:
        guard let object = trackedObject else { return }
        let translation = gesture.translation(in: sceneView)

        let currentPosition = currentTrackingPosition ?? CGPoint(sceneView.projectPoint(object.position))

        // The `currentTrackingPosition` is used to update the `selectedObject` in `updateObjectToCurrentTrackingPosition()`.
        currentTrackingPosition = CGPoint(x: currentPosition.x + translation.x, y: currentPosition.y + translation.y)

        gesture.setTranslation(.zero, in: sceneView)

    case .changed:
        // Ignore changes to the pan gesture until the threshold for displacement has been exceeded.
        break

    case .ended:
        // Update the object's anchor when the gesture ended.
        guard let existingTrackedObject = trackedObject else { break }
        addOrUpdateAnchor(for: existingTrackedObject)
        fallthrough

    default:
        // Clear the current position tracking.
        currentTrackingPosition = nil
        trackedObject = nil
    }
}
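The bookkeeping in the `.changed` branch is easy to miss: each event adds the gesture's incremental translation to the tracked screen point and then resets the translation to zero, so the point follows the finger exactly. A plain-Swift sketch of just that step (Point is a hypothetical stand-in for CGPoint):

```swift
// Plain-Swift sketch of the .changed branch above: accumulate the
// incremental translation, then reset it to zero.
struct Point { var x: Double; var y: Double }

var tracked = Point(x: 100, y: 100)    // plays the role of currentTrackingPosition
var translation = Point(x: 12, y: -5)  // what gesture.translation(in:) returned

tracked = Point(x: tracked.x + translation.x, y: tracked.y + translation.y)
translation = Point(x: 0, y: 0)        // like gesture.setTranslation(.zero, in:)

print(tracked) // Point(x: 112.0, y: 95.0)
```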

// - MARK: Object anchors
/// - Tag: AddOrUpdateAnchor
func addOrUpdateAnchor(for object: VirtualObject) {
    // If the anchor is not nil, remove it from the session.
    if let anchor = object.anchor {
        sceneView.session.remove(anchor: anchor)
    }

    // Create a new anchor with the object's current transform and add it to the session
    let newAnchor = ARAnchor(transform: object.simdWorldTransform)
    object.anchor = newAnchor
    sceneView.session.add(anchor: newAnchor)
}


private func objectInteracting(with gesture: UIGestureRecognizer, in view: ARSCNView) -> VirtualObject? {
    for index in 0..<gesture.numberOfTouches {
        let touchLocation = gesture.location(ofTouch: index, in: view)

        // Look for an object directly under the `touchLocation`.
        if let object = virtualObject(at: touchLocation) {
            return object
        }
    }

    // As a last resort, look for an object under the center of the touches.
    // return virtualObject(at: gesture.center(in: view))
    guard let gestureView = gesture.view else { return nil }
    return virtualObject(at: CGPoint(x: gestureView.bounds.midX, y: gestureView.bounds.midY))
}


/// Hit tests against the `sceneView` to find an object at the provided point.
func virtualObject(at point: CGPoint) -> VirtualObject? {

    // A plain bounding-box hit test also works:
    // let hitTestResults = sceneView.hitTest(point, options: [.boundingBoxOnly: true])
    let hitTestResults = sceneView.hitTest(point, options: [SCNHitTestOption.categoryBitMask: 0b00000010, SCNHitTestOption.searchMode: SCNHitTestSearchMode.any.rawValue as NSNumber])

    return hitTestResults.lazy.compactMap { result in
        return VirtualObject.existingObjectContainingNode(result.node)
        }.first
}

/**
 If a drag gesture is in progress, update the tracked object's position by
 converting the 2D touch location on screen (`currentTrackingPosition`) to
 3D world space.
 This method is called per frame (via `SCNSceneRendererDelegate` callbacks),
 allowing drag gestures to move virtual objects regardless of whether one
 drags a finger across the screen or moves the device through space.
 - Tag: updateObjectToCurrentTrackingPosition
 */
@objc
func updateObjectToCurrentTrackingPosition() {
    guard let object = trackedObject, let position = currentTrackingPosition else { return }
    translate(object, basedOn: position, infinitePlane: translateAssumingInfinitePlane, allowAnimation: true)
}

/// - Tag: DragVirtualObject
func translate(_ object: VirtualObject, basedOn screenPos: CGPoint, infinitePlane: Bool, allowAnimation: Bool) {
    guard let cameraTransform = sceneView.session.currentFrame?.camera.transform,
        let result = smartHitTest(screenPos,
                                  infinitePlane: infinitePlane,
                                  objectPosition: object.simdWorldPosition,
                                  allowedAlignments: [ARPlaneAnchor.Alignment.horizontal]) else { return }

    let planeAlignment: ARPlaneAnchor.Alignment
    if let planeAnchor = result.anchor as? ARPlaneAnchor {
        planeAlignment = planeAnchor.alignment
    } else if result.type == .estimatedHorizontalPlane {
        planeAlignment = .horizontal
    } else if result.type == .estimatedVerticalPlane {
        planeAlignment = .vertical
    } else {
        return
    }

    /*
     Plane hit test results are generally smooth. If we did *not* hit a plane,
     smooth the movement to prevent large jumps.
     */
    let transform = result.worldTransform
    let isOnPlane = result.anchor is ARPlaneAnchor
    object.setTransform(transform,
                        relativeTo: cameraTransform,
                        smoothMovement: !isOnPlane,
                        alignment: planeAlignment,
                        allowAnimation: allowAnimation)
}
// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***

Add some focus square code:

// MARK: - Focus Square (code by Apple, some by me)
func updateFocusSquare(isObjectVisible: Bool) {
    if isObjectVisible {
        focusSquare.hide()
    } else {
        focusSquare.unhide()
    }

    // Perform hit testing only when ARKit tracking is in a good state.
    if let camera = sceneView.session.currentFrame?.camera, case .normal = camera.trackingState,
        let result = smartHitTest(screenCenter) {
        DispatchQueue.main.async {
            self.sceneView.scene.rootNode.addChildNode(self.focusSquare)
            self.focusSquare.state = .detecting(hitTestResult: result, camera: camera)
        }
    } else {
        DispatchQueue.main.async {
            self.focusSquare.state = .initializing
            self.sceneView.pointOfView?.addChildNode(self.focusSquare)
        }
    }
}

And add some control functions:

func hideFocusSquare()  { DispatchQueue.main.async { self.updateFocusSquare(isObjectVisible: true) } }  // to hide the focus square
func showFocusSquare()  { DispatchQueue.main.async { self.updateFocusSquare(isObjectVisible: false) } } // to show the focus square

Copy the entire smartHitTest function from VirtualObjectARView.swift into ViewController.swift (so it exists twice):
func smartHitTest(_ point: CGPoint,
                  infinitePlane: Bool = false,
                  objectPosition: float3? = nil,
                  allowedAlignments: [ARPlaneAnchor.Alignment] = [.horizontal, .vertical]) -> ARHitTestResult? {

    // Perform the hit test.
    let results = sceneView.hitTest(point, types: [.existingPlaneUsingGeometry, .estimatedVerticalPlane, .estimatedHorizontalPlane])

    // 1. Check for a result on an existing plane using geometry.
    if let existingPlaneUsingGeometryResult = results.first(where: { $0.type == .existingPlaneUsingGeometry }),
        let planeAnchor = existingPlaneUsingGeometryResult.anchor as? ARPlaneAnchor, allowedAlignments.contains(planeAnchor.alignment) {
        return existingPlaneUsingGeometryResult
    }

    if infinitePlane {

        // 2. Check for a result on an existing plane, assuming its dimensions are infinite.
        //    Loop through all hits against infinite existing planes and either return the
        //    nearest one (vertical planes) or return the nearest one which is within 5 cm
        //    of the object's position.
        let infinitePlaneResults = sceneView.hitTest(point, types: .existingPlane)

        for infinitePlaneResult in infinitePlaneResults {
            if let planeAnchor = infinitePlaneResult.anchor as? ARPlaneAnchor, allowedAlignments.contains(planeAnchor.alignment) {
                if planeAnchor.alignment == .vertical {
                    // Return the first vertical plane hit test result.
                    return infinitePlaneResult
                } else {
                    // For horizontal planes we only want to return a hit test result
                    // if it is close to the current object's position.
                    if let objectY = objectPosition?.y {
                        let planeY = infinitePlaneResult.worldTransform.translation.y
                        if objectY > planeY - 0.05 && objectY < planeY + 0.05 {
                            return infinitePlaneResult
                        }
                    } else {
                        return infinitePlaneResult
                    }
                }
            }
        }
    }

    // 3. As a final fallback, check for a result on estimated planes.
    let vResult = results.first(where: { $0.type == .estimatedVerticalPlane })
    let hResult = results.first(where: { $0.type == .estimatedHorizontalPlane })
    switch (allowedAlignments.contains(.horizontal), allowedAlignments.contains(.vertical)) {
    case (true, false):
        return hResult
    case (false, true):
        // Allow fallback to horizontal because we assume that objects meant for vertical placement
        // (like a picture) can always be placed on a horizontal surface, too.
        return vResult ?? hResult
    case (true, true):
        if hResult != nil && vResult != nil {
            return hResult!.distance < vResult!.distance ? hResult! : vResult!
        } else {
            return hResult ?? vResult
        }
    default:
        return nil
    }
}

You may see some errors regarding hitTest in the copied function. Just correct them like this:

hitTest... // which gives an Error
sceneView.hitTest... // this should correct it

Implement the renderer's updateAtTime function and add this code:

func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    // For the Focus Square
    if isFocusSquareEnabled { showFocusSquare() }

    self.updateObjectToCurrentTrackingPosition() // *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
}


At this point you may still see around a dozen errors and warnings in the imported files. This can happen when you use Swift 5 and have some Swift 4 files. Just let Xcode correct the errors. (It's all about renaming some code statements; Xcode knows best.)

Go to VirtualObject.swift and search for this code block:

if smoothMovement {
    let hitTestResultDistance = simd_length(positionOffsetFromCamera)

    // Add the latest position and keep up to 10 recent distances to smooth with.
    recentVirtualObjectDistances.append(hitTestResultDistance)
    recentVirtualObjectDistances = Array(recentVirtualObjectDistances.suffix(10))

    let averageDistance = recentVirtualObjectDistances.average!
    let averagedDistancePosition = simd_normalize(positionOffsetFromCamera) * averageDistance
    simdPosition = cameraWorldPosition + averagedDistancePosition
} else {
    simdPosition = cameraWorldPosition + positionOffsetFromCamera
}

Comment out or replace that whole block with this single line of code:

simdPosition = cameraWorldPosition + positionOffsetFromCamera
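For reference, the block being replaced implemented a short moving average over the last 10 camera-to-object distances to damp sudden jumps; replacing it with the single line above makes the object follow the hit-test result directly. A plain-Swift sketch (hypothetical names) of what the removed block did:

```swift
// Plain-Swift sketch of the removed smoothing: keep the last 10
// distances and return their average.
var recentDistances: [Float] = []

func smoothed(_ newDistance: Float) -> Float {
    recentDistances.append(newDistance)
    recentDistances = Array(recentDistances.suffix(10))
    return recentDistances.reduce(0, +) / Float(recentDistances.count)
}

print(smoothed(1.0)) // 1.0
print(smoothed(3.0)) // 2.0 (average of 1.0 and 3.0)
```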

At this point you should be able to compile the project and run it on a device. You should see the spaceship and a yellow focus square, which should already work.

To start placing an object that you can drag, you need a function to create a so-called VirtualObject, as I said in the beginning.

Use this example function for testing (add it somewhere in your view controller):

override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {

    if focusSquare.state != .initializing {
        let position = SCNVector3(focusSquare.lastPosition!)

        // *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
        let testObject = VirtualObject() // give it some name when you don't have anything to load
        testObject.geometry = SCNCone(topRadius: 0.0, bottomRadius: 0.2, height: 0.5)
        testObject.geometry?.firstMaterial?.diffuse.contents = UIColor.red
        testObject.categoryBitMask = 0b00000010
        testObject.name = "test"
        testObject.castsShadow = true
        testObject.position = position

        sceneView.scene.rootNode.addChildNode(testObject)
    }
}

Note: Everything you want to drag on a plane must be set up using VirtualObject() instead of SCNNode(). Everything else about a VirtualObject is the same as an SCNNode.
(You can also add some common SCNNode extensions, such as one to load a scene by its name, which is very useful when referencing imported models.)
Have fun!


A bit late to the party, but I know I had some trouble solving this as well. Eventually, I figured out a way to do it by performing two separate hit tests whenever my gesture recognizer is called.

First, I perform a hit test for my 3D object, to detect whether I'm currently pressing on an object or not (you would get results for pressing featurePoints, planes, etc. if you don't specify any options). I do that by using the SCNHitTestOption.categoryBitMask value. Keep in mind that for the hit test to work, you must have assigned the correct .categoryBitMask value to the object node and all its child nodes beforehand. I declare an enum I can use for that:

enum BodyType : Int {
    case ObjectModel = 2;
}
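The reason a rawValue like 2 works is that SceneKit compares a node's categoryBitMask against the mask given to the hit test with a bitwise AND. A plain-Swift sketch with no SceneKit (FakeNode and matches(_:queryMask:) are hypothetical names):

```swift
// Plain-Swift sketch: a node passes the category filter when its mask
// shares at least one bit with the mask passed to the hit test.
enum BodyType: Int {
    case ObjectModel = 2
}

struct FakeNode {
    var name: String
    var categoryBitMask: Int = 1   // SceneKit's default mask value
}

func matches(_ node: FakeNode, queryMask: Int) -> Bool {
    // The same bitwise test SceneKit applies when filtering hit-test results.
    return node.categoryBitMask & queryMask != 0
}

let untagged = FakeNode(name: "floor")   // keeps the default mask of 1
let tagged = FakeNode(name: "vase", categoryBitMask: BodyType.ObjectModel.rawValue)

print(matches(untagged, queryMask: BodyType.ObjectModel.rawValue)) // false
print(matches(tagged, queryMask: BodyType.ObjectModel.rawValue))   // true
```

This is why every child node needs the mask set, as discussed below: an untagged child keeps the default mask of 1 and never matches a query for 2.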

As became apparent in the answer to the question about .categoryBitMask values that I posted here, it matters which values you assign to the mask.

Here is the code I use together with a UILongPressGestureRecognizer to select the object I'm currently pressing on:

guard let recognizerView = recognizer.view as? ARSCNView else { return }

let touch = recognizer.location(in: recognizerView)

let hitTestResult = self.sceneView.hitTest(touch, options: [SCNHitTestOption.categoryBitMask: BodyType.ObjectModel.rawValue])
guard let modelNodeHit = hitTestResult.first?.node else { return }

Next, I perform a second hit test to find the plane I'm pressing on. Use the type .existingPlaneUsingExtent if you don't want to move the object beyond the edge of a plane, or .existingPlane if you want to move the object indefinitely along the detected plane's surface.

 var planeHit : ARHitTestResult?

 if recognizer.state == .changed {

     let hitTestPlane = self.sceneView.hitTest(touch, types: .existingPlane)
     guard let firstPlaneHit = hitTestPlane.first else { return }
     planeHit = firstPlaneHit
     modelNodeHit.position = SCNVector3(firstPlaneHit.worldTransform.columns.3.x,
                                        modelNodeHit.position.y,
                                        firstPlaneHit.worldTransform.columns.3.z)

 } else if recognizer.state == .ended || recognizer.state == .cancelled || recognizer.state == .failed {

     // Guard against the gesture ending before any .changed update has set planeHit.
     guard let lastPlaneHit = planeHit else { return }
     modelNodeHit.position = SCNVector3(lastPlaneHit.worldTransform.columns.3.x,
                                        modelNodeHit.position.y,
                                        lastPlaneHit.worldTransform.columns.3.z)
 }

I made a GitHub repo while experimenting with ARAnchors. You can check it out if you want to see my method in practice, but I did not make it with the intention of anyone else using it, so it's quite unfinished. Also, the development branch should support some functionality for objects with more childNodes.
EDIT: ==================================
To clarify: if you want to use a .scn object instead of a regular geometry, you need to iterate through all the object's child nodes when creating it and set each child's bit mask, like this:
 let objectModelScene = SCNScene(named:
        "art.scnassets/object/object.scn")!
 let objectNode =  objectModelScene.rootNode.childNode(
        withName: "theNameOfTheParentNodeOfTheObject", recursively: true)
 objectNode.categoryBitMask = BodyType.ObjectModel.rawValue
 objectNode.enumerateChildNodes { (node, _) in
        node.categoryBitMask = BodyType.ObjectModel.rawValue
    }

Then, in the gesture recognizer, after you have obtained a hitTestResult,
let hitTestResult = self.sceneView.hitTest(touch, options: [SCNHitTestOption.categoryBitMask: BodyType.ObjectModel.rawValue])

you need to find the parent node, since otherwise you might move the individual child node you just pressed. Do that by recursively searching upward through the node tree of the node you just found:

guard let objectNode = getParentNodeOf(hitTestResult.first?.node) else { return }

Declare the getParentNodeOf method as follows:

func getParentNodeOf(_ nodeFound: SCNNode?) -> SCNNode? { 
    if let node = nodeFound {
        if node.name == "theNameOfTheParentNodeOfTheObject" {
            return node
        } else if let parent = node.parent {
            return getParentNodeOf(parent)
        }
    }
    return nil
}
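The recursion is easy to verify outside SceneKit; here is the same lookup on a minimal, hypothetical Node class standing in for SCNNode:

```swift
// Minimal stand-in for SCNNode: just a name and a weak parent link.
final class Node {
    let name: String?
    weak var parent: Node?
    init(name: String?) { self.name = name }
}

// Same recursive parent lookup as in the answer above.
func getParentNodeOf(_ nodeFound: Node?) -> Node? {
    if let node = nodeFound {
        if node.name == "theNameOfTheParentNodeOfTheObject" {
            return node
        } else if let parent = node.parent {
            return getParentNodeOf(parent)
        }
    }
    return nil
}

let root = Node(name: "theNameOfTheParentNodeOfTheObject")
let leaf = Node(name: "petal")
leaf.parent = root

// The search walks from the pressed leaf up to the named parent.
print(getParentNodeOf(leaf)?.name ?? "nil") // theNameOfTheParentNodeOfTheObject
// A node outside the tree yields nil.
print(getParentNodeOf(Node(name: "orphan")) == nil) // true
```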

Then you're free to perform any operation on objectNode, since it will be the parent node of your .scn object, meaning any transform applied to it is applied to the child nodes as well.


It works fine when using 3D geometry objects, but how can I achieve this when adding a scene file (.scn) of a 3D object? I tried, but it doesn't work. If you have already tried it with .scn files, it would be helpful to me. - BSB
You should iterate through all nodes of the scn object (the parent node and all its child nodes) and set the bit mask of each node. Otherwise, when pressing the screen, you might select a node that doesn't have the correct bit mask. - A. Claesson
In the tap gesture, I add the node like this: guard let frameScene = SCNScene(named: "art.scnassets/vase/vase.scn"), let frameNode = frameScene.rootNode.childNode(withName: "vase", recursively: true) else { return nil } frameNode.categoryBitMask = BodyType.ObjectModel.rawValue return frameNode - BSB
Yes, you are only setting the bit mask of the parent node. Try adding the following: frameNode.enumerateChildNodes { (node, _) in node.categoryBitMask = BodyType.ObjectModel.rawValue
}. Also, when you obtain the hitTestResult for modelNodeHit, you need to iterate recursively up to the parent node. If you look at the development branch of the GitHub repo I posted, you should find the method you need.
- A. Claesson
By adding this, the vase moves, but the nodes for the flowers and leaves get scattered. Some move, but incorrectly, and some don't move at all. - BSB


I added some of my own ideas to Claesson's answer. I noticed some lag when dragging the node: the node could not keep up with the finger's movement.

To make the node move more smoothly, I added a variable that keeps track of the node currently being moved, and set its position to the touch location.

    var selectedNode: SCNNode?

Also, I set a .categoryBitMask value to specify the category of nodes I want to edit (move). The default bit mask value is 1.

The reason for setting the category bit mask is to distinguish between different kinds of nodes and to specify those you wish to select (to move, etc.).

    enum CategoryBitMask: Int {
        case categoryToSelect = 2        // 010
        case otherCategoryToSelect = 4   // 100
        // you can add more bit masks below . . .
    }
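Since each category occupies its own bit, you can also build a mask that matches several categories at once by OR-ing the raw values together. A small, self-contained plain-Swift sketch:

```swift
// Plain-Swift sketch: OR the raw values to build a combined mask, then
// test a node's mask against it with a bitwise AND.
enum CategoryBitMask: Int {
    case categoryToSelect = 2        // 010
    case otherCategoryToSelect = 4   // 100
}

let combinedMask = CategoryBitMask.categoryToSelect.rawValue |
                   CategoryBitMask.otherCategoryToSelect.rawValue
print(combinedMask)          // 6 (0b110)

// A node tagged with either category passes the bitwise-AND test:
print(2 & combinedMask != 0) // true
print(4 & combinedMask != 0) // true
// The default categoryBitMask (1) does not:
print(1 & combinedMask != 0) // false
```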

Then I added a UILongPressGestureRecognizer in viewDidLoad():

        let longPressRecognizer = UILongPressGestureRecognizer(target: self, action: #selector(longPressed))
        self.sceneView.addGestureRecognizer(longPressRecognizer)

Below is the UILongPressGestureRecognizer handler I use to detect the long press and start dragging the node.
First, get the touch location from the recognizerView:
    @objc func longPressed(recognizer: UILongPressGestureRecognizer) {

       guard let recognizerView = recognizer.view as? ARSCNView else { return }
       let touch = recognizer.location(in: recognizerView)


The following code runs once, when the long press is detected.
Here we perform a hitTest to select the node that was touched. Note that we specify the .categoryBitMask option to select only nodes of the category CategoryBitMask.categoryToSelect:
       // Runs once when long press is detected.
       if recognizer.state == .began {
            // perform a hitTest
            let hitTestResult = self.sceneView.hitTest(touch, options: [SCNHitTestOption.categoryBitMask: CategoryBitMask.categoryToSelect.rawValue])

            guard let hitNode = hitTestResult.first?.node else { return }

            // Set hitNode as selected
            self.selectedNode = hitNode

The following code runs periodically until the user lifts their finger. Here we perform another hitTest to obtain the plane you want the node to move along:
        // Runs periodically after .began
        } else if recognizer.state == .changed {
            // make sure a node has been selected from .began
            guard let hitNode = self.selectedNode else { return }

            // perform a hitTest to obtain the plane 
            let hitTestPlane = self.sceneView.hitTest(touch, types: .existingPlane)
            guard let hitPlane = hitTestPlane.first else { return }
            hitNode.position = SCNVector3(hitPlane.worldTransform.columns.3.x,
                                           hitNode.position.y,
                                           hitPlane.worldTransform.columns.3.z)

Make sure to deselect the node when the finger leaves the screen:

        // Runs when finger is removed from screen. Only once.
        } else if recognizer.state == .ended || recognizer.state == .cancelled || recognizer.state == .failed{

            // Undo selection
            self.selectedNode = nil
        }
    }


As @ZAY mentioned, Apple made it really confusing, and on top of that they used ARRaycastQuery, which works only on iOS 13 and above. So I came up with a solution that computes the translation on a plane in world coordinates by using the current camera orientation.

First, using this code snippet, we get the current direction the user is facing, from the camera's orientation quaternion:

private func getOrientationYRadians()-> Float {
    guard let cameraNode = arSceneView.pointOfView else { return 0 }
    
    //Get camera orientation expressed as a quaternion
    let q = cameraNode.orientation
    
    //Calculate rotation around y-axis (heading) from quaternion and convert angle so that
    //0 is along -z-axis (forward in SceneKit) and positive angle is clockwise rotation.
    let alpha = Float.pi - atan2f( (2*q.y*q.w)-(2*q.x*q.z), 1-(2*pow(q.y,2))-(2*pow(q.z,2)) )

    // here I convert the angle to be 0 when the user is facing +z-axis 
    return alpha <= Float.pi ? abs(alpha - (Float.pi)) : (3*Float.pi) - alpha
}
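You can sanity-check this formula outside of SceneKit. Below is the same math on a minimal quaternion type (Quat is a hypothetical stand-in for SCNQuaternion, which stores x, y, z, w the same way):

```swift
import Foundation

// Plain-Swift check of the yaw formula above; no SceneKit needed.
struct Quat { var x: Float; var y: Float; var z: Float; var w: Float }

func yawRadians(_ q: Quat) -> Float {
    let alpha = Float.pi - atan2f((2 * q.y * q.w) - (2 * q.x * q.z),
                                  1 - (2 * q.y * q.y) - (2 * q.z * q.z))
    return alpha <= Float.pi ? abs(alpha - Float.pi) : (3 * Float.pi) - alpha
}

// The identity orientation maps to 0.
print(yawRadians(Quat(x: 0, y: 0, z: 0, w: 1))) // 0.0

// A 90° rotation about the y-axis: q = (0, sin 45°, 0, cos 45°).
let halfSqrt2 = Float(0.5).squareRoot()
print(yawRadians(Quat(x: 0, y: halfSqrt2, z: 0, w: halfSqrt2))) // ≈ π/2
```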

Handling the pan gesture:

private var lastPanLocation2d: CGPoint!
@objc func handlePan(panGesture: UIPanGestureRecognizer) {
    let state = panGesture.state
    
    guard state != .failed && state != .cancelled else {
        return
    }
    
    let touchLocation = panGesture.location(in: self)
    
    if (state == .began) {
        lastPanLocation2d = touchLocation
    }
    
    // 200 here is a random value that controls the smoothness of the dragging effect
    let deltaX = Float(touchLocation.x - lastPanLocation2d!.x)/200
    let deltaY = Float(touchLocation.y - lastPanLocation2d!.y)/200
    
    let currentYOrientationRadians = getOrientationYRadians()
    // convert delta in the 2D dimensions to the 3d world space using the current rotation
    let deltaX3D = (deltaY*sin(currentYOrientationRadians))+(deltaX*cos(currentYOrientationRadians))
    let deltaY3D = (deltaY*cos(currentYOrientationRadians))+(-deltaX*sin(currentYOrientationRadians))
    
    // assuming that the node is currently positioned on a plane so the y-translation will be zero
    let translation = SCNVector3Make(deltaX3D, 0.0, deltaY3D)
    nodeToDrag.localTranslate(by: translation)
    
    lastPanLocation2d = touchLocation
}
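The 2D-to-3D conversion in the middle of handlePan is a plain rotation of the screen delta by the camera's yaw. Isolated as a pure function (rotateDelta is a hypothetical name), its behavior is easy to verify:

```swift
import Foundation

// Pure-function sketch of the screen-delta-to-world rotation used above.
func rotateDelta(deltaX: Float, deltaY: Float, yaw: Float) -> (x: Float, z: Float) {
    let x3D = (deltaY * sinf(yaw)) + (deltaX * cosf(yaw))
    let z3D = (deltaY * cosf(yaw)) - (deltaX * sinf(yaw))
    return (x3D, z3D)
}

// With yaw 0, screen x maps straight to world x and screen y to world z:
let d0 = rotateDelta(deltaX: 1, deltaY: 2, yaw: 0)
print(d0) // (x: 1.0, z: 2.0)

// After a 90° turn the axes swap (up to floating-point fuzz):
let d90 = rotateDelta(deltaX: 1, deltaY: 2, yaw: .pi / 2)
print(abs(d90.x - 2) < 1e-4, abs(d90.z + 1) < 1e-4) // true true
```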
