iOS Swift - How to switch WebRTC from the front camera to the back camera


By default, WebRTC video uses the front camera, and that works fine. However, I need to switch it to the back camera, and I haven't found any code that does this. Which part should I edit? The local view (localView), the local video track (localVideoTrack), or the capturer?


Are you using the webrtc library or the openwebrtc library? - Durai Amuthan.H
@DuraiAmuthan.H I'm using the libjingle_peerconnection pod. - mrnobody
Try removing the existing AVCaptureDeviceInput from the AVCaptureSession, then adding a new AVCaptureDeviceInput with AVCaptureDevicePositionBack to the AVCaptureSession. - Durai Amuthan.H
Did you ever find a solution to your question? - Manoj kumar
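
Durai Amuthan.H's suggestion above can be sketched as follows. This is a minimal, hypothetical illustration, assuming `session` is the `AVCaptureSession` your WebRTC capturer uses, and using the modern `AVCaptureDevice.default(_:for:position:)` API (iOS 10+):

```swift
import AVFoundation

// Sketch: swap the camera input on an AVCaptureSession directly.
// `session` is assumed to be the capture session used by the WebRTC capturer.
func switchToBackCamera(on session: AVCaptureSession) {
    session.beginConfiguration()
    defer { session.commitConfiguration() }

    // Remove the current (front) camera input.
    if let currentInput = session.inputs.first {
        session.removeInput(currentInput)
    }

    // Add an input for the back camera.
    if let backCamera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                for: .video,
                                                position: .back),
       let backInput = try? AVCaptureDeviceInput(device: backCamera),
       session.canAddInput(backInput) {
        session.addInput(backInput)
    }
}
```

Wrapping the changes in `beginConfiguration()`/`commitConfiguration()` applies the input swap atomically, so the session doesn't briefly run with no camera attached.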
7 Answers


Swift 3.0

A peer connection can only send one `RTCVideoTrack` for video.

So, to swap between the front and back camera, you first have to remove the current video track from the peer connection. Then create a new `RTCVideoTrack` for the camera you need, and set it on the peer connection.

I used these methods:

func swapCameraToFront() {
    guard let localStream = peerConnection?.localStreams.first as? RTCMediaStream else { return }
    // Remove the current (back-camera) track before adding the new one.
    if let videoTrack = localStream.videoTracks.first as? RTCVideoTrack {
        localStream.removeVideoTrack(videoTrack)
    }
    if let localVideoTrack = createLocalVideoTrack() {
        localStream.addVideoTrack(localVideoTrack)
        delegate?.appClient(self, didReceiveLocalVideoTrack: localVideoTrack)
    }
    peerConnection?.remove(localStream)
    peerConnection?.add(localStream)
}

func swapCameraToBack() {
    guard let localStream = peerConnection?.localStreams.first as? RTCMediaStream else { return }
    // Remove the current (front-camera) track before adding the new one.
    if let videoTrack = localStream.videoTracks.first as? RTCVideoTrack {
        localStream.removeVideoTrack(videoTrack)
    }
    if let localVideoTrack = createLocalVideoTrackBackCamera() {
        localStream.addVideoTrack(localVideoTrack)
        delegate?.appClient(self, didReceiveLocalVideoTrack: localVideoTrack)
    }
    peerConnection?.remove(localStream)
    peerConnection?.add(localStream)
}

While this may answer the question, it would be better to explain the essential parts of the answer and possibly what the problem was with the OP's code. - pirho
Could you please add how to generate the video track with the back camera, i.e. the implementation of the createLocalVideoTrackBackCamera method? - Ankit
Adding a new stream to the peer connection adds some delay to your video. Just add/remove the video track on the local stream instead. - FedeH
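
FedeH's suggestion can be sketched like this; it reuses the `createLocalVideoTrackBackCamera()` helper referenced in this answer and deliberately skips the `peerConnection.remove`/`add` calls. A minimal, hypothetical sketch against the same libjingle_peerconnection API:

```swift
// Sketch: replace only the video track on the existing local stream,
// without detaching and re-attaching the stream itself.
func swapToBackCameraKeepingStream() {
    guard let localStream = peerConnection?.localStreams.first as? RTCMediaStream else { return }

    if let oldTrack = localStream.videoTracks.first as? RTCVideoTrack {
        localStream.removeVideoTrack(oldTrack)
    }
    if let newTrack = createLocalVideoTrackBackCamera() {
        localStream.addVideoTrack(newTrack)
    }
    // Note: no peerConnection.remove(...)/add(...) here — re-adding the
    // stream is what triggers the renegotiation delay FedeH mentions.
}
```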


For now I only have the answer to Ankit's comment in Objective-C. I will convert it to Swift later.

You can check the code below:

- (RTCVideoTrack *)createLocalVideoTrack {
    RTCVideoTrack *localVideoTrack = nil;
    NSString *cameraID = nil;
    for (AVCaptureDevice *captureDevice in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
        if (captureDevice.position == AVCaptureDevicePositionFront) {
            cameraID = [captureDevice localizedName];
            break;
        }
    }

    RTCVideoCapturer *capturer = [RTCVideoCapturer capturerWithDeviceName:cameraID];
    RTCMediaConstraints *mediaConstraints = [self defaultMediaStreamConstraints];
    RTCVideoSource *videoSource = [_factory videoSourceWithCapturer:capturer constraints:mediaConstraints];
    localVideoTrack = [_factory videoTrackWithID:@"ARDAMSv0" source:videoSource];

    return localVideoTrack;
}

- (RTCVideoTrack *)createLocalVideoTrackBackCamera {
    RTCVideoTrack *localVideoTrack = nil;
    NSString *cameraID = nil;
    for (AVCaptureDevice *captureDevice in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
        if (captureDevice.position == AVCaptureDevicePositionBack) {
            cameraID = [captureDevice localizedName];
            break;
        }
    }

    RTCVideoCapturer *capturer = [RTCVideoCapturer capturerWithDeviceName:cameraID];
    RTCMediaConstraints *mediaConstraints = [self defaultMediaStreamConstraints];
    RTCVideoSource *videoSource = [_factory videoSourceWithCapturer:capturer constraints:mediaConstraints];
    localVideoTrack = [_factory videoTrackWithID:@"ARDAMSv0" source:videoSource];

    return localVideoTrack;
}
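
The promised Swift conversion was never posted; the Objective-C above maps roughly to the following sketch. The Swift method names (`RTCVideoCapturer(deviceName:)`, `videoSource(with:constraints:)`, `videoTrack(withID:source:)`) are assumptions derived from the Objective-C selectors and may differ between binding versions:

```swift
import AVFoundation

// Hedged Swift sketch of createLocalVideoTrackBackCamera above, against the
// same old libjingle_peerconnection API (which keys capturers by device name).
func createLocalVideoTrackBackCamera() -> RTCVideoTrack? {
    // Find the back camera among the available video devices.
    let devices = AVCaptureDevice.devices(withMediaType: AVMediaTypeVideo) as? [AVCaptureDevice] ?? []
    guard let backCamera = devices.first(where: { $0.position == .back }) else {
        return nil
    }

    // Build a capturer, source, and track exactly as the Objective-C does.
    let capturer = RTCVideoCapturer(deviceName: backCamera.localizedName)
    let videoSource = factory.videoSource(with: capturer,
                                          constraints: defaultMediaStreamConstraints())
    return factory.videoTrack(withID: "ARDAMSv0", source: videoSource)
}
```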

This answer has severe formatting problems. Please edit and format your answer, or unfortunately I will have to flag it. - L. Guthardt

If you decide to use the official Google build, here is how:
First, you have to configure the camera before the call starts, ideally in the `didCreateLocalCapturer` method of `ARDVideoCallViewDelegate`.
- (void)startCapture:(void (^)(BOOL succeeded))completionHandler {
    AVCaptureDevicePosition position = _usingFrontCamera ? AVCaptureDevicePositionFront : AVCaptureDevicePositionBack;
    AVCaptureDevice *device = [self findDeviceForPosition:position];
    if ([device lockForConfiguration:nil]) {
        if ([device isFocusPointOfInterestSupported]) {
            [device setFocusModeLockedWithLensPosition:0.9 completionHandler:nil];
        }
        // Release the configuration lock once we're done with it.
        [device unlockForConfiguration];
    }
    AVCaptureDeviceFormat *format = [self selectFormatForDevice:device];
    if (format == nil) {
        RTCLogError(@"No valid formats for device %@", device);
        NSAssert(NO, @"");
        return;
    }
    NSInteger fps = [self selectFpsForFormat:format];
    [_capturer startCaptureWithDevice:device
                               format:format
                                  fps:fps
                    completionHandler:^(NSError *error) {
                        NSLog(@"%@", error);
                        if (error == nil) {
                            completionHandler(true);
                        }
                    }];
}

Don't forget that enabling the capture device is asynchronous; it's often best to use the completion handler to make sure everything finished as expected.

You can check it in the official WebRTC repository, or at this link: https://github.com/WebKit/webkit/blob/master/Source/ThirdParty/libwebrtc/Source/webrtc/examples/objc/AppRTCMobile/ARDCaptureController.m - GalaevAlexey


I'm not sure which Chrome version of webrtc you are using, but from v54 onward the RTCAVFoundationVideoSource class has a BOOL property named useBackCamera. You can use this property to switch between the front and back camera.


The question has nothing to do with Chrome or any browser; it's about native iOS development. - Modo Ltunzher
@ModoLtunzher By "chrome version" I actually meant the version of webrtc libjingle. Sorry for the confusion. - Harish Gupta


Swift 4.0 & 'GoogleWebRTC' : '1.1.20913'

The RTCAVFoundationVideoSource class has a property named useBackCamera that can be used to switch which camera is in use.

@interface RTCAVFoundationVideoSource : RTCVideoSource

- (instancetype)init NS_UNAVAILABLE;

/**
* Calling this function will cause frames to be scaled down to the
* requested resolution. Also, frames will be cropped to match the
* requested aspect ratio, and frames will be dropped to match the
* requested fps. The requested aspect ratio is orientation agnostic and
* will be adjusted to maintain the input orientation, so it doesn't
* matter if e.g. 1280x720 or 720x1280 is requested.
*/
- (void)adaptOutputFormatToWidth:(int)width height:(int)height fps:(int)fps;

/** Returns whether rear-facing camera is available for use. */
@property(nonatomic, readonly) BOOL canUseBackCamera;

/** Switches the camera being used (either front or back). */
@property(nonatomic, assign) BOOL useBackCamera;

/** Returns the active capture session. */
@property(nonatomic, readonly) AVCaptureSession *captureSession;

Below is the implementation for switching the camera:
var useBackCamera: Bool = false

func switchCamera() {
    useBackCamera = !useBackCamera
    self.switchCamera(useBackCamera: useBackCamera)
}

private func switchCamera(useBackCamera: Bool) -> Void {

    let localStream = peerConnection?.localStreams.first

    if let videoTrack = localStream?.videoTracks.first {
        localStream?.removeVideoTrack(videoTrack)
    }

    let localVideoTrack = createLocalVideoTrack(useBackCamera: useBackCamera)
    localStream?.addVideoTrack(localVideoTrack)

    self.delegate?.webRTCClientDidAddLocal(videoTrack: localVideoTrack)

    if let ls = localStream {
        peerConnection?.remove(ls)
        peerConnection?.add(ls)
    }
}

func createLocalVideoTrack(useBackCamera: Bool) -> RTCVideoTrack {

    let videoSource = self.factory.avFoundationVideoSource(with: self.constraints)
    videoSource.useBackCamera = useBackCamera
    let videoTrack = self.factory.videoTrack(with: videoSource, trackId: "video")
    return videoTrack
}

This no longer works. RTCAVFoundationVideoSource has been deprecated: https://codereview.chromium.org/2987143003/ - FedeH

In current versions of WebRTC, RTCAVFoundationVideoSource has been deprecated and replaced by the combination of a generic RTCVideoSource with an RTCVideoCapturer implementation.
To switch the camera, I do the following:
- (void)switchCameraToPosition:(AVCaptureDevicePosition)position completionHandler:(void (^)(void))completionHandler {
    if (self.cameraPosition != position) {
      RTCMediaStream *localStream = self.peerConnection.localStreams.firstObject;
      [localStream removeVideoTrack:self.localVideoTrack];
      //[self.peerConnection removeStream:localStream];
      self.localVideoTrack = [self createVideoTrack];

      [self startCaptureLocalVideoWithPosition:position completionHandler:^{
        [localStream addVideoTrack:self.localVideoTrack];
        //[self.peerConnection addStream:localStream];
        if (completionHandler) {
            completionHandler();
        }
      }];

      self.cameraPosition = position;
    }
}

Note the commented-out lines: if you start adding/removing the stream on the peer connection, it introduces a delay in the video connection.

I am using GoogleWebRTC-1.1.25102.
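
The `startCaptureLocalVideoWithPosition:completionHandler:` helper called above is not shown in the answer. A hypothetical Swift sketch of what it might look like, assuming `capturer` is an `RTCCameraVideoCapturer` property and using the `RTCCameraVideoCapturer` class API from GoogleWebRTC:

```swift
import CoreMedia
import AVFoundation

// Sketch: start capturing from the camera at the requested position.
func startCaptureLocalVideo(position: AVCaptureDevice.Position,
                            completionHandler: (() -> Void)?) {
    // Pick the camera matching the requested position.
    guard let device = RTCCameraVideoCapturer.captureDevices()
        .first(where: { $0.position == position }) else { return }

    // Choose a supported format (here: the widest resolution available).
    guard let format = RTCCameraVideoCapturer.supportedFormats(for: device)
        .max(by: {
            CMVideoFormatDescriptionGetDimensions($0.formatDescription).width <
            CMVideoFormatDescriptionGetDimensions($1.formatDescription).width
        }) else { return }

    // Use the highest frame rate the chosen format supports.
    let fps = format.videoSupportedFrameRateRanges
        .map { Int($0.maxFrameRate) }.max() ?? 30

    capturer.startCapture(with: device, format: format, fps: fps) { _ in
        completionHandler?()
    }
}
```

The format and fps selection here is one reasonable policy, not the only one; AppRTCMobile's ARDCaptureController (linked in an earlier answer) implements a similar selection.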


For anyone still wondering about this topic: apart from RTCAVFoundationVideoSource being deprecated, the other solutions above do the job.
Here is how I solved it:
try? targetDevice?.lockForConfiguration()
...
let dataOutput = AVCaptureVideoDataOutput()
dataOutput.videoSettings = [(kCVPixelBufferWidthKey as NSString) : NSNumber(value: 480 as UInt32),
                            (kCVPixelBufferHeightKey as NSString) : NSNumber(value: 640 as UInt32)] as [String : Any]
if capturer.captureSession.canAddOutput(dataOutput) {
    if capturer.captureSession.outputs.count > 1 {
        capturer.captureSession.removeOutput(capturer.captureSession.outputs.last!)
    }
    capturer.captureSession.addOutput(dataOutput)
}
...
capturer.startCapture(with: targetDevice!,
                            format: targetFormat!,
                            fps: 30)

Where:

  • targetDevice is the preferred device (front or back)
  • capturer is the RTCCameraVideoCapturer

Now, the second if statement in this example is important. Apparently, if you don't remove the output you originally added to the captureSession before switching cameras, the stream freezes on both the local and the remote device. What matters here is to always remove the last output, never the first. Hope this helps someone.

