By default, WebRTC video uses the front camera, which works fine. However, I need to switch it to the back camera, and I can't find any code to do that. Which part should I edit? The local view (localView), the local video track (localVideoTrack), or the capturer?
Swift 3.0
A peer connection can only send one RTCVideoTrack for the video stream.
So to swap between the front and back camera, you first have to remove the current video track from the peer connection. Then create a new RTCVideoTrack from the camera you need, and set it on the peer connection.
I used these methods:
func swapCameraToFront() {
    let localStream = peerConnection?.localStreams.first
    if let videoTrack = localStream?.videoTracks.first {
        localStream?.removeVideoTrack(videoTrack)
    }
    if let localVideoTrack = createLocalVideoTrack() {
        localStream?.addVideoTrack(localVideoTrack)
        delegate?.appClient(self, didReceiveLocalVideoTrack: localVideoTrack)
    }
    if let localStream = localStream {
        peerConnection?.remove(localStream)
        peerConnection?.add(localStream)
    }
}

func swapCameraToBack() {
    let localStream = peerConnection?.localStreams.first
    if let videoTrack = localStream?.videoTracks.first {
        localStream?.removeVideoTrack(videoTrack)
    }
    if let localVideoTrack = createLocalVideoTrackBackCamera() {
        localStream?.addVideoTrack(localVideoTrack)
        delegate?.appClient(self, didReceiveLocalVideoTrack: localVideoTrack)
    }
    if let localStream = localStream {
        peerConnection?.remove(localStream)
        peerConnection?.add(localStream)
    }
}
Could you please post the createLocalVideoTrackBackCamera method? - Ankit

For now I only have the answer to Ankit's comment in Objective-C; I will convert it to Swift later. You can check the code below:
- (RTCVideoTrack *)createLocalVideoTrack {
    RTCVideoTrack *localVideoTrack = nil;
    NSString *cameraID = nil;
    for (AVCaptureDevice *captureDevice in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
        if (captureDevice.position == AVCaptureDevicePositionFront) {
            cameraID = [captureDevice localizedName];
            break;
        }
    }
    RTCVideoCapturer *capturer = [RTCVideoCapturer capturerWithDeviceName:cameraID];
    RTCMediaConstraints *mediaConstraints = [self defaultMediaStreamConstraints];
    RTCVideoSource *videoSource = [_factory videoSourceWithCapturer:capturer constraints:mediaConstraints];
    localVideoTrack = [_factory videoTrackWithID:@"ARDAMSv0" source:videoSource];
    return localVideoTrack;
}
- (RTCVideoTrack *)createLocalVideoTrackBackCamera {
    RTCVideoTrack *localVideoTrack = nil;
    NSString *cameraID = nil;
    for (AVCaptureDevice *captureDevice in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
        if (captureDevice.position == AVCaptureDevicePositionBack) {
            cameraID = [captureDevice localizedName];
            break;
        }
    }
    RTCVideoCapturer *capturer = [RTCVideoCapturer capturerWithDeviceName:cameraID];
    RTCMediaConstraints *mediaConstraints = [self defaultMediaStreamConstraints];
    RTCVideoSource *videoSource = [_factory videoSourceWithCapturer:capturer constraints:mediaConstraints];
    localVideoTrack = [_factory videoTrackWithID:@"ARDAMSv0" source:videoSource];
    return localVideoTrack;
}
This can be done in the didCreateLocalCapturer method of ARDVideoCallViewDelegate:

- (void)startCapture:(void (^)(BOOL succeeded))completionHandler {
    AVCaptureDevicePosition position = _usingFrontCamera ? AVCaptureDevicePositionFront : AVCaptureDevicePositionBack;
    AVCaptureDevice *device = [self findDeviceForPosition:position];
    if ([device lockForConfiguration:nil]) {
        if ([device isFocusPointOfInterestSupported]) {
            [device setFocusModeLockedWithLensPosition:0.9 completionHandler:nil];
        }
        [device unlockForConfiguration];
    }
    AVCaptureDeviceFormat *format = [self selectFormatForDevice:device];
    if (format == nil) {
        RTCLogError(@"No valid formats for device %@", device);
        NSAssert(NO, @"");
        return;
    }
    NSInteger fps = [self selectFpsForFormat:format];
    [_capturer startCaptureWithDevice:device
                               format:format
                                  fps:fps
                    completionHandler:^(NSError *error) {
        NSLog(@"%@", error);
        if (error == nil) {
            completionHandler(true);
        }
    }];
}
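The startCapture: method above relies on helpers such as selectFpsForFormat: that are not shown. As a rough illustration of what that fps-selection step might look like, here is a hedged pure-Swift sketch: FrameRateRange, CaptureFormat, and selectFps(for:cappedAt:) are hypothetical stand-ins for AVFrameRateRange and AVCaptureDeviceFormat, modeled as plain structs so the logic runs without AVFoundation.

```swift
// Hypothetical sketch of an fps-selection helper. On iOS you would
// read these values from AVCaptureDeviceFormat.videoSupportedFrameRateRanges.

struct FrameRateRange {
    let minFrameRate: Double
    let maxFrameRate: Double
}

struct CaptureFormat {
    let supportedRanges: [FrameRateRange]
}

// Pick the highest frame rate the format supports, capped at a target
// (WebRTC capture is typically capped around 30 fps).
func selectFps(for format: CaptureFormat, cappedAt cap: Double = 30) -> Int {
    let maxSupported = format.supportedRanges
        .map { $0.maxFrameRate }
        .max() ?? cap
    return Int(min(maxSupported, cap))
}
```

For example, a format advertising a 1–60 fps range would be captured at 30 fps, while a 1–24 fps format would be captured at 24.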
I'm not sure which Chrome/WebRTC version you are using, but from v54 onwards the RTCAVFoundationVideoSource class has a BOOL property named useBackCamera. You can use this property to switch between the front and back camera.
Swift 4.0 & 'GoogleWebRTC' : '1.1.20913'
The RTCAVFoundationVideoSource class has a property named useBackCamera that can be used to switch which camera is in use.
@interface RTCAVFoundationVideoSource : RTCVideoSource
- (instancetype)init NS_UNAVAILABLE;
/**
* Calling this function will cause frames to be scaled down to the
* requested resolution. Also, frames will be cropped to match the
* requested aspect ratio, and frames will be dropped to match the
* requested fps. The requested aspect ratio is orientation agnostic and
* will be adjusted to maintain the input orientation, so it doesn't
* matter if e.g. 1280x720 or 720x1280 is requested.
*/
- (void)adaptOutputFormatToWidth:(int)width height:(int)height fps:(int)fps;
/** Returns whether rear-facing camera is available for use. */
@property(nonatomic, readonly) BOOL canUseBackCamera;
/** Switches the camera being used (either front or back). */
@property(nonatomic, assign) BOOL useBackCamera;
/** Returns the active capture session. */
@property(nonatomic, readonly) AVCaptureSession *captureSession;
var useBackCamera: Bool = false

func switchCamera() {
    useBackCamera = !useBackCamera
    self.switchCamera(useBackCamera: useBackCamera)
}

private func switchCamera(useBackCamera: Bool) -> Void {
    let localStream = peerConnection?.localStreams.first
    if let videoTrack = localStream?.videoTracks.first {
        localStream?.removeVideoTrack(videoTrack)
    }
    let localVideoTrack = createLocalVideoTrack(useBackCamera: useBackCamera)
    localStream?.addVideoTrack(localVideoTrack)
    self.delegate?.webRTCClientDidAddLocal(videoTrack: localVideoTrack)
    if let ls = localStream {
        peerConnection?.remove(ls)
        peerConnection?.add(ls)
    }
}

func createLocalVideoTrack(useBackCamera: Bool) -> RTCVideoTrack {
    let videoSource = self.factory.avFoundationVideoSource(with: self.constraints)
    videoSource.useBackCamera = useBackCamera
    let videoTrack = self.factory.videoTrack(with: videoSource, trackId: "video")
    return videoTrack
}
- (void)switchCameraToPosition:(AVCaptureDevicePosition)position completionHandler:(void (^)(void))completionHandler {
    if (self.cameraPosition != position) {
        RTCMediaStream *localStream = self.peerConnection.localStreams.firstObject;
        [localStream removeVideoTrack:self.localVideoTrack];
        //[self.peerConnection removeStream:localStream];
        self.localVideoTrack = [self createVideoTrack];
        [self startCaptureLocalVideoWithPosition:position completionHandler:^{
            [localStream addVideoTrack:self.localVideoTrack];
            //[self.peerConnection addStream:localStream];
            if (completionHandler) {
                completionHandler();
            }
        }];
        self.cameraPosition = position;
    }
}
Note the commented-out lines: if you start adding/removing streams on the peer connection, it causes a delay in the video connection.

I am using GoogleWebRTC-1.1.25102.
try? targetDevice?.lockForConfiguration()
...
let dataOutput = AVCaptureVideoDataOutput()
dataOutput.videoSettings = [
    (kCVPixelBufferWidthKey as String): NSNumber(value: 480 as UInt32),
    (kCVPixelBufferHeightKey as String): NSNumber(value: 640 as UInt32)
]
if capturer.captureSession.canAddOutput(dataOutput) {
    if capturer.captureSession.outputs.count > 1 {
        capturer.captureSession.removeOutput(capturer.captureSession.outputs.last!)
    }
    capturer.captureSession.addOutput(dataOutput)
}
...
capturer.startCapture(with: targetDevice!,
                      format: targetFormat!,
                      fps: 30)
Where:

In this example the second if statement is the important one. Apparently, if you don't remove the output you originally added to the captureSession before switching cameras, the stream freezes on both the local and the remote device. The key point is to always remove the last output, never the first. Hope this helps someone.
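The output-replacement guard described above can be modeled in plain Swift. SessionModel and its string "outputs" are hypothetical stand-ins for AVCaptureSession and its outputs array, so the invariant (keep the capturer's own output at index 0, replace only the extra output we added last) can be exercised without AVFoundation.

```swift
// Hypothetical model of the capture-session output guard. On iOS the
// same check runs against AVCaptureSession.outputs before addOutput(_:).
final class SessionModel {
    private(set) var outputs: [String]

    init(capturerOutput: String) {
        // WebRTC's own capturer output is installed first and must survive
        // every camera switch.
        outputs = [capturerOutput]
    }

    func addOutput(_ output: String) {
        // Replace only the extra output we added ourselves (always the
        // last element), never the capturer's output at index 0 --
        // removing the first output freezes the stream on both ends.
        if outputs.count > 1 {
            outputs.removeLast()
        }
        outputs.append(output)
    }
}
```

For example, starting from a session holding only the capturer's output and then adding a data output on each camera switch leaves exactly two outputs: the capturer's original one plus the most recently added one.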