Syncing audio and video with OpenCV and PyAudio

I have OpenCV and PyAudio both working, but I'm not sure how to sync them together. I can't get a frame rate from OpenCV, and measuring the call time for one frame changes from moment to moment. PyAudio, however, is built around fetching a specific sample rate. How would I sync them to run at the same rate? I assume there is some standard or codec technique for doing this. (I've tried googling, but all I got was information on lip syncing :/).

OpenCV frame rate
import time
import math
import cv2

vc = cv2.VideoCapture(0)

# Grab frames and time how long each read takes
while True:
    before_read = time.time()
    rval, frame = vc.read()
    after_read = time.time()
    if frame is not None:
        print(len(frame))
        print(math.ceil(1.0 / (after_read - before_read)))
        cv2.imshow("preview", frame)

        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        print("None...")
        cv2.waitKey(1)

# Display the last frame
while True:
    cv2.imshow("preview", frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

Grabbing and saving audio

from sys import byteorder
from array import array
from struct import pack

import pyaudio
import wave

THRESHOLD = 500
CHUNK_SIZE = 1024
FORMAT = pyaudio.paInt16
RATE = 44100

def is_silent(snd_data):
    "Returns 'True' if below the 'silent' threshold"
    print(max(snd_data))
    return max(snd_data) < THRESHOLD

def normalize(snd_data):
    "Average the volume out"
    MAXIMUM = 16384
    times = float(MAXIMUM)/max(abs(i) for i in snd_data)

    r = array('h')
    for i in snd_data:
        r.append(int(i*times))
    return r

def trim(snd_data):
    "Trim the blank spots at the start and end"
    def _trim(snd_data):
        snd_started = False
        r = array('h')

        for i in snd_data:
            if not snd_started and abs(i)>THRESHOLD:
                snd_started = True
                r.append(i)

            elif snd_started:
                r.append(i)
        return r

    # Trim to the left
    snd_data = _trim(snd_data)

    # Trim to the right
    snd_data.reverse()
    snd_data = _trim(snd_data)
    snd_data.reverse()
    return snd_data

def add_silence(snd_data, seconds):
    "Add silence to the start and end of 'snd_data' of length 'seconds' (float)"
    r = array('h', [0] * int(seconds * RATE))
    r.extend(snd_data)
    r.extend([0] * int(seconds * RATE))
    return r

def record():
    """
    Record a word or words from the microphone and 
    return the data as an array of signed shorts.

    Normalizes the audio, trims silence from the 
    start and end, and pads with 0.5 seconds of 
    blank sound to make sure VLC et al can play 
    it without getting chopped off.
    """
    p = pyaudio.PyAudio()
    stream = p.open(format=FORMAT, channels=1, rate=RATE,
        input=True, output=True,
        frames_per_buffer=CHUNK_SIZE)

    num_silent = 0
    snd_started = False

    r = array('h')

    while True:
        # little endian, signed short
        snd_data = array('h', stream.read(CHUNK_SIZE))
        if byteorder == 'big':
            snd_data.byteswap()

        print(len(snd_data))

        r.extend(snd_data)

        silent = is_silent(snd_data)

        if silent and snd_started:
            num_silent += 1
        elif not silent and not snd_started:
            snd_started = True

        if snd_started and num_silent > 1:
            break

    sample_width = p.get_sample_size(FORMAT)
    stream.stop_stream()
    stream.close()
    p.terminate()

    r = normalize(r)
    r = trim(r)
    r = add_silence(r, 0.5)
    return sample_width, r

def record_to_file(path):
    "Records from the microphone and outputs the resulting data to 'path'"
    sample_width, data = record()
    data = pack('<' + ('h'*len(data)), *data)

    wf = wave.open(path, 'wb')
    wf.setnchannels(1)
    wf.setsampwidth(sample_width)
    wf.setframerate(RATE)
    wf.writeframes(data)
    wf.close()

if __name__ == '__main__':
    print("please speak a word into the microphone")
    record_to_file('demo.wav')
    print("done - result written to demo.wav")
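As a sanity check on the output, the wave module can read back the header that record_to_file() writes. A minimal sketch, using a generated half second of silence in place of microphone input (the check.wav filename is arbitrary):

```python
import wave
from array import array
from struct import pack

RATE = 44100

# Build 0.5 s of silence and pack it the same way record_to_file() does
data = array('h', [0] * int(0.5 * RATE))
payload = pack('<' + 'h' * len(data), *data)

wf = wave.open('check.wav', 'wb')
wf.setnchannels(1)
wf.setsampwidth(2)          # pyaudio.paInt16 -> 2 bytes per sample
wf.setframerate(RATE)
wf.writeframes(payload)
wf.close()

# Read the header back: sample rate, channel count, frame count
rf = wave.open('check.wav', 'rb')
print(rf.getframerate(), rf.getnchannels(), rf.getnframes())
rf.close()
```

If the reported frame rate or channel count differs from what was recorded, players will misjudge the duration, which is one common source of apparent A/V desync.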

If you have a working pyffmpeg installed, you could try using ffmpeg's video (and audio) display capabilities instead of displaying the video with OpenCV. - boardrider
3 Answers

I think you'd be better off using GStreamer or ffmpeg, or DirectShow if you're on Windows. These libraries handle both audio and video, and should have some kind of multiplexer that lets you mix the video and audio correctly.

But if you really want to do this with OpenCV, you should be able to use VideoCapture to get the frame rate. Have you tried this?

fps = vc.get(cv2.CAP_PROP_FPS)

Another way is to estimate the fps by dividing the number of frames by the duration:

n_frames = vc.get(cv2.CAP_PROP_FRAME_COUNT)
vc.set(cv2.CAP_PROP_POS_AVI_RATIO, 1)
duration = vc.get(cv2.CAP_PROP_POS_MSEC)
fps = 1000 * n_frames / duration

I'm not sure I understand what you were trying to do here:

before_read = time.time()
rval, frame = vc.read()
after_read  = time.time()

I think doing after_read - before_read only measures how long it takes OpenCV to load the next frame; it doesn't measure the fps. OpenCV isn't attempting playback: it just loads frames, and it will try to do so as fast as possible, with no way to configure that as far as I know. I think putting a waitKey(1/fps) after displaying each frame will achieve what you're looking for.
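One caveat on that last suggestion: cv2.waitKey takes an integer delay in milliseconds, so the value to pass is int(1000 / fps) rather than 1/fps. A minimal helper sketching the pacing idea (the function name is mine, not from the answer):

```python
def frame_delay_ms(fps):
    """Delay (in ms) to pass to cv2.waitKey so display approximates `fps`."""
    # waitKey(0) blocks forever, so never return less than 1
    return max(1, int(1000 / fps))

# e.g. pace a display loop with: cv2.waitKey(frame_delay_ms(fps))
print(frame_delay_ms(25))   # 40 ms per frame at 25 fps
```

Note this ignores the time spent reading and decoding each frame, so real playback will run slightly slow; subtracting the measured read time from the delay would tighten it.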


Even though this is quite late now: I didn't use GStreamer because I had specific goals to meet, and I'd run into trouble with GStreamer in the past. - Zimm3r

You could have two counters, one for the audio and one for the video. The video counter becomes +(1/fps) whenever an image is shown, and the audio counter +sec, where sec is the number of seconds of audio you write to the stream each time. Then in the audio part of the code you can do something like:

while audiosec - videosec >= 0.05:   # audio is ahead
    time.sleep(0.05)

And in the video part:

while videosec - audiosec >= 0.2:    # video is ahead
    time.sleep(0.2)

You can play with the numbers.

This is how I achieved some kind of synchronization in my own recent video player project, using pyaudio with ffmpeg instead of cv2.
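The counter scheme can be sketched without real devices. In the toy loop below, the increments stand in for actually displaying a frame or writing an audio chunk (FPS and the chunk length are my example numbers; 0.05 and 0.2 are the answer's thresholds), and advancing the lagging counter stands in for the other thread catching up during time.sleep:

```python
FPS = 25.0                       # example video rate
AUDIO_CHUNK_SEC = 1470 / 44100   # seconds of audio written per chunk (1/30 s)

audiosec = 0.0
videosec = 0.0

for step in range(300):
    # audio side: write one chunk's worth of audio
    audiosec += AUDIO_CHUNK_SEC
    while audiosec - videosec >= 0.05:   # audio is ahead
        videosec += 1 / FPS              # video thread catches up
    # video side: display one frame
    videosec += 1 / FPS
    while videosec - audiosec >= 0.2:    # video is ahead
        audiosec += AUDIO_CHUNK_SEC      # audio thread catches up

# drift between the two clocks stays bounded by the larger threshold
print(round(videosec - audiosec, 3))
```

Even though the two rates don't match here (25 fps video against 30 chunks/s audio), the catch-up loops keep the difference between the counters under 0.2 s.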

Personally, I used threading for this.
import concurrent.futures
import pyaudio
import cv2

class Aud_Vid():

    def __init__(self, arg):
        self.video = cv2.VideoCapture(0)
        self.CHUNK = 1470
        self.FORMAT = pyaudio.paInt16
        self.CHANNELS = 2
        self.RATE = 44100
        self.audio = pyaudio.PyAudio()
        self.instream = self.audio.open(format=self.FORMAT, channels=self.CHANNELS,
                                        rate=self.RATE, input=True,
                                        frames_per_buffer=self.CHUNK)
        self.outstream = self.audio.open(format=self.FORMAT, channels=self.CHANNELS,
                                         rate=self.RATE, output=True,
                                         frames_per_buffer=self.CHUNK)

    def sync(self):
        # Kick off the frame grab and the audio read in parallel,
        # then wait for both, so each pair starts at the same instant
        with concurrent.futures.ThreadPoolExecutor() as executor:
            tv = executor.submit(self.video.read)
            ta = executor.submit(self.instream.read, self.CHUNK)
            vid = tv.result()
            aud = ta.result()
            return (vid[1].tobytes(), aud)
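One detail worth noticing in this answer: CHUNK = 1470 at RATE = 44100 is exactly 1/30 of a second of audio, so each instream.read returns one video frame's worth of sound for a ~30 fps camera. A quick check of that arithmetic:

```python
CHUNK = 1470
RATE = 44100

# seconds of audio per chunk, and chunks per second
print(CHUNK / RATE)    # ≈ 0.0333 s, i.e. 1/30 s
print(RATE / CHUNK)    # 30.0 chunks per second
```

If the camera runs at a different rate, picking CHUNK = RATE / fps (rounded to an integer) keeps each audio read aligned with one frame in the same way.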
