OpenCV-Python: how to get the latest frame from a live video stream (or skip stale frames)

I have integrated an IP camera with OpenCV in Python to process the live stream frame by frame. I configured the camera's FPS to 1 so that I would get one buffered frame per second to work on, but my algorithm takes 4 seconds to process each frame, so unprocessed frames pile up in the buffer and the backlog keeps growing, causing an ever-increasing delay. To solve this, I created another thread that calls the cv2.grab() API to drain the buffer; each call moves the pointer towards the latest frame. In the main thread I call retrieve(), which gives me the last frame grabbed by the first thread. With this design the frame backlog is gone and the growing delay is eliminated, but a constant delay of 12-13 seconds remains. I suspect that when cv2.retrieve() is called it does not fetch the latest frame, but one that is 4 or 5 frames behind it. Is there an OpenCV API, or any other design pattern, that would let me always fetch the most recent frame for processing?
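
For reference, the grab()/retrieve() split described above might be sketched roughly like this (the camera URL and the processing step are placeholders, not the asker's actual code):

import threading
import cv2

cap = cv2.VideoCapture("rtsp://camera-url")  # placeholder URL
lock = threading.Lock()

def drain_buffer():
    # Keep grabbing so the driver buffer always points at a recent frame.
    while cap.isOpened():
        with lock:
            cap.grab()

threading.Thread(target=drain_buffer, daemon=True).start()

while True:
    with lock:
        ok, frame = cap.retrieve()  # decode the most recently grabbed frame
    if ok:
        pass  # the ~4 s processing step would go here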

Why would you need a large buffer when the algorithm consumes frames far more slowly than they are produced? My suggestion is to use a buffer with only two image slots: one that the camera writes into (write buffer, one image) and one that processing reads from (read buffer, one image). Overwrite the write buffer whenever a new image arrives from the camera. - harshkn
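
A rough sketch of that two-slot idea (an illustration, not harshkn's code): a capture thread keeps overwriting the write slot with the newest frame, and the reader swaps the slot out under a lock; the camera source is a placeholder.

import threading
import cv2

class TwoSlotBuffer:
    def __init__(self):
        self._lock = threading.Lock()
        self._write_slot = None              # latest frame from the camera

    def write(self, frame):
        with self._lock:
            self._write_slot = frame         # overwrite with the newest frame

    def read(self):
        with self._lock:
            frame, self._write_slot = self._write_slot, None
        return frame                         # None if no new frame since last read

buf = TwoSlotBuffer()

def capture(src):
    cap = cv2.VideoCapture(src)
    while cap.isOpened():
        ok, frame = cap.read()
        if ok:
            buf.write(frame)

threading.Thread(target=capture, args=(0,), daemon=True).start()  # 0 is a placeholder source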

@harshkn, can you tell me how to reduce the buffer size? I tried "video.set(cv2.CAP_PROP_BUFFERSIZE, 1)" on my Raspberry Pi running Ubuntu 16.04, and it produced a message saying "VIDEOIO ERROR: V4L2: setting property #38 is not supported" followed by "True". - Muhammad Abdullah
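
Property #38 corresponds to cv2.CAP_PROP_BUFFERSIZE. VideoCapture.set() returns a boolean, so you can at least check whether the backend claims to accept the property, although, as the comment above suggests, a backend may print the error and still return True, so the check is only a hint. A small illustrative snippet, with the source index as a placeholder:

import cv2

cap = cv2.VideoCapture(0)  # placeholder source
if not cap.set(cv2.CAP_PROP_BUFFERSIZE, 1):
    # Many V4L2 builds ignore this property, so fall back to draining
    # the buffer manually (e.g. with repeated grab() calls).
    print("CAP_PROP_BUFFERSIZE not supported by this backend")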

There are some good answers with detailed explanations (and workarounds) in "c++ - OpenCV VideoCapture lag due to the capture buffer - Stack Overflow"; however, those answers are written in C++, so you will need to convert them to Python. - user202729
2 Answers

If you don't mind sacrificing some speed, you can create a Python generator that opens the camera and yields frames.
import cv2

def ReadCamera(Camera):
    while True:
        # Re-open the camera on every iteration so read() returns a fresh frame.
        cap = cv2.VideoCapture(Camera)
        (grabbed, frame) = cap.read()
        cap.release()
        if grabbed:
            yield frame

Now, when you want to process a frame:
for frame in ReadCamera(Camera):
      .....

This works perfectly, except that opening and closing the camera each time adds latency.
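
If that overhead matters, a variant of the same idea (a sketch, not part of the original answer) keeps the capture open and discards a few buffered frames with grab() before each read; the number of frames to skip is an assumption and depends on the driver's buffer size.

import cv2

def ReadCameraLatest(Camera, skip=4):
    cap = cv2.VideoCapture(Camera)
    while cap.isOpened():
        for _ in range(skip):   # throw away frames sitting in the driver buffer
            cap.grab()
        grabbed, frame = cap.read()
        if grabbed:
            yield frame
    cap.release()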

The best way is to use a thread; here is my code to achieve this.
    """
This module contains the Streamer class, which is responsible for streaming the video from the RTSP camera.
Capture the video from the RTSP camera and store it in the queue.

NOTE:
    You can preprocess the data before flow from here
"""

import cv2
from queue import Queue, Empty
import time
from env import RESOLUTION_X, RESOLUTION_Y, FPS  # local config module with resolution and FPS constants
from threading import Thread

class Streamer:
    def __init__(self, rtsp):
        """
        Initialize the Streamer object, which is responsible for streaming the video from the RTSP camera.
        stream (cv2.VideoCapture): The VideoCapture object.
        rtsp (str): The RTSP url.
        Q (Queue): The queue to store the frame.
        running (bool): The flag to indicate whether the Streamer is running or not.
        Args:
            rtsp (str): The RTSP url.
        """        
        print("Creating Streamer object for",rtsp)
        self.stream = cv2.VideoCapture(rtsp)
        self.rtsp = rtsp
        #bufferless VideoCapture
        # self.stream.set(cv2.CAP_PROP_BUFFERSIZE, 1)
        # self.stream.set(cv2.CAP_PROP_FPS, 10)
        self.stream.set(cv2.CAP_PROP_FRAME_WIDTH, RESOLUTION_X)
        self.stream.set(cv2.CAP_PROP_FRAME_HEIGHT, RESOLUTION_Y)
        self.Q = Queue(maxsize=2)
        self.running = True
        
        
        print("Streamer object created for",rtsp)
    
    def info(self):
        """
        Print the information of the Streamer.
        """        
        print("==============================Stream Info==============================")
        print("| Stream:",self.rtsp,"|")
        print("| Queue Size:",self.Q.qsize(),"|")
        print("| Running:",self.running,"|")
        print("======================================================================")
            
    def get_processed_frame(self):
        """
        Get the processed frame from the Streamer.

        Returns:
            dict: The dictionary containing the frame and the time.
        """        
        if self.Q.empty():
            return None
        # Peek at the front of the queue without removing the frame; the
        # capture loop keeps it fresh by dropping the oldest frame when full.
        return self.Q.queue[0]
    
    
    def release(self):
        """
        Release the Streamer.
        """        
        self.stream.release()
        
    def stop(self):
        """
        Stop the Streamer.
        """        
        print("Stopping",self.stream,"Status",self.rtsp)
        self.running = False
    
    def start(self):
        """
        Start the Streamer.
        """        
        print("Starting streamer",self.stream, "Status",self.running)
        while self.running:
            
            # FOR VIDEO CAPTURE and TESTING FRAME BY FRAME REMOVE THIS COMMENT
            # while self.Q.full():
            #     time.sleep(0.00001)
            ret, frame = self.stream.read()
            # print(frame,ret)
            if not ret:
                print("NO Frame for",self.rtsp)
                continue
            frame = cv2.resize(frame, (RESOLUTION_X, RESOLUTION_Y))
            # exit()
            # Keep only the most recent frames: when the queue is full,
            # discard the oldest frame before adding the new one so the
            # consumer never falls behind the live stream.
            if self.Q.full():
                try:
                    self.Q.get_nowait()
                except Empty:
                    pass
            print("Streamer PUT", self.Q.qsize())
            self.Q.put({"frame": frame, "time": time.time()})
            print("Streamer PUT END", self.Q.qsize())
            # exit()
            # time.sleep(1/FPS)
        self.release()
        
        
if __name__ == "__main__":
    streamer = Streamer("rtsp://localhost:8554/105")
    
    thread = Thread(target=streamer.start)
    thread.start()
    
    while streamer.running:
        data = streamer.get_processed_frame()
        if data is None:
            continue
        frame = data["frame"]
        cv2.imshow("frame", frame)
        # Press 'q' to stop the streamer and exit cleanly.
        if cv2.waitKey(1) & 0xFF == ord("q"):
            streamer.stop()

    thread.join()
    cv2.destroyAllWindows()
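
The code above imports RESOLUTION_X, RESOLUTION_Y and FPS from a local env module that is not shown in the answer; a minimal stand-in with assumed values could look like this:

# env.py -- hypothetical stand-in for the config module imported above
RESOLUTION_X = 1280   # frame width in pixels (assumed value)
RESOLUTION_Y = 720    # frame height in pixels (assumed value)
FPS = 10              # target frames per second (assumed value)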
