OpenCV Python, reading video from a named pipe

I am trying to achieve the result shown in this video (method 3, using netcat): https://www.youtube.com/watch?v=sYGdge3T30o The goal is to stream video from a Raspberry Pi to an Ubuntu PC and process it with OpenCV and Python.
I send the video stream to the PC with raspivid -vf -n -w 640 -h 480 -o - -t 0 -b 2000000 | nc 192.168.0.20 5777, then on the PC I create a named pipe called "fifo" and redirect the netcat output into it:
 mkfifo fifo
 nc -l -p 5777 -v > fifo

Then I try to read the pipe and display the result in a Python script:

import cv2
import sys

video_capture = cv2.VideoCapture(r'fifo')
video_capture.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
video_capture.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

while True:
    # Capture frame-by-frame
    ret, frame = video_capture.read()
    if not ret:
        continue

    cv2.imshow('Video', frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
video_capture.release()
cv2.destroyAllWindows()

However, I end up with the following error:

[mp3 @ 0x18b2940] Header missing. This error is produced by the line video_capture = cv2.VideoCapture(r'fifo').

When I instead redirect the netcat output on the PC into a file and then read that file in Python, the video works, but it plays roughly 10x too fast.
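For context, a raw H.264 elementary stream carries no container timestamps, so OpenCV decodes and displays frames as fast as the CPU allows, which would explain the roughly 10x speed-up. Below is a minimal sketch of pacing playback manually, assuming the stream was captured at 30 fps; the file name recorded.h264 is a placeholder, not something from the question:

import time
import cv2

FPS = 30.0  # assumed capture rate of the raspivid stream
video_capture = cv2.VideoCapture('recorded.h264')  # placeholder file name

while True:
    start = time.time()
    ret, frame = video_capture.read()
    if not ret:
        break
    cv2.imshow('Video', frame)
    # Wait out the remainder of the frame period so playback runs at
    # roughly FPS instead of raw decode speed.
    elapsed_ms = (time.time() - start) * 1000.0
    delay = max(1, int(1000.0 / FPS - elapsed_ms))
    if cv2.waitKey(delay) & 0xFF == ord('q'):
        break

video_capture.release()
cv2.destroyAllWindows()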

I know the problem is in the Python script, because the nc transfer itself works (to a file), but I cannot find any clues.

How can I achieve the result shown in the video I referenced (method 3)?

2 Answers

I was also trying to achieve the same result shown in that video. Initially I tried an approach similar to yours, but it seems that cv2.VideoCapture() cannot read from a named pipe directly and more preprocessing is needed. ffmpeg is the way to go! You can install and compile ffmpeg by following the instructions given at this link: https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu Once it is installed, you can change your code like this:
import cv2
import subprocess as sp
import numpy

FFMPEG_BIN = "ffmpeg"
command = [ FFMPEG_BIN,
        '-i', 'fifo',             # fifo is the named pipe
        '-pix_fmt', 'bgr24',      # opencv requires bgr24 pixel format.
        '-vcodec', 'rawvideo',
        '-an','-sn',              # we want to disable audio processing (there is no audio)
        '-f', 'image2pipe', '-']    
pipe = sp.Popen(command, stdout = sp.PIPE, bufsize=10**8)

while True:
    # Capture frame-by-frame
    raw_image = pipe.stdout.read(640*480*3)
    if len(raw_image) != 640*480*3:
        break                     # incomplete read: the stream has ended
    # transform the bytes read into a numpy array
    image = numpy.frombuffer(raw_image, dtype='uint8')
    image = image.reshape((480, 640, 3))   # note that height comes first, then width
    cv2.imshow('Video', image)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cv2.destroyAllWindows()
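If ffmpeg cannot probe the fifo on its own (raspivid emits a raw H.264 elementary stream without a container), it may help to state the input format explicitly. This variant is an assumption layered on top of the answer, not part of it:

command = [ FFMPEG_BIN,
        '-f', 'h264',             # assumption: tell ffmpeg the fifo carries raw H.264
        '-i', 'fifo',
        '-pix_fmt', 'bgr24',
        '-vcodec', 'rawvideo',
        '-an', '-sn',
        '-f', 'image2pipe', '-']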

You do not need to change the script on the Raspberry Pi side, or anything else.
This worked flawlessly for me, with negligible video lag. Hope it helps.

I guess this is the part that runs on the Linux desktop, but you don't seem to show what needs to run on the Raspberry Pi, or how to run either end of the two-machine setup? - Mark Setchell
We are trying to achieve the result shown in the video (method 3), https://www.youtube.com/watch?v=sYGdge3T30o, as @Richard mentioned. Everything else is exactly as explained in the video. I was just trying to help out with a Python script for reading from the named pipe, which the video does not show. - Mohinish Chatterjee
I was hoping this would let me use ffmpeg's command-line arguments to force hardware decoding with qsv and h264_qsv instead of OpenCV's hidden defaults. While this answer technically does that, in practice I saw a slowdown rather than a speedup compared to cv2.VideoCapture('filename.mp4'): I only got about 111 fps instead of 259 fps (on the same system, ffmpeg decoding to null reaches over 1100 fps). I suspect this is most likely because all the data gets piped between processes. At least it is a good proof of concept. - TheAtomicOption
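For reference, a decoder hint like the one described in the comment above would be passed before the input in the command list. A sketch, assuming an Intel machine whose ffmpeg build includes the h264_qsv decoder (untested here):

command = [ FFMPEG_BIN,
        '-c:v', 'h264_qsv',       # assumption: request Intel Quick Sync hardware decoding
        '-i', 'filename.mp4',
        '-pix_fmt', 'bgr24',
        '-vcodec', 'rawvideo',
        '-an', '-sn',
        '-f', 'image2pipe', '-']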


I was having a similar problem, and after some research I eventually stumbled across the following solution:

Skip to the solution: https://stackoverflow.com/a/48675107/2355051

I ended up adapting this picamera Python recipe.

On the Raspberry Pi: (createStream.py)

import io
import socket
import struct
import time
import picamera

# Connect a client socket to the processing machine on port 777 (change
# 10.0.0.3 to the address of your server)
client_socket = socket.socket()
client_socket.connect(('10.0.0.3', 777))

# Make a file-like object out of the connection
connection = client_socket.makefile('wb')
try:
    with picamera.PiCamera() as camera:
        camera.resolution = (1024, 768)
        # Start a preview and let the camera warm up for 2 seconds
        camera.start_preview()
        time.sleep(2)

        # Note the start time and construct a stream to hold image data
        # temporarily (we could write it directly to connection but in this
        # case we want to find out the size of each capture first to keep
        # our protocol simple)
        start = time.time()
        stream = io.BytesIO()
        for foo in camera.capture_continuous(stream, 'jpeg', use_video_port=True):
            # Write the length of the capture to the stream and flush to
            # ensure it actually gets sent
            connection.write(struct.pack('<L', stream.tell()))
            connection.flush()

            # Rewind the stream and send the image data over the wire
            stream.seek(0)
            connection.write(stream.read())

            # Reset the stream for the next capture
            stream.seek(0)
            stream.truncate()
    # Write a length of zero to the stream to signal we're done
    connection.write(struct.pack('<L', 0))
finally:
    connection.close()
    client_socket.close()
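As an aside, the wire format the two scripts share is just a 4-byte little-endian unsigned length followed by the JPEG bytes. A minimal round trip of that header illustrates the framing:

import struct

payload = b'\xff\xd8 ... jpeg data ... \xff\xd9'  # stand-in for one JPEG capture
header = struct.pack('<L', len(payload))          # 4 bytes, little-endian unsigned int
assert struct.calcsize('<L') == 4
(length,) = struct.unpack('<L', header)
assert length == len(payload)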

On the machine that processes the stream: (processStream.py)

import io
import socket
import struct
import cv2
import numpy as np

# Start a socket listening for connections on 0.0.0.0:777 (0.0.0.0 means
# all interfaces)
server_socket = socket.socket()
server_socket.bind(('0.0.0.0', 777))
server_socket.listen(0)

# Accept a single connection and make a file-like object out of it
connection = server_socket.accept()[0].makefile('rb')
try:
    while True:
        # Read the length of the image as a 32-bit unsigned int. If the
        # length is zero, quit the loop
        image_len = struct.unpack('<L', connection.read(struct.calcsize('<L')))[0]
        if not image_len:
            break
        # Construct a stream to hold the image data and read the image
        # data from the connection
        image_stream = io.BytesIO()
        image_stream.write(connection.read(image_len))
        # Decode the accumulated JPEG data with OpenCV and do some
        # processing on it
        data = np.frombuffer(image_stream.getvalue(), dtype=np.uint8)
        imagedisp = cv2.imdecode(data, 1)

        cv2.imshow("Frame",imagedisp)
        cv2.waitKey(1)  #imshow will not output an image if you do not use waitKey
        cv2.destroyAllWindows() #cleanup windows 
finally:
    connection.close()
    server_socket.close()
    cv2.destroyAllWindows()  # clean up the display window once the stream ends

This solution gives results similar to those in the video I referenced in my original question. Larger frames increase the latency of the stream, but that is tolerable for my application.
You need to run processStream.py first, and then start createStream.py on the Raspberry Pi.
