Raspberry Pi real-time video streaming with a data overlay

I'm experimenting with various video streaming options for the Raspberry Pi camera. So far the best solution is piping the data from raspivid through nc into mplayer, which gives the lowest latency.

On the Raspberry Pi:

/opt/vc/bin/raspivid -t 0 -hf -vf -w 640 -h 480 --nopreview -o - | nc -l 5000

On the client machine (using the -fps 60 trick to skip buffering):

nc $RASP_IP 5000 | mplayer -nosound -framedrop -x 640 -y 480 -fps 60 -demuxer +h264es -cache 1024 -

This works very well, with almost no latency.

Now I want to overlay some dynamic data onto the video. What is the best way to do that?

I've seen solutions such as modifying raspivid directly to add OpenCV, but that won't work in my case because the display has to be on a different computer from the one connected to the camera.

The technology (language/library) doesn't matter much, as long as it runs on *nix (.NET is not an option).


Here is an example that uses pygame to display the video and draw an overlay: https://learn.adafruit.com/diy-wifi-raspberry-pi-touch-cam/overview - brentlance
1 Answer

Here are the commands I use to stream the Pi camera over the network via multicast, writing the time and the camera name at the bottom of the image. I've put a comment between each line to explain what it does, but to actually work each command must be entirely on one line with no comments (one line starting with v4l2-ctl and another starting with ffmpeg).

# Set up the camera device for dynamic framerate (to get better images in
# low light) and for uncompressed video output.
v4l2-ctl
  -d /dev/video0
  --set-ctrl=exposure_dynamic_framerate=1
  --set-ctrl=scene_mode=8
  -v width=1296,height=960,pixelformat=YU12

# Run ffmpeg.
ffmpeg

  # Select the frame size.  This is for the Pi camera v1 and must match the
  # size set above in v4l2-ctl.
  -f rawvideo -pix_fmt yuv420p -video_size 1296x960

  # Use the system clock because the camera stream doesn't have timestamps.
  -use_wallclock_as_timestamps 1

  # Select the first camera.  This could change if you have multiple
  # cameras connected (e.g. via USB).
  -i /dev/video0

  # Set the framerate in the output H264 stream.  This value is double
  # the actual framerate, so 30 means 15 fps.
  -bsf h264_metadata=tick_rate=30

  # Use a video filter
  -vf '

    # Make the video higher by 37 pixels (32 for the words and 5 for padding)
    pad=h=(in_h+5+32),

    # Write the time at the bottom right.
    drawtext=x=(w-tw-8):y=(h-28):fontcolor=white:fontsize=28:text=%{localtime},

    # Write the hostname on the bottom left, which we put in the file
    # /tmp/cam_hostname before running this command.
    drawtext=x=8:y=(h-28):fontcolor=white:fontsize=28:textfile=/tmp/cam_hostname,

    # Write another message from /tmp/cam_msg after the hostname.  This
    # file is read each frame, so it can contain live data such as the
    # temperature or motion sensor status.  The file contents must be
    # written by another program.
    drawtext=x=32:y=(h-28):fontcolor=white:fontsize=28:textfile=/tmp/cam_msg:reload=1

  # End of -vf parameter
  '

  # Duplicate or drop frames as needed to ensure a constant output framerate.
  # This is because the Pi camera is set to a dynamic framerate, so at night
  # the framerate will drop to give a brighter image.  This option ensures
  # that whatever the camera framerate, the output H264 framerate will be
  # constant.
  -vsync 1

  # Specify the output framerate.  We use 15 fps because that's the camera's
  # maximum in high resolution mode.
  -r 15

  # Use the hardware H264 encoder via the V4L2 M2M interface.
  -c:v h264_v4l2m2m

  # Set the bitrate of the output video.
  -b:v 5M

  # No audio.
  -an

  # Adjust the output H264 bitstream so the timestamps run at the correct
  # rate.  This is required so if the stream is recorded, media players will
  # seek accurately (e.g. seeking 10 seconds forward will go forward 10
  # seconds).  Without this, seeking forward by 10 seconds could jump forward
  # by a minute or more, making it difficult to seek around in the
  # recorded footage.
  -bsf 'setts=ts=N*(1/15)*100000,h264_metadata=tick_rate=30'

  # Output the final stream via RTP to a multicast IP address.
  -f rtp_mpegts udp://239.0.0.3:5004
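For reference, here is a sketch of the same two commands reassembled onto single lines (backslashes are only shell line continuations, so each is still one command). The helper loop is an assumption of mine, not part of the answer above: it refreshes /tmp/cam_msg once a second with the SoC temperature via vcgencmd, a Raspberry Pi utility; substitute whatever sensor command produces your live data.

```shell
# Write the camera name once; ffmpeg's drawtext reads this file.
hostname > /tmp/cam_hostname

# Configure the camera (dynamic framerate, uncompressed YU12 output).
v4l2-ctl -d /dev/video0 \
  --set-ctrl=exposure_dynamic_framerate=1 \
  --set-ctrl=scene_mode=8 \
  -v width=1296,height=960,pixelformat=YU12

# Assumed helper: refresh /tmp/cam_msg every second with live data
# (here the SoC temperature via vcgencmd; swap in your own command).
# Writing to a temp file and renaming keeps the update atomic so
# ffmpeg never reads a half-written file.
while true; do
  vcgencmd measure_temp > /tmp/cam_msg.tmp && mv /tmp/cam_msg.tmp /tmp/cam_msg
  sleep 1
done &

# Stream with the overlay, exactly as annotated above.
ffmpeg -f rawvideo -pix_fmt yuv420p -video_size 1296x960 \
  -use_wallclock_as_timestamps 1 -i /dev/video0 \
  -bsf h264_metadata=tick_rate=30 \
  -vf 'pad=h=(in_h+5+32),drawtext=x=(w-tw-8):y=(h-28):fontcolor=white:fontsize=28:text=%{localtime},drawtext=x=8:y=(h-28):fontcolor=white:fontsize=28:textfile=/tmp/cam_hostname,drawtext=x=32:y=(h-28):fontcolor=white:fontsize=28:textfile=/tmp/cam_msg:reload=1' \
  -vsync 1 -r 15 -c:v h264_v4l2m2m -b:v 5M -an \
  -bsf 'setts=ts=N*(1/15)*100000,h264_metadata=tick_rate=30' \
  -f rtp_mpegts udp://239.0.0.3:5004
```

On a client machine, something like `ffplay udp://239.0.0.3:5004` should then play the multicast stream (the output is MPEG-TS in RTP, which ffplay and VLC can both demux).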
