I have implemented a class that spawns a thread to read and queue frames, which the main thread then displays via OpenGL. I try to free the allocated memory after binding the image data to an OpenGL texture, but some of it does not appear to be released: memory usage keeps growing until the system runs out of memory, at which point the frame-reader thread can no longer grab new frames because its allocations fail. Can anyone help me spot what I might have missed? Thanks.
Here is the frame-reader thread code:
void AVIReader::frameReaderThreadFunc()
{
    AVPacket packet;
    while (readFrames) {
        // Allocate necessary memory
        AVFrame* pFrame = av_frame_alloc();
        if (pFrame == nullptr)
        {
            continue;
        }
        AVFrame* pFrameRGB = av_frame_alloc();
        if (pFrameRGB == nullptr)
        {
            av_frame_free(&pFrame);
            continue;
        }
        // Determine required buffer size and allocate buffer
        int numBytes = avpicture_get_size(AV_PIX_FMT_RGB24, pCodecCtx->width,
                                          pCodecCtx->height);
        uint8_t* buffer = (uint8_t *)av_malloc(numBytes * sizeof(uint8_t));
        if (buffer == nullptr)
        {
            av_frame_free(&pFrame);
            av_frame_free(&pFrameRGB);
            continue;
        }
        // Assign appropriate parts of buffer to image planes in pFrameRGB
        // Note that pFrameRGB is an AVFrame, but AVFrame is a superset
        // of AVPicture
        avpicture_fill((AVPicture *)pFrameRGB, buffer, AV_PIX_FMT_RGB24,
                       pCodecCtx->width, pCodecCtx->height);
        if (av_read_frame(pFormatCtx, &packet) >= 0) {
            // Is this a packet from the video stream?
            if (packet.stream_index == videoStream) {
                // Decode video frame
                int frameFinished;
                avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
                if (frameFinished) {
                    // Convert the image from its native format to RGB
                    sws_scale(sws_ctx, (uint8_t const * const *)pFrame->data,
                              pFrame->linesize, 0, pCodecCtx->height,
                              pFrameRGB->data, pFrameRGB->linesize);
                    VideoFrame vf;
                    vf.frame = pFrameRGB;
                    vf.pts = av_frame_get_best_effort_timestamp(pFrame) * time_base;
                    frameQueue.enqueue(vf);
                    av_frame_unref(pFrame);
                    av_frame_free(&pFrame);
                }
            }
            //av_packet_unref(&packet);
            av_free_packet(&packet);
        }
    }
}
Here is the code that pulls queued frames and binds them to an OpenGL texture. I deliberately keep the previous frame alive until I replace it with the next one; otherwise it seems to cause a segfault.
void AVIReader::GrabAVIFrame()
{
    if (curFrame.pts >= clock_pts)
    {
        return;
    }
    if (frameQueue.empty())
        return;
    // Get a packet from the queue
    VideoFrame videoFrame = frameQueue.top();
    while (!frameQueue.empty() && frameQueue.top().pts < clock_pts)
    {
        videoFrame = frameQueue.dequeue();
    }
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, videoFrame.frame->data[0]);
    // release previous frame
    if (curFrame.frame)
    {
        av_free(curFrame.frame->data[0]);
    }
    av_frame_unref(curFrame.frame);
    // set current frame to new frame
    curFrame = videoFrame;
}
frameQueue is a thread-safe priority queue that holds VideoFrames, defined as:
class VideoFrame {
public:
    AVFrame* frame;
    double pts;
};
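The frameQueue implementation isn't shown in the question. Assuming it is a mutex-guarded std::priority_queue that yields the frame with the smallest pts first, a minimal sketch (all names beyond VideoFrame's two fields are assumptions, not the poster's code) might look like:

```cpp
#include <cassert>
#include <mutex>
#include <queue>
#include <vector>

// Stand-in for the real VideoFrame (the AVFrame pointer is omitted here).
struct QueuedFrame {
    double pts;
};

// Min-heap ordering: the frame with the smallest pts comes out first.
struct LaterPts {
    bool operator()(const QueuedFrame& a, const QueuedFrame& b) const {
        return a.pts > b.pts;
    }
};

class FrameQueue {
public:
    void enqueue(const QueuedFrame& vf) {
        std::lock_guard<std::mutex> lock(m_);
        q_.push(vf);
    }
    QueuedFrame dequeue() {
        std::lock_guard<std::mutex> lock(m_);
        QueuedFrame vf = q_.top();
        q_.pop();
        return vf;
    }
    QueuedFrame top() {
        std::lock_guard<std::mutex> lock(m_);
        return q_.top();
    }
    bool empty() {
        std::lock_guard<std::mutex> lock(m_);
        return q_.empty();
    }
private:
    std::mutex m_;
    std::priority_queue<QueuedFrame, std::vector<QueuedFrame>, LaterPts> q_;
};
```

With this ordering, top() always reports the oldest queued frame, which is what the pts comparison in GrabAVIFrame() relies on.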
Update: there was a silly mistake in the order in which I set the current frame to the new frame; I had forgotten to switch it back after trying something. I also took @ivan_onys's suggestion, but it did not seem to fix the problem.
Update 2: I took @Al Bundy's suggestion to unconditionally free pFrame and the packet, but the problem persists.
Since buffer is what holds the actual image data and it is needed by glTexSubImage2D(), I cannot free it before the image has been displayed on screen (doing so segfaults). avpicture_fill() sets frame->data[0] = buffer, so I figured that calling av_free(curFrame.frame->data[0]); after uploading the new frame should free the allocated buffer.
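The pointer aliasing this reasoning relies on can be shown without FFmpeg. A minimal sketch (FakeFrame and fill_like_avpicture are stand-ins for illustration, not FFmpeg API):

```cpp
#include <cassert>
#include <cstdint>
#include <cstdlib>

// Toy stand-in for AVFrame: for a packed format like RGB24, avpicture_fill()
// makes data[0] point at the caller's buffer -- no copy is made.
struct FakeFrame {
    uint8_t* data[4] = {nullptr, nullptr, nullptr, nullptr};
};

// Mimics the pointer assignment avpicture_fill() performs for RGB24.
void fill_like_avpicture(FakeFrame* f, uint8_t* buffer) {
    f->data[0] = buffer;
}
```

Because data[0] and buffer are the same pointer, freeing frame->data[0] releases exactly the allocation made for buffer; there is no second allocation hiding behind the frame for a packed format.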
Here is the updated frame-reader thread code:
void AVIReader::frameReaderThreadFunc()
{
    AVPacket packet;
    while (readFrames) {
        // Allocate necessary memory
        AVFrame* pFrame = av_frame_alloc();
        if (pFrame == nullptr)
        {
            continue;
        }
        AVFrame* pFrameRGB = av_frame_alloc();
        if (pFrameRGB == nullptr)
        {
            av_frame_free(&pFrame);
            continue;
        }
        // Determine required buffer size and allocate buffer
        int numBytes = avpicture_get_size(AV_PIX_FMT_RGB24, pCodecCtx->width,
                                          pCodecCtx->height);
        uint8_t* buffer = (uint8_t *)av_malloc(numBytes * sizeof(uint8_t));
        if (buffer == nullptr)
        {
            av_frame_free(&pFrame);
            av_frame_free(&pFrameRGB);
            continue;
        }
        // Assign appropriate parts of buffer to image planes in pFrameRGB
        // Note that pFrameRGB is an AVFrame, but AVFrame is a superset
        // of AVPicture
        avpicture_fill((AVPicture *)pFrameRGB, buffer, AV_PIX_FMT_RGB24,
                       pCodecCtx->width, pCodecCtx->height);
        if (av_read_frame(pFormatCtx, &packet) >= 0) {
            // Is this a packet from the video stream?
            if (packet.stream_index == videoStream) {
                // Decode video frame
                int frameFinished;
                avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
                if (frameFinished) {
                    // Convert the image from its native format to RGB
                    sws_scale(sws_ctx, (uint8_t const * const *)pFrame->data,
                              pFrame->linesize, 0, pCodecCtx->height,
                              pFrameRGB->data, pFrameRGB->linesize);
                    VideoFrame vf;
                    vf.frame = pFrameRGB;
                    vf.pts = av_frame_get_best_effort_timestamp(pFrame) * time_base;
                    frameQueue.enqueue(vf);
                }
            }
        }
        av_frame_unref(pFrame);
        av_frame_free(&pFrame);
        av_packet_unref(&packet);
        av_free_packet(&packet);
    }
}
Solution: it turns out the leak occurred whenever a packet came from a non-video stream (e.g. audio) -- the RGB frame and its buffer allocated for that iteration were never released. I also needed to release the frames that get skipped in the while loop of GrabAVIFrame().
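The second half of the fix (releasing frames skipped in the while loop) can be modelled without FFmpeg. A minimal sketch, with av_malloc/av_free replaced by a live-frame counter and all names hypothetical:

```cpp
#include <cassert>
#include <queue>

// Toy model of the GrabAVIFrame() skip loop. Allocation is tracked with a
// counter instead of av_malloc/av_free.
struct ToyFrame { double pts; };

static int live_frames = 0;

ToyFrame* alloc_toy_frame(double pts) { ++live_frames; return new ToyFrame{pts}; }
void free_toy_frame(ToyFrame* f) { --live_frames; delete f; }

// Dequeues every frame whose pts is behind the clock and frees all but the
// newest of them; without the free_toy_frame() call, each skipped frame leaks.
ToyFrame* drain_stale(std::queue<ToyFrame*>& q, double clock_pts) {
    ToyFrame* current = nullptr;
    while (!q.empty() && q.front()->pts < clock_pts) {
        if (current) free_toy_frame(current);  // release the skipped frame
        current = q.front();
        q.pop();
    }
    return current;
}
```

In the real code, "free" would be av_free(frame->data[0]) followed by av_frame_free(&frame) for every frame dequeued but not displayed.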
How about using a std::vector<uint8_t*> to hold the allocated buffer pointers, then iterating over the vector and freeing all of them once you are done? I have never used ffmpeg. - Peter VARGA
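The vector-of-pointers idea suggested above could be sketched like this; BufferPool is a hypothetical helper, not part of FFmpeg, and the real code would use av_malloc/av_free rather than malloc/free:

```cpp
#include <cassert>
#include <cstdint>
#include <cstdlib>
#include <vector>

// Remembers every buffer pointer handed out and frees them all in one pass.
class BufferPool {
public:
    uint8_t* allocate(size_t n) {
        uint8_t* p = static_cast<uint8_t*>(std::malloc(n));
        if (p != nullptr) buffers_.push_back(p);
        return p;
    }
    void release_all() {
        for (uint8_t* p : buffers_) std::free(p);
        buffers_.clear();
    }
    size_t outstanding() const { return buffers_.size(); }
    ~BufferPool() { release_all(); }  // nothing leaks past the pool's lifetime
private:
    std::vector<uint8_t*> buffers_;
};
```

The trade-off is that buffers stay alive until release_all() runs, so this suits batch teardown better than the per-frame release the accepted solution needs.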