I am trying to capture audio and stream it to an RTMP server. I work on macOS (with Xcode), so I use the AVFoundation framework to capture audio sample buffers. For encoding and streaming, however, I need to use the ffmpeg API with the libfaac encoder, so the output format must be AAC (to support stream playback on iOS devices).
I ran into the following problem: the audio capture device (in my case a Logitech camera) gives me sample buffers containing 512 LPCM samples, and I can choose an input sample rate of 16000, 24000, 36000, or 48000 Hz. When I feed these 512 samples to the AAC encoder (configured for the matching sample rate), the audio comes out slow and stuttering (it sounds as if there were a short silence after every frame).
I figured out (maybe I am wrong) that the libfaac encoder only accepts audio frames of 1024 samples. When I set the input sample rate to 24000 and resample the input buffer to 48000 before encoding, I get exactly 1024 resampled samples. Encoding those 1024 samples to AAC produces proper sound on the output. But my webcam produces 512 samples per buffer for any input sample rate, while the output sample rate must be 48000 Hz. So resampling is needed in any case, and after resampling the buffers will not contain exactly 1024 samples. Is there a way to solve this within the ffmpeg API? Any help is much appreciated.
PS: I suppose I could accumulate resampled buffers until the sample count reaches 1024 and then encode them, but since this is a stream, there would be problems with timestamps and with other input devices, so such a solution does not fit.
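One common way to decouple the capture buffer size from the encoder frame size is exactly that accumulation, but with the PTS derived from a running sample counter rather than from the capture timestamps, so the timestamps stay monotonic regardless of buffer boundaries. ffmpeg provides such a FIFO (`av_audio_fifo_*`), but the bookkeeping is plain enough to sketch standalone (hypothetical names, mono S16 for brevity):

```c
#include <stdint.h>
#include <string.h>

#define FRAME_SIZE 1024               /* samples per AAC frame */
#define FIFO_CAP   (FRAME_SIZE * 8)   /* room for several capture buffers */

/* Minimal mono S16 sample FIFO with a running sample counter for PTS. */
typedef struct {
    int16_t buf[FIFO_CAP];
    int     count;          /* samples currently queued */
    int64_t samples_read;   /* total samples popped so far -> PTS base */
} SampleFifo;

/* Queue resampled samples as they come out of swr_convert(). */
static void fifo_push(SampleFifo *f, const int16_t *s, int n)
{
    memcpy(f->buf + f->count, s, n * sizeof(int16_t));
    f->count += n;
}

/* Pop one full encoder frame; returns its PTS in samples (i.e. in a
 * 1/sample_rate time base), or -1 if fewer than FRAME_SIZE are queued. */
static int64_t fifo_pop_frame(SampleFifo *f, int16_t *frame)
{
    if (f->count < FRAME_SIZE)
        return -1;
    memcpy(frame, f->buf, FRAME_SIZE * sizeof(int16_t));
    memmove(f->buf, f->buf + FRAME_SIZE,
            (f->count - FRAME_SIZE) * sizeof(int16_t));
    f->count -= FRAME_SIZE;
    int64_t pts = f->samples_read;
    f->samples_read += FRAME_SIZE;
    return pts;
}
```

With the stream time base set to {1, sample_rate} as in the configuration below, the returned sample count can be used directly as the frame PTS, so timestamps stay continuous even though encoder frames no longer line up with capture buffers. This is a sketch of the idea, not a drop-in fix.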
The current problem stems from the issue described in [this question]: how to fill an audio AVFrame (ffmpeg) with data obtained from a CMSampleBufferRef (AVFoundation)?
Here is the code with the audio codec configuration (there is also a video stream, but the video works fine):
/* Global variables */
static AVFrame *aframe;
static AVFrame *frame;
AVOutputFormat *fmt;
AVFormatContext *oc;
AVStream *audio_st, *video_st;

void Init()
{
    AVCodec *audio_codec, *video_codec;
    int ret;

    avcodec_register_all();
    av_register_all();
    avformat_network_init();

    avformat_alloc_output_context2(&oc, NULL, "flv", filename);
    fmt = oc->oformat;
    oc->oformat->video_codec = AV_CODEC_ID_H264;
    oc->oformat->audio_codec = AV_CODEC_ID_AAC;
    video_st = NULL;
    audio_st = NULL;

    if (fmt->video_codec != AV_CODEC_ID_NONE) {
        // ... /* init video codec */
    }

    if (fmt->audio_codec != AV_CODEC_ID_NONE) {
        audio_codec = avcodec_find_encoder(fmt->audio_codec);
        if (!audio_codec) {
            fprintf(stderr, "Could not find encoder for '%s'\n",
                    avcodec_get_name(fmt->audio_codec));
            exit(1);
        }
        audio_st = avformat_new_stream(oc, audio_codec);
        if (!audio_st) {
            fprintf(stderr, "Could not allocate stream\n");
            exit(1);
        }
        audio_st->id = oc->nb_streams - 1;

        // AAC:
        audio_st->codec->sample_fmt  = AV_SAMPLE_FMT_S16;
        audio_st->codec->bit_rate    = 32000;
        audio_st->codec->sample_rate = 48000;
        audio_st->codec->profile     = FF_PROFILE_AAC_LOW;
        audio_st->time_base = (AVRational){1, audio_st->codec->sample_rate};
        audio_st->codec->channels       = 1;
        audio_st->codec->channel_layout = AV_CH_LAYOUT_MONO;
        if (oc->oformat->flags & AVFMT_GLOBALHEADER)
            audio_st->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
    }

    if (video_st) {
        // ...
        /* prepare video */
    }

    if (audio_st) {
        aframe = avcodec_alloc_frame();
        if (!aframe) {
            fprintf(stderr, "Could not allocate audio frame\n");
            exit(1);
        }
        AVCodecContext *c = audio_st->codec;
        ret = avcodec_open2(c, audio_codec, NULL);
        if (ret < 0) {
            fprintf(stderr, "Could not open audio codec: %s\n", av_err2str(ret));
            exit(1);
        }
        // ...
    }
}
Resampling and encoding the audio:
if (mType == kCMMediaType_Audio)
{
    CMSampleTimingInfo timing_info;
    CMSampleBufferGetSampleTimingInfo(sampleBuffer, 0, &timing_info);
    double pts = 0;
    AVCodecContext *c;
    AVPacket pkt = { 0 }; // data and size must be 0
    int got_packet, ret;
    av_init_packet(&pkt);
    c = audio_st->codec;

    // Get a pointer to the raw LPCM samples in the CMSampleBuffer
    CMItemCount numSamples = CMSampleBufferGetNumSamples(sampleBuffer);
    NSUInteger channelIndex = 0;
    CMBlockBufferRef audioBlockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
    size_t audioBlockBufferOffset = channelIndex * numSamples * sizeof(SInt16);
    size_t lengthAtOffset = 0;
    size_t totalLength = 0;
    SInt16 *samples = NULL;
    CMBlockBufferGetDataPointer(audioBlockBuffer, audioBlockBufferOffset,
                                &lengthAtOffset, &totalLength, (char **)&samples);
    const AudioStreamBasicDescription *audioDescription =
        CMAudioFormatDescriptionGetStreamBasicDescription(CMSampleBufferGetFormatDescription(sampleBuffer));

    // Configure the resampler: device format -> encoder format
    SwrContext *swr = swr_alloc();
    int in_smprt = (int)audioDescription->mSampleRate;
    av_opt_set_int(swr, "in_channel_layout", AV_CH_LAYOUT_MONO, 0);
    av_opt_set_int(swr, "out_channel_layout", audio_st->codec->channel_layout, 0);
    av_opt_set_int(swr, "in_channel_count", audioDescription->mChannelsPerFrame, 0);
    av_opt_set_int(swr, "out_channel_count", audio_st->codec->channels, 0);
    av_opt_set_int(swr, "in_sample_rate", audioDescription->mSampleRate, 0);
    av_opt_set_int(swr, "out_sample_rate", audio_st->codec->sample_rate, 0);
    av_opt_set_sample_fmt(swr, "in_sample_fmt", AV_SAMPLE_FMT_S16, 0);
    av_opt_set_sample_fmt(swr, "out_sample_fmt", audio_st->codec->sample_fmt, 0);
    swr_init(swr);

    uint8_t **input = NULL;
    int src_linesize;
    int in_samples = (int)numSamples;
    ret = av_samples_alloc_array_and_samples(&input, &src_linesize,
                                             audioDescription->mChannelsPerFrame,
                                             in_samples, AV_SAMPLE_FMT_S16P, 0);
    *input = (uint8_t *)samples;

    uint8_t *output = NULL;
    int out_samples = av_rescale_rnd(swr_get_delay(swr, in_smprt) + in_samples,
                                     (int)audio_st->codec->sample_rate, in_smprt,
                                     AV_ROUND_UP);
    av_samples_alloc(&output, NULL, audio_st->codec->channels, out_samples,
                     audio_st->codec->sample_fmt, 0);
    out_samples = swr_convert(swr, &output, out_samples,
                              (const uint8_t **)input, in_samples);

    // Wrap the resampled data in an AVFrame
    aframe->nb_samples = out_samples;
    ret = avcodec_fill_audio_frame(aframe, audio_st->codec->channels,
                                   audio_st->codec->sample_fmt,
                                   (uint8_t *)output,
                                   out_samples *
                                   av_get_bytes_per_sample(audio_st->codec->sample_fmt) *
                                   audio_st->codec->channels, 1);
    aframe->channel_layout = audio_st->codec->channel_layout;
    aframe->channels = audio_st->codec->channels;
    aframe->sample_rate = audio_st->codec->sample_rate;

    // Derive the frame PTS from the CMSampleBuffer timing info
    if (timing_info.presentationTimeStamp.timescale != 0)
        pts = (double)timing_info.presentationTimeStamp.value /
              timing_info.presentationTimeStamp.timescale;
    aframe->pts = pts * audio_st->time_base.den;
    aframe->pts = av_rescale_q(aframe->pts, audio_st->time_base, audio_st->codec->time_base);

    ret = avcodec_encode_audio2(c, &pkt, aframe, &got_packet);
    if (ret < 0) {
        fprintf(stderr, "Error encoding audio frame: %s\n", av_err2str(ret));
        exit(1);
    }
    swr_free(&swr);

    if (got_packet) {
        pkt.stream_index = audio_st->index;
        pkt.pts = av_rescale_q(pkt.pts, audio_st->codec->time_base, audio_st->time_base);
        pkt.dts = av_rescale_q(pkt.dts, audio_st->codec->time_base, audio_st->time_base);
        // Write the compressed frame to the media file
        ret = av_interleaved_write_frame(oc, &pkt);
        if (ret != 0) {
            fprintf(stderr, "Error while writing audio frame: %s\n",
                    av_err2str(ret));
            exit(1);
        }
    }
}