How do I decode AAC audio buffers to PCM buffers in iOS?

3

I want to decode AAC audio to PCM audio in iOS. What is the best way to do this? Any sample code would be very helpful. Is there a simple API for this?


Hey, did you ever get this done? Thanks. - Pablo Martinez
2 Answers

13

I have sample code that does this.

First, you should configure the input and output ASBDs (AudioStreamBasicDescription), then create the converter:

- (void)setupAudioConverter{
    AudioStreamBasicDescription outFormat;
    memset(&outFormat, 0, sizeof(outFormat));
    outFormat.mSampleRate       = 44100;
    outFormat.mFormatID         = kAudioFormatLinearPCM;
    outFormat.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger;
    outFormat.mBytesPerPacket   = 2;
    outFormat.mFramesPerPacket  = 1;
    outFormat.mBytesPerFrame    = 2;
    outFormat.mChannelsPerFrame = 1;
    outFormat.mBitsPerChannel   = 16;
    outFormat.mReserved         = 0;


    AudioStreamBasicDescription inFormat;
    memset(&inFormat, 0, sizeof(inFormat));
    inFormat.mSampleRate        = 44100;
    inFormat.mFormatID          = kAudioFormatMPEG4AAC;
    inFormat.mFormatFlags       = kMPEG4Object_AAC_LC;
    inFormat.mBytesPerPacket    = 0;
    inFormat.mFramesPerPacket   = 1024;
    inFormat.mBytesPerFrame     = 0;
    inFormat.mChannelsPerFrame  = 1;
    inFormat.mBitsPerChannel    = 0;
    inFormat.mReserved          = 0;

    OSStatus status =  AudioConverterNew(&inFormat, &outFormat, &_audioConverter);
    if (status != 0) {
        printf("setup converter error, status: %i\n", (int)status);
    }
}

Then, you should create the input data callback for the audio converter (note that the const_cast used below means this file must be compiled as Objective-C++, i.e. have a .mm extension):

struct PassthroughUserData {
    UInt32 mChannels;
    UInt32 mDataSize;
    const void* mData;
    AudioStreamPacketDescription mPacket;
};

// kNoMoreDataErr is not a system constant; it is a custom sentinel status the
// callback returns once the single input packet has been consumed. Any value
// that does not collide with Core Audio's own OSStatus codes works.
static const OSStatus kNoMoreDataErr = -2222;


OSStatus inInputDataProc(AudioConverterRef aAudioConverter,
                         UInt32* aNumDataPackets /* in/out */,
                         AudioBufferList* aData /* in/out */,
                         AudioStreamPacketDescription** aPacketDesc,
                         void* aUserData)
{

    PassthroughUserData* userData = (PassthroughUserData*)aUserData;
    if (!userData->mDataSize) {
        *aNumDataPackets = 0;
        return kNoMoreDataErr;
    }

    if (aPacketDesc) {
        userData->mPacket.mStartOffset = 0;
        userData->mPacket.mVariableFramesInPacket = 0;
        userData->mPacket.mDataByteSize = userData->mDataSize;
        *aPacketDesc = &userData->mPacket;
    }

    aData->mBuffers[0].mNumberChannels = userData->mChannels;
    aData->mBuffers[0].mDataByteSize = userData->mDataSize;
    aData->mBuffers[0].mData = const_cast<void*>(userData->mData);

    // No more data to provide following this run.
    userData->mDataSize = 0;

    return noErr;
}

The frame-decoding method:

- (void)decodeAudioFrame:(NSData *)frame withPts:(NSInteger)pts{
    if(!_audioConverter){
        [self setupAudioConverter];
    }

    PassthroughUserData userData = { 1, (UInt32)frame.length, [frame bytes]};
    NSMutableData *decodedData = [NSMutableData new];

    const uint32_t MAX_AUDIO_FRAMES = 128;
    const uint32_t maxDecodedSamples = MAX_AUDIO_FRAMES * 1; // frames * channel count (mono here)

    do{
        uint8_t *buffer = (uint8_t *)malloc(maxDecodedSamples * sizeof(short int));
        AudioBufferList decBuffer;
        decBuffer.mNumberBuffers = 1;
        decBuffer.mBuffers[0].mNumberChannels = 1;
        decBuffer.mBuffers[0].mDataByteSize = maxDecodedSamples * sizeof(short int);
        decBuffer.mBuffers[0].mData = buffer;

        UInt32 numFrames = MAX_AUDIO_FRAMES;

        AudioStreamPacketDescription outPacketDescription;
        memset(&outPacketDescription, 0, sizeof(AudioStreamPacketDescription));
        outPacketDescription.mDataByteSize = MAX_AUDIO_FRAMES;
        outPacketDescription.mStartOffset = 0;
        outPacketDescription.mVariableFramesInPacket = 0;

        OSStatus rv = AudioConverterFillComplexBuffer(_audioConverter,
                                                      inInputDataProc,
                                                      &userData,
                                                      &numFrames /* in/out */,
                                                      &decBuffer,
                                                      &outPacketDescription);

        if (rv && rv != kNoMoreDataErr) {
            NSLog(@"Error decoding audio stream: %d\n", (int)rv);
            free(buffer);
            break;
        }

        if (numFrames) {
            [decodedData appendBytes:decBuffer.mBuffers[0].mData length:decBuffer.mBuffers[0].mDataByteSize];
        }

        free(buffer); // release the scratch buffer allocated at the top of each pass

        if (rv == kNoMoreDataErr) {
            break;
        }

    } while (true);

    //void *pData = (void *)[decodedData bytes];
    //audioRenderer->Render(&pData, decodedData.length, pts);
}

1
If the AAC data has an ADTS header, you need to skip past that header. - djunod
I'm running into a "kAudioConverterErr_RequiresPacketDescriptionsError"; any help? - Bhuvanendra Pratap Maurya
If I feed in 2 AAC CPE packets, the second packet is not processed. I can't find out how many bytes of the first packet were consumed. - user1418067
I don't understand where frame comes from. I'm currently reading an AAC file to get the data, but this approach gives me an error. - undefined

0

1
There's a fair amount of tedious code to write, but let me recommend the book Learning Core Audio by Chris Adamson and Kevin Avila. Their sample code is on GitHub: https://github.com/abbood/Learning-Core-Audio-Book-Code-Sample - user3821934
