Real-time frequency modulation with AVAudioEngine

I want to modify the input signal in real time and send it to the iOS device's speaker. I have read that AVAudioEngine can be used for this kind of task, but I can't find documentation or examples for what I'm trying to achieve.
For testing, I did the following:
audioEngine = AVAudioEngine()

let unitEffect = AVAudioUnitReverb()
unitEffect.wetDryMix = 50

audioEngine.attach(unitEffect)

audioEngine.connect(audioEngine.inputNode, to: unitEffect, format: nil)
audioEngine.connect(unitEffect, to: audioEngine.outputNode, format: nil)

audioEngine.prepare()

When a button is pressed, I simply run:
do {
    try audioEngine.start()
} catch {
    print(error)
}

or audioEngine.stop().

The reverb effect is applied to the signal and I can hear it working. Now I'd like to get rid of the reverb and instead:

  1. Modulate the input signal, e.g. invert it, modulate its frequency, and so on. Is there a collection of effects I can use, or a way to modulate the frequency mathematically?
  2. When I run this on an iOS device I get the reverb effect, but the output only comes out of the top (earpiece) speaker, not the loud speaker at the bottom. How can I change that?
1 Answer

This GitHub repo solves the problem: https://github.com/dave234/AppleAudioUnit. Just add BufferedAudioUnit from there to your project, then subclass it with your implementation, like this:
AudioProcessingUnit.h:
#import "BufferedAudioUnit.h"

@interface AudioProcessingUnit : BufferedAudioUnit

@end

AudioProcessingUnit.m:

#import "AudioProcessingUnit.h"

@implementation AudioProcessingUnit

-(ProcessEventsBlock)processEventsBlock:(AVAudioFormat *)format {

    return ^(AudioBufferList       *inBuffer,
             AudioBufferList       *outBuffer,
             const AudioTimeStamp  *timestamp,
             AVAudioFrameCount     frameCount,
             const AURenderEvent   *realtimeEventListHead) {

        for (int i = 0; i < inBuffer->mNumberBuffers; i++) {

            float *buffer = (float *)inBuffer->mBuffers[i].mData;
            // mDataByteSize counts bytes, not samples; iterate over
            // float samples, not bytes.
            UInt32 sampleCount = inBuffer->mBuffers[i].mDataByteSize / sizeof(float);
            for (UInt32 j = 0; j < sampleCount; j++) {
                buffer[j] = /*process it here*/;
            }

            memcpy(outBuffer->mBuffers[i].mData, inBuffer->mBuffers[i].mData, inBuffer->mBuffers[i].mDataByteSize);
        }
    };
}

@end

And in your AVAudioEngine setup:

let audioComponentDescription = AudioComponentDescription(
    componentType: kAudioUnitType_Effect,
    componentSubType: kAudioUnitSubType_VoiceProcessingIO,
    componentManufacturer: 0x0,
    componentFlags: 0,
    componentFlagsMask: 0
)

AUAudioUnit.registerSubclass(
    AudioProcessingUnit.self,
    as: audioComponentDescription,
    name: "AudioProcessingUnit",
    version: 1
)

AVAudioUnit.instantiate(
    with: audioComponentDescription,
    options: .init(rawValue: 0)
) { (audioUnit, error) in
    guard let audioUnit = audioUnit else {
        NSLog("Audio unit is NULL")
        return
    }

    let formatHardwareInput = self.engine.inputNode.inputFormat(forBus: 0)

    self.engine.attach(audioUnit)
    self.engine.connect(
        self.engine.inputNode,
        to: audioUnit,
        format: formatHardwareInput
    )
    self.engine.connect(
        audioUnit,
        to: self.engine.outputNode,
        format: formatHardwareInput
    )
}
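As for question 2 (audio coming out of the earpiece instead of the bottom loudspeaker): that is controlled by AVAudioSession routing, not by AVAudioEngine itself. A minimal sketch, run before starting the engine:

let session = AVAudioSession.sharedInstance()
do {
    // .playAndRecord defaults to the earpiece; .defaultToSpeaker
    // routes output to the bottom (loud) speaker instead.
    try session.setCategory(.playAndRecord, options: [.defaultToSpeaker])
    try session.setActive(true)
} catch {
    print("Audio session setup failed: \(error)")
}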

Content provided by Stack Overflow; the English original is available via the source link.