Reversing audio on iOS with AVAssetWriter


I'm trying to reverse audio in iOS using AVAsset and AVAssetWriter. The following code works, but the output file is shorter than the input file. For example, the input file has a duration of 1:59, but the output is 1:50 with the same audio content.

- (void)reverse:(AVAsset *)asset
{
AVAssetReader* reader = [[AVAssetReader alloc] initWithAsset:asset error:nil];

AVAssetTrack* audioTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];

NSMutableDictionary* audioReadSettings = [NSMutableDictionary dictionary];
[audioReadSettings setValue:[NSNumber numberWithInt:kAudioFormatLinearPCM]
                     forKey:AVFormatIDKey];

AVAssetReaderTrackOutput* readerOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack outputSettings:audioReadSettings];
[reader addOutput:readerOutput];
[reader startReading];

NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                [NSNumber numberWithInt: kAudioFormatMPEG4AAC], AVFormatIDKey,
                                [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
                                [NSNumber numberWithInt:2], AVNumberOfChannelsKey,
                                [NSNumber numberWithInt:128000], AVEncoderBitRateKey,
                                [NSData data], AVChannelLayoutKey,
                                nil];

AVAssetWriterInput *writerInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeAudio
                                                                 outputSettings:outputSettings];

NSString *exportPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"out.m4a"];

NSURL *exportURL = [NSURL fileURLWithPath:exportPath];
NSError *writerError = nil;
AVAssetWriter *writer = [[AVAssetWriter alloc] initWithURL:exportURL
                                                  fileType:AVFileTypeAppleM4A
                                                     error:&writerError];
[writerInput setExpectsMediaDataInRealTime:NO];
[writer addInput:writerInput];
[writer startWriting];
[writer startSessionAtSourceTime:kCMTimeZero];

CMSampleBufferRef sample = [readerOutput copyNextSampleBuffer];
NSMutableArray *samples = [[NSMutableArray alloc] init];

while (sample != NULL) {

    sample = [readerOutput copyNextSampleBuffer];

    if (sample == NULL)
        continue;

    [samples addObject:(__bridge id)(sample)];
    CFRelease(sample);
}

NSArray* reversedSamples = [[samples reverseObjectEnumerator] allObjects];

for (id reversedSample in reversedSamples) {
    if (writerInput.readyForMoreMediaData)  {
        [writerInput appendSampleBuffer:(__bridge CMSampleBufferRef)(reversedSample)];
    }
    else {
        [NSThread sleepForTimeInterval:0.05];
    }
}

[writerInput markAsFinished];
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
dispatch_async(queue, ^{
    [writer finishWriting];
});
}

Update:

If I write the samples directly in the first while loop, everything works fine (even with the writerInput.readyForMoreMediaData check). In that case the resulting file has exactly the same duration as the original. But if I write the same samples from the reversed NSArray, the result is shorter.


Are all of the samples there? i.e. is the audio compressed, or truncated (missing samples)... in both of those cases the output would be shorter than the input. - ruoho ruotsi
Does this actually work? I'm trying the same code with video, but it seems the timing is built into the CMSampleBufferRef. So even if you append the frames in reverse order, it still plays back normally. - Andy Hin
It works for .m4a audio. - Sasha
@sx00 how did you solve this? I'm getting the same problem when trying this code. When the sound is reversed it seems to skip some parts. Like you said, if I don't reverse it the sound is flawless, but when I reverse the array the sound is wrong. NSArray* reversedSamples = [[samples reverseObjectEnumerator] allObjects]; - Sam B
4 Answers


The approach described here is implemented in the Xcode project linked below (a multi-platform SwiftUI app):

ReverseAudio Xcode Project

It is not sufficient to write the audio samples in reverse order; the sample data itself also needs to be reversed.

In Swift, we create an extension on AVAsset.

The samples need to be processed as uncompressed samples. To that end, create the audio reader settings with kAudioFormatLinearPCM:

let kAudioReaderSettings = [
    AVFormatIDKey: Int(kAudioFormatLinearPCM) as AnyObject,
    AVLinearPCMBitDepthKey: 16 as AnyObject,
    AVLinearPCMIsBigEndianKey: false as AnyObject,
    AVLinearPCMIsFloatKey: false as AnyObject,
    AVLinearPCMIsNonInterleaved: false as AnyObject]

Use our AVAsset extension method audioReader:
func audioReader(outputSettings: [String : Any]?) -> (audioTrack:AVAssetTrack?, audioReader:AVAssetReader?, audioReaderOutput:AVAssetReaderTrackOutput?) {
    
    if let audioTrack = self.tracks(withMediaType: .audio).first {
        if let audioReader = try? AVAssetReader(asset: self)  {
            let audioReaderOutput = AVAssetReaderTrackOutput(track: audioTrack, outputSettings: outputSettings)
            return (audioTrack, audioReader, audioReaderOutput)
        }
    }
    
    return (nil, nil, nil)
}

let (_, audioReader, audioReaderOutput) = self.audioReader(outputSettings: kAudioReaderSettings)

to create an audio reader (AVAssetReader) and audio reader output (AVAssetReaderTrackOutput) for reading the audio samples.
We need to keep track of the audio samples as we read them:
var audioSamples:[CMSampleBuffer] = []

Now start reading the samples:
// Add the reader output to the reader; AVAssetReader requires this
// before startReading() is called.
if audioReader.canAdd(audioReaderOutput) {
    audioReader.add(audioReaderOutput)
}

if audioReader.startReading() {
    while audioReader.status == .reading {
        if let sampleBuffer = audioReaderOutput.copyNextSampleBuffer() {
           // process sample
        }
    }
}

Save each audio sample buffer; we will need them later when creating the reversed samples:

audioSamples.append(sampleBuffer)

We need an AVAssetWriter:
guard let assetWriter = try? AVAssetWriter(outputURL: destinationURL, fileType: AVFileType.wav) else {
    // error handling
    return
}

The file type is 'wav' because the reversed samples will be written as uncompressed audio in the linear PCM format, as follows. For the assetWriter we specify audio compression settings and a 'source format hint', which we can acquire from an uncompressed sample buffer:
let sampleBuffer = audioSamples[0]
let sourceFormat = CMSampleBufferGetFormatDescription(sampleBuffer)

let audioCompressionSettings = [AVFormatIDKey: kAudioFormatLinearPCM] as [String : Any]

Now we can create the AVAssetWriterInput, add it to the writer and start writing:
let assetWriterInput = AVAssetWriterInput(mediaType: AVMediaType.audio, outputSettings:audioCompressionSettings, sourceFormatHint: sourceFormat)

assetWriter.add(assetWriterInput)

assetWriter.startWriting()
assetWriter.startSession(atSourceTime: CMTime.zero)

Now iterate over the samples in reverse order, and reverse each sample itself. We have an extension on CMSampleBuffer named 'reverse' that does just that. Using requestMediaDataWhenReady, we proceed as follows:
let nbrSamples = audioSamples.count
var index = 0

let serialQueue: DispatchQueue = DispatchQueue(label: "com.limit-point.reverse-audio-queue")
    
assetWriterInput.requestMediaDataWhenReady(on: serialQueue) {
        
    while assetWriterInput.isReadyForMoreMediaData, index < nbrSamples {
        let sampleBuffer = audioSamples[nbrSamples - 1 - index]
            
        if let reversedBuffer = sampleBuffer.reverse(), assetWriterInput.append(reversedBuffer) == true {
            index += 1
        }
        else {
            index = nbrSamples
        }
            
        if index == nbrSamples {
            assetWriterInput.markAsFinished()
            
            finishWriting() // call assetWriter.finishWriting, check assetWriter status, etc.
        }
    }
}
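
The finishWriting() helper at the end is left to the reader. A minimal sketch, assuming the assetWriter and destinationURL from the earlier snippets, might be:

func finishWriting() {
    assetWriter.finishWriting {
        // Check the final status: .completed means the reversed audio
        // was written to destinationURL successfully.
        if assetWriter.status == .completed {
            print("Reversed audio written to \(destinationURL)")
        }
        else {
            print("Write failed: \(String(describing: assetWriter.error))")
        }
    }
}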

The last thing to explain is how the audio samples are reversed in the 'reverse' method.
We create an extension on CMSampleBuffer that takes a sample buffer and returns the reversed sample buffer:
func reverse() -> CMSampleBuffer? 

The data that needs to be reversed is obtained using:
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer

The CMSampleBuffer header file describes this method as follows:

"Creates an AudioBufferList containing the data from the CMSampleBuffer, and a CMBlockBuffer which references (and manages the lifetime of) the data in that AudioBufferList."

Call it as follows, where 'self' is the CMSampleBuffer being reversed, since this is an extension:

var blockBuffer: CMBlockBuffer? = nil
let audioBufferList: UnsafeMutableAudioBufferListPointer = AudioBufferList.allocate(maximumBuffers: 1)

CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
    self,
    bufferListSizeNeededOut: nil,
    bufferListOut: audioBufferList.unsafeMutablePointer,
    bufferListSize: AudioBufferList.sizeInBytes(maximumBuffers: 1),
    blockBufferAllocator: nil,
    blockBufferMemoryAllocator: nil,
    flags: kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
    blockBufferOut: &blockBuffer
 )

Now you can access the raw data as follows:
// mBuffers.mData is optional; force-unwrapped here for brevity
let data: UnsafeMutableRawPointer = audioBufferList.unsafePointer.pointee.mBuffers.mData!

We need to reverse the data, so we access it as an array of 'samples' named sampleArray, done in Swift as follows:
let samples = data.assumingMemoryBound(to: Int16.self)
        
let sizeofInt16 = MemoryLayout<Int16>.size
let dataSize = audioBufferList.unsafePointer.pointee.mBuffers.mDataByteSize  

let dataCount = Int(dataSize) / sizeofInt16
        
var sampleArray = Array(UnsafeBufferPointer(start: samples, count: dataCount)) as [Int16]

Now reverse the array sampleArray:
sampleArray.reverse()

Using the reversed samples, we create a new CMSampleBuffer containing them. We now replace the data in the CMBlockBuffer we previously obtained with CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer.
First, reassign the 'samples' using the reversed array:
var status:OSStatus = noErr
        
sampleArray.withUnsafeBytes { sampleArrayPtr in
    if let baseAddress = sampleArrayPtr.baseAddress {
        let bufferPointer: UnsafePointer<Int16> = baseAddress.assumingMemoryBound(to: Int16.self)
        let rawPtr = UnsafeRawPointer(bufferPointer)
                
        status = CMBlockBufferReplaceDataBytes(with: rawPtr, blockBuffer: blockBuffer!, offsetIntoDestination: 0, dataLength: Int(dataSize))
    } 
}

if status != noErr {
    return nil
}

Finally, create the new sample buffer using CMSampleBufferCreate. This function needs two arguments that we can get from the original sample buffer, namely the formatDescription and numberOfSamples:
let formatDescription = CMSampleBufferGetFormatDescription(self)   
let numberOfSamples = CMSampleBufferGetNumSamples(self)
        
var newBuffer:CMSampleBuffer?
        

Now create the new sample buffer with the reversed blockBuffer:
guard CMSampleBufferCreate(allocator: kCFAllocatorDefault, dataBuffer: blockBuffer, dataReady: true, makeDataReadyCallback: nil, refcon: nil, formatDescription: formatDescription, sampleCount: numberOfSamples, sampleTimingEntryCount: 0, sampleTimingArray: nil, sampleSizeEntryCount: 0, sampleSizeArray: nil, sampleBufferOut: &newBuffer) == noErr else {
    return self
}
        
return newBuffer

And that's all there is to it!

As a final note, the Core Audio and AVFoundation headers provide a lot of useful information, e.g. CoreAudioTypes.h, CMSampleBuffer.h, etc.
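
A convenient way to validate the whole pipeline is a round trip: reversing the reversed audio should reproduce the original (the linked sample project tests itself this way, per the comments below). A hypothetical check on a single buffer, where originalBuffer stands in for a CMSampleBuffer of linear PCM:

// Round-trip sanity check: reversing twice should preserve the sample count
// (a full test would also compare the underlying PCM bytes).
if let once = originalBuffer.reverse(), let twice = once.reverse() {
    assert(CMSampleBufferGetNumSamples(twice) == CMSampleBufferGetNumSamples(originalBuffer))
}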


Has anyone made a working sample of the above example? Please? - AsifHabib
Since Xcode 11.4, the earlier code 'samples = UnsafeMutablePointer(mutating: sampleArray)' produces the warning 'Initialization of 'UnsafeMutablePointer<Int16>' results in a dangling pointer' and can lead to crashes. Use 'sampleArray.withUnsafeBytes' instead. - Joe Pagliaro
Added a link to the requested example project. The sample app tests itself by reversing already-reversed audio, which should reproduce the original. - Joe Pagliaro


A complete example of reversing video and audio with Swift 5, writing both into the same asset, with the audio processed according to the suggestions above:

 private func reverseVideo(inURL: URL, outURL: URL, queue: DispatchQueue, _ completionBlock: ((Bool)->Void)?) {
    Log.info("Start reverse video!")
    let asset = AVAsset.init(url: inURL)
    guard
        let reader = try? AVAssetReader.init(asset: asset),
        let videoTrack = asset.tracks(withMediaType: .video).first,
        let audioTrack = asset.tracks(withMediaType: .audio).first

        else {
            assert(false)
            completionBlock?(false)
            return
    }

    let width = videoTrack.naturalSize.width
    let height = videoTrack.naturalSize.height

    // Video reader
    let readerVideoSettings: [String : Any] = [ String(kCVPixelBufferPixelFormatTypeKey) : kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,]
    let readerVideoOutput = AVAssetReaderTrackOutput.init(track: videoTrack, outputSettings: readerVideoSettings)
    reader.add(readerVideoOutput)

    // Audio reader
    let readerAudioSettings: [String : Any] = [
        AVFormatIDKey: kAudioFormatLinearPCM,
        AVLinearPCMBitDepthKey: 16 ,
        AVLinearPCMIsBigEndianKey: false ,
        AVLinearPCMIsFloatKey: false,]
    let readerAudioOutput = AVAssetReaderTrackOutput.init(track: audioTrack, outputSettings: readerAudioSettings)
    reader.add(readerAudioOutput)

    //Start reading content
    reader.startReading()

    //Reading video samples
    var videoBuffers = [CMSampleBuffer]()
    while let nextBuffer = readerVideoOutput.copyNextSampleBuffer() {
        videoBuffers.append(nextBuffer)
    }

    //Reading audio samples
    var audioBuffers = [CMSampleBuffer]()
    var timingInfos = [CMSampleTimingInfo]()
    while let nextBuffer = readerAudioOutput.copyNextSampleBuffer() {

        var timingInfo = CMSampleTimingInfo()
        var timingInfoCount = CMItemCount()
        // Fetch this buffer's timing info (a single entry per buffer)
        CMSampleBufferGetSampleTimingInfoArray(nextBuffer, entryCount: 1, arrayToFill: &timingInfo, entriesNeededOut: &timingInfoCount)

        let duration = CMSampleBufferGetDuration(nextBuffer)
        let endTime = CMTimeAdd(timingInfo.presentationTimeStamp, duration)
        let newPresentationTime = CMTimeSubtract(duration, endTime)

        timingInfo.presentationTimeStamp = newPresentationTime

        timingInfos.append(timingInfo)
        audioBuffers.append(nextBuffer)
    }

    //Stop reading
    let status = reader.status
    reader.cancelReading()
    guard status == .completed, let firstVideoBuffer = videoBuffers.first, let firstAudioBuffer = audioBuffers.first else {
        assert(false)
        completionBlock?(false)
        return
    }

    //Start video time
    let sessionStartTime = CMSampleBufferGetPresentationTimeStamp(firstVideoBuffer)

    //Writer for video
    let writerVideoSettings: [String:Any] = [
        AVVideoCodecKey : AVVideoCodecType.h264,
        AVVideoWidthKey : width,
        AVVideoHeightKey: height,
    ]
    let writerVideoInput: AVAssetWriterInput
    if let formatDescription = videoTrack.formatDescriptions.last {
        writerVideoInput = AVAssetWriterInput.init(mediaType: .video, outputSettings: writerVideoSettings, sourceFormatHint: (formatDescription as! CMFormatDescription))
    } else {
        writerVideoInput = AVAssetWriterInput.init(mediaType: .video, outputSettings: writerVideoSettings)
    }
    writerVideoInput.transform = videoTrack.preferredTransform
    writerVideoInput.expectsMediaDataInRealTime = false

    //Writer for audio
    let writerAudioSettings: [String:Any] = [
        AVFormatIDKey : kAudioFormatMPEG4AAC,
        AVSampleRateKey : 44100,
        AVNumberOfChannelsKey: 2,
        AVEncoderBitRateKey:128000,
        AVChannelLayoutKey: NSData(),
    ]
    let sourceFormat = CMSampleBufferGetFormatDescription(firstAudioBuffer)
    let writerAudioInput: AVAssetWriterInput = AVAssetWriterInput.init(mediaType: .audio, outputSettings: writerAudioSettings, sourceFormatHint: sourceFormat)
    writerAudioInput.expectsMediaDataInRealTime = true

    guard
        let writer = try? AVAssetWriter.init(url: outURL, fileType: .mp4),
        writer.canAdd(writerVideoInput),
        writer.canAdd(writerAudioInput)
        else {
            assert(false)
            completionBlock?(false)
            return
    }

    let pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor.init(assetWriterInput: writerVideoInput, sourcePixelBufferAttributes: nil)
    let group = DispatchGroup.init()

    group.enter()
    writer.add(writerVideoInput)
    writer.add(writerAudioInput)
    writer.startWriting()
    writer.startSession(atSourceTime: sessionStartTime)

    var videoFinished = false
    var audioFinished = false

    //Write video samples in reverse order
    var currentSample = 0
    writerVideoInput.requestMediaDataWhenReady(on: queue) {
        for i in currentSample..<videoBuffers.count {
            currentSample = i
            if !writerVideoInput.isReadyForMoreMediaData {
                return
            }
            let presentationTime = CMSampleBufferGetPresentationTimeStamp(videoBuffers[i])
            guard let imageBuffer = CMSampleBufferGetImageBuffer(videoBuffers[videoBuffers.count - i - 1]) else {
                Log.info("VideoWriter reverseVideo: warning, could not get imageBuffer from SampleBuffer...")
                continue
            }
            if !pixelBufferAdaptor.append(imageBuffer, withPresentationTime: presentationTime) {
                Log.info("VideoWriter reverseVideo: warning, could not append imageBuffer...")
            }
        }

        // finish write video samples
        writerVideoInput.markAsFinished()
        Log.info("Video writing finished!")
        videoFinished = true
        if(audioFinished){
            group.leave()
        }
    }
    //Write audio samples in reverse order
    let totalAudioSamples = audioBuffers.count
    var currentAudioSample = 0
    writerAudioInput.requestMediaDataWhenReady(on: queue) {
        // Resume from currentAudioSample if the input was not ready
        // and requestMediaDataWhenReady re-invokes this block.
        for i in currentAudioSample..<totalAudioSamples {
            currentAudioSample = i
            if !writerAudioInput.isReadyForMoreMediaData {
                return
            }
            let audioSample = audioBuffers[totalAudioSamples-1-i]
            let timingInfo = timingInfos[i]
            // reverse samples data using timing info
            if let reversedBuffer = audioSample.reverse(timingInfo: [timingInfo]) {
                // append data
                if writerAudioInput.append(reversedBuffer) == false {
                    break
                }
            }
        }

        // finish
        writerAudioInput.markAsFinished()
        Log.info("Audio writing finished!")
        audioFinished = true
        if(videoFinished){
            group.leave()
        }
    }

    group.notify(queue: queue) {
        writer.finishWriting {
            if writer.status != .completed {
                Log.info("VideoWriter reverse video: error - \(String(describing: writer.error))")
                completionBlock?(false)
            } else {
                Log.info("Ended reverse video!")
                completionBlock?(true)
            }
        }
    }
}
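
A hypothetical call site for the method above (the file URLs and queue label are placeholders):

let inURL = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("in.mp4")
let outURL = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("reversed.mp4")
let workQueue = DispatchQueue(label: "com.example.reverse-queue")

reverseVideo(inURL: inURL, outURL: outURL, queue: workQueue) { success in
    print("Reverse finished: \(success)")
}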

Happy coding!


Hi! This line -- if let reversedBuffer = audioSample.reverse(timingInfo: [timingInfo]) -- gives an error: Value of type 'CMSampleBuffer' has no member 'reverse'. Please advise. Thanks. - Donovan Marsh


Print out the number of samples in each buffer in your 'reading' readerOutput while loop, and again in your 'writing' writerInput for loop. That way you can see all the buffer sizes and check whether they add up.
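
In Swift terms (the question's code is Objective-C, but the bookkeeping is the same), that check might look like this, where readerOutput stands in for the question's AVAssetReaderTrackOutput:

// Tally per-buffer sample counts while reading; repeat the same tally in the
// writing loop and compare the totals.
var readCounts: [CMItemCount] = []
while let buffer = readerOutput.copyNextSampleBuffer() {
    readCounts.append(CMSampleBufferGetNumSamples(buffer))
}
print("Read \(readCounts.count) buffers, \(readCounts.reduce(0, +)) samples total")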

For example, if writerInput.readyForMoreMediaData is false, you 'sleep', but the loop then moves on to the next reversedSample in reversedSamples (that buffer is effectively dropped and never appended to the writerInput).

Update (based on the comments): I found two issues in the code:

  1. The output settings are incorrect: the input file is mono (1 channel), but the output settings are configured for 2 channels. It should be: [NSNumber numberWithInt:1], AVNumberOfChannelsKey. Look at the info for the output and input files:

(screenshots: file info for the input and output files)

  2. The second issue is that you are reversing 643 buffers of 8192 audio samples each, instead of reversing the index of each audio sample. To see each buffer, I changed your debugging to look at the size of each buffer (8192) instead of the size of each sample. So line 76 now reads: size_t sampleSize = CMSampleBufferGetNumSamples(sample);

The output then looks like this:

2015-03-19 22:26:28.171 audioReverse[25012:4901250] Reading [0]: 8192
2015-03-19 22:26:28.172 audioReverse[25012:4901250] Reading [1]: 8192
...
2015-03-19 22:26:28.651 audioReverse[25012:4901250] Reading [640]: 8192
2015-03-19 22:26:28.651 audioReverse[25012:4901250] Reading [641]: 8192
2015-03-19 22:26:28.651 audioReverse[25012:4901250] Reading [642]: 5056


2015-03-19 22:26:28.651 audioReverse[25012:4901250] Writing [0]: 5056
2015-03-19 22:26:28.652 audioReverse[25012:4901250] Writing [1]: 8192
...
2015-03-19 22:26:29.134 audioReverse[25012:4901250] Writing [640]: 8192
2015-03-19 22:26:29.135 audioReverse[25012:4901250] Writing [641]: 8192
2015-03-19 22:26:29.135 audioReverse[25012:4901250] Writing [642]: 8192

This shows that you are reversing the order of the 8192-sample buffers, but within each buffer the audio is still "facing forward". We can see this in a screenshot I took of a correct (sample-by-sample) reversal versus your buffer reversal: (screenshot omitted). I think your current scheme can work if you also reverse each sample of every 8192-sample buffer. Personally, I would not recommend using NSArray enumerators for signal processing, but it can work if you operate at the sample level.
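
Conceptually, the fix is a two-level reversal, sketched here in Swift with toy data:

// Reverse the chunk order AND the samples inside each chunk.
let chunks: [[Int16]] = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
let fullyReversed = chunks.reversed().map { Array($0.reversed()) }
// fullyReversed == [[9, 8, 7], [6, 5, 4], [3, 2, 1]]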

I printed the sizes of all the buffers using CMSampleBufferGetSampleSize and they are all the same - 2. Then I checked the number of buffers in the two loops; it's also the same for my sample file - 643. - Sasha
If I write the samples directly in the first while loop, everything works fine (even with the writerInput.readyForMoreMediaData check). In that case the resulting file has exactly the same duration as the original. But if I write the same samples from the reversed NSArray, the result is shorter. - Sasha
Hmm... CMSampleBufferGetSampleSize returns 2? That seems small. Do you have a runnable project, e.g. on GitHub? - ruoho ruotsi
@ruohoruotsi So what is the solution to this problem? I'm running the code above and the sound stutters when it's reversed. I've read your answer several times and, although it's marked as accepted, I don't know what to change in the code sx00 provided. - Sam B
I didn't add code to reverse each buffer. Think of it like this: the audio is divided into contiguous chunks of 8192 samples, and the chunks, say 1, 2, 3, are reversed into 3, 2, 1, but all the individual 8192 samples inside each chunk still face forward. That's why it sounds choppy: it's just reading the buffers back-to-front, and the samples within each chunk must also be reversed. This is also why, for performance reasons, I recommend against NSArray for audio processing. See here: http://stackoverflow.com/questions/6593118/how-to-reverse-an-audio-file Hope this helps. - ruoho ruotsi

extension CMSampleBuffer {

func reverse(timingInfo:[CMSampleTimingInfo]) -> CMSampleBuffer? {
    var blockBuffer: CMBlockBuffer? = nil
    let audioBufferList: UnsafeMutableAudioBufferListPointer = AudioBufferList.allocate(maximumBuffers: 1)

    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
        self,
        bufferListSizeNeededOut: nil,
        bufferListOut: audioBufferList.unsafeMutablePointer,
        bufferListSize: AudioBufferList.sizeInBytes(maximumBuffers: 1),
        blockBufferAllocator: nil,
        blockBufferMemoryAllocator: nil,
        flags: kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
        blockBufferOut: &blockBuffer
     )
    
    if let data = audioBufferList.unsafePointer.pointee.mBuffers.mData {
    
        let samples = data.assumingMemoryBound(to: Int16.self)

        let sizeofInt16 = MemoryLayout<Int16>.size
        let dataSize = audioBufferList.unsafePointer.pointee.mBuffers.mDataByteSize

        let dataCount = Int(dataSize) / sizeofInt16

        var sampleArray = Array(UnsafeBufferPointer(start: samples, count: dataCount)) as [Int16]
        
        sampleArray.reverse()
        
        var status:OSStatus = noErr
                
        sampleArray.withUnsafeBytes { sampleArrayPtr in
            if let baseAddress = sampleArrayPtr.baseAddress {
                let bufferPointer: UnsafePointer<Int16> = baseAddress.assumingMemoryBound(to: Int16.self)
                let rawPtr = UnsafeRawPointer(bufferPointer)
                        
                status = CMBlockBufferReplaceDataBytes(with: rawPtr, blockBuffer: blockBuffer!, offsetIntoDestination: 0, dataLength: Int(dataSize))
            }
        }

        if status != noErr {
            return nil
        }
        
        let formatDescription = CMSampleBufferGetFormatDescription(self)
        let numberOfSamples = CMSampleBufferGetNumSamples(self)

        var newBuffer:CMSampleBuffer?
        
        guard CMSampleBufferCreate(allocator: kCFAllocatorDefault, dataBuffer: blockBuffer, dataReady: true, makeDataReadyCallback: nil, refcon: nil, formatDescription: formatDescription, sampleCount: numberOfSamples, sampleTimingEntryCount: timingInfo.count, sampleTimingArray: timingInfo, sampleSizeEntryCount: 0, sampleSizeArray: nil, sampleBufferOut: &newBuffer) == noErr else {
            return self
        }

        return newBuffer
    }
    return nil
}
}

The missing function!
