I am trying to implement a real-time FFT on iOS, using the Accelerate framework. Here is my Swift code:
class FFT {
    private var fftSetup: FFTSetup?
    private var log2n: vDSP_Length = 0
    private var length: Int = 0

    func initialize(count: Int) {
        length = count
        log2n = vDSP_Length(log2(Float(count)))
        fftSetup = vDSP_create_fftsetup(log2n, FFTRadix(kFFTRadix2))
    }

    func computeFFT(input: [Float]) -> ([Float], [Float]) {
        // vDSP's real FFT uses a packed format with N/2 complex slots:
        // the DC term ends up in real[0] and the Nyquist term in imag[0].
        let halfLength = input.count / 2
        var real = [Float](repeating: 0.0, count: halfLength)
        var imag = [Float](repeating: 0.0, count: halfLength)

        real.withUnsafeMutableBufferPointer { realPtr in
            imag.withUnsafeMutableBufferPointer { imagPtr in
                var splitComplex = DSPSplitComplex(realp: realPtr.baseAddress!,
                                                  imagp: imagPtr.baseAddress!)
                // De-interleave the input across the real and imaginary
                // arrays of the DSPSplitComplex structure.
                input.withUnsafeBytes {
                    vDSP_ctoz($0.bindMemory(to: DSPComplex.self).baseAddress!, 2,
                              &splitComplex, 1, vDSP_Length(halfLength))
                }
                // Even though there are only N/2 complex output slots, the
                // FFT still processes all N input samples.
                vDSP_fft_zrip(fftSetup!, &splitComplex, 1, log2n,
                              FFTDirection(FFT_FORWARD))
                // zrip results are 2x the standard FFT and need to be scaled.
                var scaleFactor = Float(1.0 / 2.0)
                vDSP_vsmul(splitComplex.realp, 1, &scaleFactor,
                           splitComplex.realp, 1, vDSP_Length(halfLength))
                vDSP_vsmul(splitComplex.imagp, 1, &scaleFactor,
                           splitComplex.imagp, 1, vDSP_Length(halfLength))
            }
        }
        return (real, imag)
    }

    func computeIFFT(real: [Float], imag: [Float]) -> [Float] {
        var real = real
        var imag = imag
        var result = [Float](repeating: 0.0, count: length)

        real.withUnsafeMutableBufferPointer { realPtr in
            imag.withUnsafeMutableBufferPointer { imagPtr in
                var splitComplex = DSPSplitComplex(realp: realPtr.baseAddress!,
                                                  imagp: imagPtr.baseAddress!)
                vDSP_fft_zrip(fftSetup!, &splitComplex, 1, log2n,
                              FFTDirection(FFT_INVERSE))
                // Re-interleave the split-complex data into a real array.
                result.withUnsafeMutableBytes {
                    vDSP_ztoc(&splitComplex, 1,
                              $0.bindMemory(to: DSPComplex.self).baseAddress!, 2,
                              vDSP_Length(length / 2))
                }
            }
        }
        // Neither the forward nor the inverse FFT does any scaling.
        // Compensate here so that computeIFFT(computeFFT(x)) == x.
        var scale = 1.0 / Float(length)
        var scaled = [Float](repeating: 0.0, count: length)
        vDSP_vsmul(result, 1, &scale, &scaled, 1, vDSP_Length(length))
        return scaled
    }

    func deinitialize() {
        vDSP_destroy_fftsetup(fftSetup)
    }
}
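One detail worth keeping in mind when comparing this output against NumPy: vDSP's real forward FFT returns a packed, 2x-scaled spectrum with N/2 complex slots, whereas np.fft.rfft returns N/2 + 1 unscaled values with DC and Nyquist in their own slots. A minimal NumPy sketch of that convention (the length-8 ramp signal is just an illustrative assumption, not from my pipeline):

```python
import numpy as np

# Sketch (an assumption, not Apple's code): emulate vDSP_fft_zrip's packed
# forward output from np.fft.rfft for a power-of-two-length signal.
x = np.arange(8, dtype=np.float32)

full = np.fft.rfft(x)   # N/2 + 1 values: DC, ..., Nyquist (both purely real)

# vDSP keeps only N/2 complex slots, scaled by 2, and packs the Nyquist
# term into the imaginary part of slot 0 (which is otherwise unused).
packed = 2.0 * full[:-1]
packed[0] = 2.0 * (full[0].real + 1j * full[-1].real)

real_parts = packed.real.astype(np.float32)   # what vDSP's realp would hold
imag_parts = packed.imag.astype(np.float32)   # what vDSP's imagp would hold
```

So even when the magnitudes agree, slot 0's imaginary part will look "completely different" from NumPy's, because it carries the Nyquist bin rather than a true imaginary component.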
Here is the Python code that computes the rFFT and irFFT:
# calculate fft of input block
in_block_fft = np.fft.rfft(np.squeeze(in_buffer)).astype("complex64")
# apply mask and calculate the ifft
estimated_block = np.fft.irfft(in_block_fft * out_mask)
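For reference, the Python side of the round trip can be checked in isolation. A minimal sketch, assuming a 512-sample block and an all-ones out_mask standing in for the real mask:

```python
import numpy as np

# Round-trip sketch: a random 512-sample block; out_mask is assumed to be
# all ones here, as a stand-in for the real mask from the pipeline.
in_buffer = np.random.default_rng(0).standard_normal(512).astype("float32")

in_block_fft = np.fft.rfft(np.squeeze(in_buffer)).astype("complex64")
out_mask = np.ones_like(in_block_fft)
estimated_block = np.fft.irfft(in_block_fft * out_mask)
```

With an identity mask, irfft(rfft(x)) reproduces the block up to float32 rounding.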
The problem:
Swift: if I compute the rFFT of a 512-sample frame and then apply the irFFT to the result, I get the original array back.
Python: the same holds in Python; if I run rFFT followed by irFFT, I recover the original array.
The issue arises when I compare the results of the Swift rFFT with the Python rFFT: their decimal values differ. Sometimes the real parts match, but the imaginary parts are completely different.
I tried several Python frameworks (NumPy, SciPy, and TensorFlow), and they all produce essentially identical results (differing only slightly in the decimals). But when I compute the rFFT of the same input on iOS with the Swift code above, the results differ.
If anyone experienced with the Accelerate framework and FFTs could help me resolve this, it would be very helpful. My knowledge of FFTs is limited.