Computing the sum of an array's values in parallel with Metal and Swift

12

I am trying to compute the sum of a large array in parallel using Metal and Swift.

Is there a better way to do this?

My plan is to divide the array into sub-arrays, compute the sum of each sub-array in a parallel computation, and then, when the parallel computations are finished, compute the sum of the partial sums.

For example, if I have

array = [a0,....an] 

I divide the array into sub-arrays:

array_1 = [a_0,...a_i],
array_2 = [a_i+1,...a_2i],
....
array_n/i = [a_n-1, ... a_n]

The sums of these arrays are computed in parallel, giving

sum_1, sum_2, sum_3, ... sum_n/i

and at the end I just compute the sum of the partial sums.
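
(Just to illustrate the plan itself, here is roughly what it looks like as plain CPU code; the chunk size and the use of DispatchQueue.concurrentPerform are placeholder choices for this sketch, not part of the Metal solution I'm asking about.)

import Foundation

// CPU-only sketch of the plan above: split the array into chunks, sum each chunk
// concurrently, then sum the partial sums. Purely illustrative.
func chunkedSum(_ values: [Float], chunkSize: Int) -> Float {
    let chunkCount = (values.count + chunkSize - 1) / chunkSize
    let partials = UnsafeMutablePointer<Float>.allocate(capacity: chunkCount)
    defer { partials.deallocate() }

    values.withUnsafeBufferPointer { data in
        // Each iteration computes one partial sum (sum_1, sum_2, ...).
        DispatchQueue.concurrentPerform(iterations: chunkCount) { chunk in
            let start = chunk * chunkSize
            let end = min(start + chunkSize, data.count)
            var sum: Float = 0
            for i in start..<end { sum += data[i] }
            partials[chunk] = sum
        }
    }

    // Final step: the sum of the partial sums.
    return UnsafeBufferPointer(start: partials, count: chunkCount).reduce(0, +)
}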

I created an application that runs my Metal shader, but there are some things I don't quite understand.

        var array:[[Float]] = [[1,2,3], [4,5,6], [7,8,9]]

        // get device
        let device: MTLDevice! = MTLCreateSystemDefaultDevice()

        // get library
        let defaultLibrary:MTLLibrary! = device.newDefaultLibrary()

        // queue
        let commandQueue:MTLCommandQueue! = device.newCommandQueue()

        // function
        let kernerFunction: MTLFunction! = defaultLibrary.newFunctionWithName("calculateSum")

        // pipeline with function
        let pipelineState: MTLComputePipelineState! = try device.newComputePipelineStateWithFunction(kernerFunction)

        // buffer for function
        let commandBuffer:MTLCommandBuffer! = commandQueue.commandBuffer()

        // encode function
        let commandEncoder:MTLComputeCommandEncoder = commandBuffer.computeCommandEncoder()

        // add function to encode
        commandEncoder.setComputePipelineState(pipelineState)

        // options
        let resourceOption = MTLResourceOptions()

        let arrayBiteLength = array.count * array[0].count * sizeofValue(array[0][0])

        let arrayBuffer = device.newBufferWithBytes(&array, length: arrayBiteLength, options: resourceOption)

        commandEncoder.setBuffer(arrayBuffer, offset: 0, atIndex: 0)

        var result:[Float] = [0,0,0]

        let resultBiteLenght = sizeofValue(result[0])

        let resultBuffer = device.newBufferWithBytes(&result, length: resultBiteLenght, options: resourceOption)

        commandEncoder.setBuffer(resultBuffer, offset: 0, atIndex: 1)

        let threadGroupSize = MTLSize(width: 1, height: 1, depth: 1)

        let threadGroups = MTLSize(width: (array.count), height: 1, depth: 1)

        commandEncoder.dispatchThreadgroups(threadGroups, threadsPerThreadgroup: threadGroupSize)

        commandEncoder.endEncoding()

        commandBuffer.commit()

        commandBuffer.waitUntilCompleted()

        let data = NSData(bytesNoCopy: resultBuffer.contents(), length: sizeof(Float), freeWhenDone: false)

        data.getBytes(&result, length: result.count * sizeof(Float))

        print(result)

This is my Swift code,

and my shader is:

kernel void calculateSum(const device float *inFloat [[buffer(0)]],
                     device float *result [[buffer(1)]],
                     uint id [[ thread_position_in_grid ]]) {


    float * f = inFloat[id];
    float sum = 0;
    for (int i = 0 ; i < 3 ; ++i) {
        sum = sum + f[i];
    }

    result = sum;
}

I don't know how to define inFloat as an array of arrays. I'm not sure what exactly threadGroupSize and threadGroups are. And in the shader's parameters, I don't know what device and uint mean.

Is this the right way to do it?


Why did you choose Metal? - Mike Henderson
2
First, to see how it's done, and second because a GPU can run a large number of computations in parallel, while a CPU can only run 2 or 4. - Marko Zadravec
1
How big is your "large array"? Are you going to run this on macOS or iOS? - Mike Henderson
1
The array will have a size of 200,000,000 or 2 million. I will run it on iOS. - Marko Zadravec
Sorry, typo: 200,000 or 2,000,000. - Marko Zadravec
3 Answers

27

I took the time to create a fully working example of this with Metal; the explanation is in the comments:

let count = 10_000_000
let elementsPerSum = 10_000

// Data type, has to be the same as in the shader
typealias DataType = CInt

let device = MTLCreateSystemDefaultDevice()!
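// Note: `self.library(device:)` below is the answer author's own helper for loading the
// compiled Metal library (not shown here); `device.makeDefaultLibrary()` is the usual call.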
let library = self.library(device: device)
let parsum = library.makeFunction(name: "parsum")!
let pipeline = try! device.makeComputePipelineState(function: parsum)

// Our data, randomly generated:
var data = (0..<count).map{ _ in DataType(arc4random_uniform(100)) }
var dataCount = CUnsignedInt(count)
var elementsPerSumC = CUnsignedInt(elementsPerSum)
// Number of individual results = count / elementsPerSum (rounded up):
let resultsCount = (count + elementsPerSum - 1) / elementsPerSum

// Our data in a buffer (copied):
let dataBuffer = device.makeBuffer(bytes: &data, length: MemoryLayout<DataType>.stride * count, options: [])!
// A buffer for individual results (zero initialized)
let resultsBuffer = device.makeBuffer(length: MemoryLayout<DataType>.stride * resultsCount, options: [])!
// Our results in convenient form to compute the actual result later:
let pointer = resultsBuffer.contents().bindMemory(to: DataType.self, capacity: resultsCount)
let results = UnsafeBufferPointer<DataType>(start: pointer, count: resultsCount)

let queue = device.makeCommandQueue()!
let cmds = queue.makeCommandBuffer()!
let encoder = cmds.makeComputeCommandEncoder()!

encoder.setComputePipelineState(pipeline)

encoder.setBuffer(dataBuffer, offset: 0, index: 0)

encoder.setBytes(&dataCount, length: MemoryLayout<CUnsignedInt>.size, index: 1)
encoder.setBuffer(resultsBuffer, offset: 0, index: 2)
encoder.setBytes(&elementsPerSumC, length: MemoryLayout<CUnsignedInt>.size, index: 3)

// We have to calculate the sum `resultCount` times => amount of threadgroups is `resultsCount` / `threadExecutionWidth` (rounded up) because each threadgroup will process `threadExecutionWidth` threads
let threadgroupsPerGrid = MTLSize(width: (resultsCount + pipeline.threadExecutionWidth - 1) / pipeline.threadExecutionWidth, height: 1, depth: 1)

// Here we set that each threadgroup should process `threadExecutionWidth` threads, the only important thing for performance is that this number is a multiple of `threadExecutionWidth` (here 1 times)
let threadsPerThreadgroup = MTLSize(width: pipeline.threadExecutionWidth, height: 1, depth: 1)

encoder.dispatchThreadgroups(threadgroupsPerGrid, threadsPerThreadgroup: threadsPerThreadgroup)
encoder.endEncoding()

var start, end : UInt64
var result : DataType = 0

start = mach_absolute_time()
cmds.commit()
cmds.waitUntilCompleted()
for elem in results {
    result += elem
}

end = mach_absolute_time()

print("Metal result: \(result), time: \(Double(end - start) / Double(NSEC_PER_SEC))")
result = 0

start = mach_absolute_time()
data.withUnsafeBufferPointer { buffer in
    for elem in buffer {
        result += elem
    }
}
end = mach_absolute_time()

print("CPU result: \(result), time: \(Double(end - start) / Double(NSEC_PER_SEC))")

I used my Mac to test it, but it should work just as well on iOS.
Metal result: 494936505, time: 0.024611456
CPU result: 494936505, time: 0.163341018

The Metal version is about 7 times faster. You could certainly get it even faster by implementing something like divide-and-conquer with a cutoff, or some other approach.
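
(For what it's worth, a rough sketch of what a cutoff-based divide-and-conquer sum could look like on the CPU side is below; the cutoff value, the two-way split and the helper name are illustrative choices, not something taken from this answer's Metal code.)

import Foundation

// Rough sketch of a cutoff-based divide-and-conquer sum (illustrative only).
func cutoffSum(_ values: [CInt], cutoff: Int = 16_384) -> CInt {
    func sum(_ lo: Int, _ hi: Int) -> CInt {
        // Small ranges are summed sequentially (the "cutoff").
        if hi - lo <= cutoff {
            var s: CInt = 0
            for i in lo..<hi { s += values[i] }
            return s
        }
        // Larger ranges are split in half and the halves are summed concurrently.
        let mid = lo + (hi - lo) / 2
        var left: CInt = 0
        var right: CInt = 0
        DispatchQueue.concurrentPerform(iterations: 2) { half in
            if half == 0 { left = sum(lo, mid) } else { right = sum(mid, hi) }
        }
        return left + right
    }
    return sum(0, values.count)
}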


OK, thank you. I was a bit confused, because you take the whole array and an index (and it's not possible to print to the log from a Metal shader), so I thought it gets called n times. - Marko Zadravec
1
@MarkoZadravec DataType has to be the same type as the one used in the shader => if you change it to, for example, CInt, you also have to change it to int in the shader. Remember to always use the Swift equivalents with the C prefix, because Metal uses C data types. I've updated my answer to use CInt and fixed the types; see the edit for details. - Kametrixom
Kametrixom, thank you for your answer. I have some additional questions about this here: http://stackoverflow.com/questions/38232640/swift-metal-parallel-sum-calculation-of-array-on-ios. - Marko Zadravec
You might get better performance if you change the line sums[resultIndex] += data[dataIndex]; to accumulate the sum in a local variable and then write it to sums with a single write. That reduces memory accesses. - Mathias Claassen
With a multithreaded vector sum on an 8-thread CPU, my timings are: Metal lap time: 177.420020, CPU MT lap time: 42.018056. The CPU is 4 times faster than the GPU for matrix operations, but for the GPU-equivalent operation the GPU is 20 times faster than the CPU. - roberto

7
The accepted answer is annoying because it is missing the corresponding kernel. There is a link to the source, but below are the complete program and shader, which can be run as a Swift command-line application.
/*
 * Command line Metal Compute Shader for data processing
 */

import Metal
import Foundation
//------------------------------------------------------------------------------
let count = 10_000_000
let elementsPerSum = 10_000
//------------------------------------------------------------------------------
typealias DataType = CInt // Data type, has to be the same as in the shader
//------------------------------------------------------------------------------
let device = MTLCreateSystemDefaultDevice()!
let library = device.makeDefaultLibrary()!
let parsum = library.makeFunction(name: "parsum")!
let pipeline = try! device.makeComputePipelineState(function: parsum)
//------------------------------------------------------------------------------
// Our data, randomly generated:
var data = (0..<count).map{ _ in DataType(arc4random_uniform(100)) }
var dataCount = CUnsignedInt(count)
var elementsPerSumC = CUnsignedInt(elementsPerSum)
// Number of individual results = count / elementsPerSum (rounded up):
let resultsCount = (count + elementsPerSum - 1) / elementsPerSum
//------------------------------------------------------------------------------
// Our data in a buffer (copied):
let dataBuffer = device.makeBuffer(bytes: &data, length: MemoryLayout<DataType>.stride * count, options: [])!
// A buffer for individual results (zero initialized)
let resultsBuffer = device.makeBuffer(length: MemoryLayout<DataType>.stride * resultsCount, options: [])!
// Our results in convenient form to compute the actual result later:
let pointer = resultsBuffer.contents().bindMemory(to: DataType.self, capacity: resultsCount)
let results = UnsafeBufferPointer<DataType>(start: pointer, count: resultsCount)
//------------------------------------------------------------------------------
let queue = device.makeCommandQueue()!
let cmds = queue.makeCommandBuffer()!
let encoder = cmds.makeComputeCommandEncoder()!
//------------------------------------------------------------------------------
encoder.setComputePipelineState(pipeline)
encoder.setBuffer(dataBuffer, offset: 0, index: 0)
encoder.setBytes(&dataCount, length: MemoryLayout<CUnsignedInt>.size, index: 1)
encoder.setBuffer(resultsBuffer, offset: 0, index: 2)
encoder.setBytes(&elementsPerSumC, length: MemoryLayout<CUnsignedInt>.size, index: 3)
//------------------------------------------------------------------------------
// We have to calculate the sum `resultCount` times => amount of threadgroups is `resultsCount` / `threadExecutionWidth` (rounded up) because each threadgroup will process `threadExecutionWidth` threads
let threadgroupsPerGrid = MTLSize(width: (resultsCount + pipeline.threadExecutionWidth - 1) / pipeline.threadExecutionWidth, height: 1, depth: 1)

// Here we set that each threadgroup should process `threadExecutionWidth` threads, the only important thing for performance is that this number is a multiple of `threadExecutionWidth` (here 1 times)
let threadsPerThreadgroup = MTLSize(width: pipeline.threadExecutionWidth, height: 1, depth: 1)
//------------------------------------------------------------------------------
encoder.dispatchThreadgroups(threadgroupsPerGrid, threadsPerThreadgroup: threadsPerThreadgroup)
encoder.endEncoding()
//------------------------------------------------------------------------------
var start, end : UInt64
var result : DataType = 0
//------------------------------------------------------------------------------
start = mach_absolute_time()
cmds.commit()
cmds.waitUntilCompleted()
for elem in results {
    result += elem
}

end = mach_absolute_time()
//------------------------------------------------------------------------------
print("Metal result: \(result), time: \(Double(end - start) / Double(NSEC_PER_SEC))")
//------------------------------------------------------------------------------
result = 0

start = mach_absolute_time()
data.withUnsafeBufferPointer { buffer in
    for elem in buffer {
        result += elem
    }
}
end = mach_absolute_time()

print("CPU result: \(result), time: \(Double(end - start) / Double(NSEC_PER_SEC))")
//------------------------------------------------------------------------------

#include <metal_stdlib>
using namespace metal;

typedef unsigned int uint;
typedef int DataType;

kernel void parsum(const device DataType* data [[ buffer(0) ]],
                   const device uint& dataLength [[ buffer(1) ]],
                   device DataType* sums [[ buffer(2) ]],
                   const device uint& elementsPerSum [[ buffer(3) ]],
                   
                   const uint tgPos [[ threadgroup_position_in_grid ]],
                   const uint tPerTg [[ threads_per_threadgroup ]],
                   const uint tPos [[ thread_position_in_threadgroup ]]) {
    
    uint resultIndex = tgPos * tPerTg + tPos;
    
    uint dataIndex = resultIndex * elementsPerSum; // Where the summation should begin
    uint endIndex = dataIndex + elementsPerSum < dataLength ? dataIndex + elementsPerSum : dataLength; // The index where summation should end
    
    for (; dataIndex < endIndex; dataIndex++)
        sums[resultIndex] += data[dataIndex];
}

Objective-C

Below is the same command-line program, implemented in Objective-C:

#import <Foundation/Foundation.h>
#import <Metal/Metal.h>

typedef int DataType;

int main(int argc, const char * argv[]) {
    @autoreleasepool {
        unsigned int count = 10000000;
        unsigned int elementsPerSum = 10000;
        //----------------------------------------------------------------------
        id<MTLDevice> device  = MTLCreateSystemDefaultDevice();
        id<MTLLibrary>library = [device newDefaultLibrary];
        
        id<MTLFunction>parsum = [library newFunctionWithName:@"parsum"];
        id<MTLComputePipelineState> pipeline = [device newComputePipelineStateWithFunction:parsum error:nil];
        //----------------------------------------------------------------------
        DataType* data = (DataType*) malloc(sizeof(DataType) * count);
        for (int i = 0; i < count; i++){
            data[i] = arc4random_uniform(100);
        }
        unsigned int dataCount = count;
        unsigned int elementsPerSumC = elementsPerSum;
        unsigned int resultsCount = (count + elementsPerSum - 1) / elementsPerSum;
        //------------------------------------------------------------------------------
        id<MTLBuffer>dataBuffer = [device newBufferWithBytes:data
                                                      length:(sizeof(int) * count)
                                                     options:MTLResourceStorageModeManaged];
        
        id<MTLBuffer>resultsBuffer = [device newBufferWithLength:(sizeof(int) * count)
                                                         options:0];
        
        DataType* results = resultsBuffer.contents;
        //----------------------------------------------------------------------
        id<MTLCommandQueue>queue = [device newCommandQueue];
        id<MTLCommandBuffer>cmds = [queue commandBuffer];
        id<MTLComputeCommandEncoder> encoder = [cmds computeCommandEncoder];
        //----------------------------------------------------------------------
        [encoder setComputePipelineState:pipeline];
        [encoder setBuffer:dataBuffer offset:0 atIndex:0];
        [encoder setBytes:&dataCount length:sizeof(unsigned int) atIndex:1];
        [encoder setBuffer:resultsBuffer offset:0 atIndex:2];
        [encoder setBytes:&elementsPerSumC length:sizeof(unsigned int) atIndex:3];
        //----------------------------------------------------------------------
        MTLSize threadgroupsPerGrid =
        {
            (resultsCount + pipeline.threadExecutionWidth - 1) / pipeline.threadExecutionWidth,
            1,
            1
        };
        
        MTLSize threadsPerThreadgroup =
        {
            pipeline.threadExecutionWidth,
            1,
            1
        };
        //----------------------------------------------------------------------
        [encoder dispatchThreadgroups:threadgroupsPerGrid threadsPerThreadgroup:threadsPerThreadgroup];
        [encoder endEncoding];
        //----------------------------------------------------------------------
        uint64_t start, end;
        DataType result = 0;
        
        start = mach_absolute_time();
        [cmds commit];
        [cmds waitUntilCompleted];

        for (int i = 0; i < resultsCount; i++){
            result += results[i];
        }

        end = mach_absolute_time();

        NSLog(@"Metal Result %d. time %f", result, (float)(end - start)/(float)(NSEC_PER_SEC));
        //----------------------------------------------------------------------
        result = 0;

        start = mach_absolute_time();

        for (int i = 0; i < count; i++){
            result += data[i];
        }

        end = mach_absolute_time();
        NSLog(@"Metal Result %d. time %f", result, (float)(end - start)/(float)(NSEC_PER_SEC));

        //------------------------------------------------------------------------------
        free(data);
    }
    return 0;
}


Thank you! I found that I now need to get the device with let device = MTLCopyAllDevices()[0], because with MTLCreateSystemDefaultDevice() I get a complaint that the program is non-interactive. - Blark

-3

I have been running an app with a multithreaded vector-sum implementation on a GT 740 (384 cores) and an i7-4790, and here are my numbers:

Metal lap time: 19.959092
cpu MT lap time: 4.353881

That is a 5/1 ratio in favor of the CPU, so unless you have a powerful GPU, using shaders is not worth it.

I have been testing the same code on an i7-3610QM with its integrated Intel HD 4000 GPU, and surprisingly the result is much better for Metal: 2/1.

EDIT: After tuning the thread parameters, I finally improved the GPU's performance; it is now up to 16x the CPU.
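
(The tuned parameters are not shown here; for reference, one common way to derive the dispatch sizes from the compute pipeline state is sketched below. The factor of 8 and the helper name dispatchSizes(for:resultsCount:) are illustrative assumptions, not the values this answer used.)

import Metal

// Illustrative sketch only: deriving dispatch sizes from the compute pipeline state.
// `resultsCount` is the number of partial sums, as in the answers above.
func dispatchSizes(for pipeline: MTLComputePipelineState,
                   resultsCount: Int) -> (groups: MTLSize, threadsPerGroup: MTLSize) {
    let width = pipeline.threadExecutionWidth                // SIMD width, e.g. 32
    let maxThreads = pipeline.maxTotalThreadsPerThreadgroup  // hardware limit, e.g. 1024
    // Use a multiple of the execution width, capped at the hardware limit.
    let perGroup = min(maxThreads, width * 8)
    let threadsPerThreadgroup = MTLSize(width: perGroup, height: 1, depth: 1)
    let threadgroupsPerGrid = MTLSize(width: (resultsCount + perGroup - 1) / perGroup,
                                      height: 1, depth: 1)
    return (threadgroupsPerGrid, threadsPerThreadgroup)
}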


Could you post your updated solution that uses the height and depth for matrix calculations? - Arjun Mehta
