Using threads does not speed up summing of vectors


I have a C++ program which basically performs some matrix calculations. For these I use LAPACK/BLAS and, depending on the platform, usually link against MKL or ACML. A lot of these matrix calculations operate on different, independent matrices, so I use std::thread to let these operations run in parallel. However, I noticed that there is no speed-up when using more threads. I traced the problem down to the daxpy BLAS routine: it seems that if two threads use this routine at the same time, each thread takes twice as long, even though the two threads operate on different arrays.

The next thing I tried was writing my own simple method to perform the vector additions, replacing the daxpy routine. With one thread this new method is as fast as the BLAS routine, but when compiled with gcc it suffers from the same problem: doubling the number of threads running in parallel also doubles the time each thread needs, so no speed-up is gained. With the Intel C++ Compiler, however, this problem does not appear: the time a single thread needs stays constant as the number of threads increases.

However, I also need to compile on systems where no Intel compiler is available. So my questions are: why is there no speed-up with gcc, and is there any possibility of improving the gcc performance?

I wrote a small program to demonstrate the effect:

// $(CC) -std=c++11 -O2 threadmatrixsum.cpp -o threadmatrixsum -pthread

#include <iostream>
#include <thread>
#include <vector>

#include "boost/date_time/posix_time/posix_time.hpp"
#include "boost/timer.hpp"

void simplesum(double* a, double* b, std::size_t dim);

int main() {

    for (std::size_t num_threads {1}; num_threads <= 4; num_threads++) {
        const std::size_t N { 936 };

        std::vector <std::size_t> times(num_threads, 0);    

        auto threadfunction = [&](std::size_t tid)
        {
            const std::size_t dim { N * N };
            double* pA = new double[dim];
            double* pB = new double[dim];

            for (std::size_t i {0}; i < N; ++i){
                pA[i] = i;
                pB[i] = 2*i;
            }   

            boost::posix_time::ptime now1 = 
                boost::posix_time::microsec_clock::universal_time();    

            for (std::size_t n{0}; n < 1000; ++n){
                simplesum(pA, pB, dim);
            }

            boost::posix_time::ptime now2 = 
                boost::posix_time::microsec_clock::universal_time(); 
            boost::posix_time::time_duration dur = now2 - now1; 
            times[tid] += dur.total_milliseconds(); 
            delete[] pA;
            delete[] pB;
        };

        std::vector <std::thread> mythreads;

        // start threads
        for (std::size_t n {0} ; n < num_threads; ++n)
        {
            mythreads.emplace_back(threadfunction, n);
        }

        // wait for threads to finish
        for (std::size_t n {0} ; n < num_threads; ++n)
        {
            mythreads[n].join();
            std::cout << " Thread " << n+1 << " of " << num_threads 
                      << "  took " << times[n]<< "msec" << std::endl;
        }
    }
}

void simplesum(double* a, double* b, std::size_t dim){

    for(std::size_t i{0}; i < dim; ++i)
    {*(++a) += *(++b);}
}

This is the output when compiled with gcc:

Thread 1 of 1  took 532msec
Thread 1 of 2  took 1104msec
Thread 2 of 2  took 1103msec
Thread 1 of 3  took 1680msec
Thread 2 of 3  took 1821msec
Thread 3 of 3  took 1808msec
Thread 1 of 4  took 2542msec
Thread 2 of 4  took 2536msec
Thread 3 of 4  took 2509msec
Thread 4 of 4  took 2515msec

This is the output when compiled with icc:

Thread 1 of 1  took 663msec
Thread 1 of 2  took 674msec
Thread 2 of 2  took 674msec
Thread 1 of 3  took 681msec
Thread 2 of 3  took 681msec
Thread 3 of 3  took 681msec
Thread 1 of 4  took 688msec
Thread 2 of 4  took 689msec
Thread 3 of 4  took 687msec
Thread 4 of 4  took 688msec

So with icc the time one thread needs for the computations is constant (as I would have expected; my CPU has 4 physical cores), whereas with gcc the time per thread increases. Replacing the simplesum routine by BLAS::daxpy yields the same results for icc and gcc (no surprise, as most of the time is spent inside the library), and these are almost identical to the gcc results stated above.
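For reference, replacing simplesum by daxpy amounts to computing a += 1.0 * b. Using the CBLAS interface it would look roughly like this (a sketch of mine, assuming a CBLAS header is available; MKL provides one, ACML's C interface differs slightly):

#include <cblas.h>    // assumption: CBLAS prototypes are available (e.g. via MKL)
#include <cstddef>

// y := alpha*x + y, used here as a += b
void sum_via_daxpy(double* a, const double* b, std::size_t dim){
    cblas_daxpy(static_cast<int>(dim), 1.0, b, 1, a, 1);
}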


I'm using the provided code under VS2013 and I see results similar to those with GCC. - Moo-Juice
Sounds obvious, but... do you actually have multiple processors? Also, how big are the typical matrices? You could be running into cache contention issues... - twalberg
Just curious: did you try comparing the assembly generated by each compiler? In particular, does icc happen to eliminate everything except the timing/console IO calls? A sufficiently smart optimizer could see that your allocations/operations have no observable effect. - Nathan Ernst
Shouldn't it be *(a++) += *(b++)? - danielschemmel
2 Answers

The answer is simple: your threads are fighting for memory bandwidth! Consider that you perform one floating point addition per two stores (one initialization, one after the addition) and two reads (in the addition). Most modern systems that provide multiple CPUs actually have to share the memory controller among several cores. The following was run on a system with 2 physical CPU sockets and 12 cores (24 with hyper-threading). Your original code exhibits your problem very nicely:
Thread 1 of 1  took 657msec
Thread 1 of 2  took 1447msec
Thread 2 of 2  took 1463msec
[...]
Thread 1 of 8  took 5516msec
Thread 2 of 8  took 5587msec
Thread 3 of 8  took 5205msec
Thread 4 of 8  took 5311msec
Thread 5 of 8  took 2731msec
Thread 6 of 8  took 5545msec
Thread 7 of 8  took 5551msec
Thread 8 of 8  took 4903msec
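To put rough numbers on this (my own back-of-the-envelope estimate, ignoring caching effects): each 936 × 936 array occupies 936² × 8 bytes ≈ 7 MB, so one simplesum call streams about 3 × 7 MB ≈ 21 MB (read a, read b, write a back), and the 1000 repetitions move on the order of 21 GB per thread. At the single-thread times of roughly 0.5-0.7 s seen here, that is already tens of GB/s, i.e. comparable to the total memory bandwidth of a typical machine, so additional threads largely end up dividing the same bandwidth among themselves.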

However, merely by increasing the arithmetic density we can see a significant increase in scalability. To demonstrate this, I changed your addition routine to also perform an exponentiation: *(++a) += std::exp(*(++b));. The result shows almost perfect scaling:
Thread 1 of 1  took 7671msec
Thread 1 of 2  took 7759msec
Thread 2 of 2  took 7759msec
[...]
Thread 1 of 8  took 9997msec
Thread 2 of 8  took 8135msec
Thread 3 of 8  took 10625msec
Thread 4 of 8  took 8169msec
Thread 5 of 8  took 10054msec
Thread 6 of 8  took 8242msec
Thread 7 of 8  took 9876msec
Thread 8 of 8  took 8819msec
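Spelled out as a full routine, the modified kernel would look something like this (a sketch of mine, not code from the answer; I use post-increment instead of the original pre-increment so the loop stays within bounds, cf. the comment under the question):

#include <cmath>
#include <cstddef>

// Same memory traffic as simplesum, but far more arithmetic per element loaded.
void simplesum_exp(double* a, double* b, std::size_t dim){
    for(std::size_t i{0}; i < dim; ++i)
    {*(a++) += std::exp(*(b++));}
}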

What about ICC?

First of all, ICC inlines simplesum. Proving that inlining happens is simple: using icc, I disabled multi-file interprocedural optimization and moved simplesum into its own translation unit. The difference is astonishing: the performance went from

Thread 1 of 1  took 687msec
Thread 1 of 2  took 688msec
Thread 2 of 2  took 689msec
[...]
Thread 1 of 8  took 690msec
Thread 2 of 8  took 697msec
Thread 3 of 8  took 700msec
Thread 4 of 8  took 874msec
Thread 5 of 8  took 878msec
Thread 6 of 8  took 874msec
Thread 7 of 8  took 742msec
Thread 8 of 8  took 868msec

to

Thread 1 of 1  took 1278msec
Thread 1 of 2  took 2457msec
Thread 2 of 2  took 2445msec
[...]
Thread 1 of 8  took 8868msec
Thread 2 of 8  took 8434msec
Thread 3 of 8  took 7964msec
Thread 4 of 8  took 7951msec
Thread 5 of 8  took 8872msec
Thread 6 of 8  took 8286msec
Thread 7 of 8  took 5714msec
Thread 8 of 8  took 8241msec

This already explains why the library performs badly: ICC cannot inline it, and therefore whatever it is that makes ICC perform better than g++ simply does not happen here.

It also gives a hint as to what ICC might be doing right here... What if, instead of executing simplesum 1000 times, it interchanges the loops so that it

  1. loads two doubles
  2. adds them 1000 times (or even performs x += 1000 * y)
  3. stores two doubles

This would increase the arithmetic density without adding any exponentials to the function... How to prove this? Well, to begin with let us simply implement this optimization and see what happens! For the analysis we will look at the g++ performance. Recall our benchmark results:

Thread 1 of 1  took 640msec
Thread 1 of 2  took 1308msec
Thread 2 of 2  took 1304msec
[...]
Thread 1 of 8  took 5294msec
Thread 2 of 8  took 5370msec
Thread 3 of 8  took 5451msec
Thread 4 of 8  took 5527msec
Thread 5 of 8  took 5174msec
Thread 6 of 8  took 5464msec
Thread 7 of 8  took 4640msec
Thread 8 of 8  took 4055msec

Now, let us replace

for (std::size_t n{0}; n < 1000; ++n){
    simplesum(pA, pB, dim);
}

with a version in which the inner loop is made the outer loop:

double* a = pA; double* b = pB;
for(std::size_t i{0}; i < dim; ++i, ++a, ++b)
{
    double x = *a, y = *b;
    for (std::size_t n{0}; n < 1000; ++n)
    {
        x += y;
    }
    *a = x;
}

The results show that we are on the right track:
Thread 1 of 1  took 693msec
Thread 1 of 2  took 703msec
Thread 2 of 2  took 700msec
[...]
Thread 1 of 8  took 920msec
Thread 2 of 8  took 804msec
Thread 3 of 8  took 750msec
Thread 4 of 8  took 943msec
Thread 5 of 8  took 909msec
Thread 6 of 8  took 744msec
Thread 7 of 8  took 759msec
Thread 8 of 8  took 904msec

This demonstrates that the loop interchange optimization is indeed the main reason for the excellent performance ICC exhibits here.

Note that none of the tested compilers (MSVC, ICC, g++ and clang) will replace the loop by a multiplication, even though doing so improves performance by 200x in the single-threaded and 15x in the 8-threaded case. This is because the numerical instability of the repeated additions may produce wildly different results when they are replaced by a single multiplication. When testing with integer data types instead of floating point data types, this optimization does happen.
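To illustrate why this transformation is unsafe for doubles (my own toy example, not from the answer): once the running sum is large enough, the individual additions are lost to rounding, so the loop and the closed form disagree.

#include <cstdio>

int main() {
    double x = 1e16, y = 1.0;

    // Repeated addition: at this magnitude the spacing between doubles is 2.0,
    // so each individual +1.0 rounds back down and is effectively dropped.
    double repeated = x;
    for (int n = 0; n < 1000; ++n)
        repeated += y;

    // Single multiplication: the accumulated 1000.0 is large enough to survive rounding.
    double closed = x + 1000.0 * y;

    std::printf("repeated: %.1f\nclosed:   %.1f\n", repeated, closed);
}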
How can we force g++ to perform this optimization?

Interestingly enough, the true killer for g++ is not an inability to perform loop interchange: when called with -floop-interchange, g++ can perform this optimization as well. But only when the odds are stacked significantly in its favor:

  1. All bounds are expressed as int rather than std::size_t, long, or unsigned int. I still find it hard to believe, but it seems to be a hard requirement.

  2. Indices are used instead of pointer increments: a[i] += b[i]; (a sketch of the resulting kernel follows this list).

  3. g++ needs to be given the -floop-interchange flag; a plain -O3 is not enough.
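Spelled out, the three conditions amount to a kernel along these lines (my sketch; with the call inlined at -O3, g++ can then interchange the repetition loop in main, whose counter presumably also has to be an int, with this element loop):

// compile with something like:
// g++ -std=c++11 -O3 -floop-interchange threadmatrixsum.cpp -o threadmatrixsum -pthread
void simplesum(double* a, double* b, int dim){
    for (int i = 0; i < dim; ++i)   // int bound, indexed access
        a[i] += b[i];
}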

When all three criteria are met, the g++ performance is similar to what ICC delivers:

Thread 1 of 1  took 714msec
Thread 1 of 2  took 724msec
Thread 2 of 2  took 721msec
[...]
Thread 1 of 8  took 782msec
Thread 2 of 8  took 1221msec
Thread 3 of 8  took 1225msec
Thread 4 of 8  took 781msec
Thread 5 of 8  took 788msec
Thread 6 of 8  took 1262msec
Thread 7 of 8  took 1226msec
Thread 8 of 8  took 820msec

Note: the version of g++ used for this experiment is 4.9.0 on x64 Arch Linux.

Apart from the original code, all of them run a lot faster than the "scaling" code :) - Moo-Juice
@Moo-Juice But apparently they compute the wrong results ;) - danielschemmel
Thanks for the reply, I think it may well be related to the memory bandwidth. I tried parallelizing the for loop inside the simplesum method with OpenMP. Using only one thread in the main program and increasing the number of threads for this loop, I observed a nice speed-up. So my reasoning was: if the problem were related to parallel memory access, it should also show up for the parallelized for loop, since it accesses the same memory. However, this is not the case. - Fabian
@Fabian I have looked at the binary icc generates and added an explanation of why icc does so much better here. - danielschemmel
Well, I'd be interested out of curiosity. But for my actual problem I cannot rely on the loop interchange, so I came up with a workaround that speeds things up considerably (see my answer). Thanks a lot for your efforts and suggestions, they really helped me a lot :-) - Fabian


Ok, I came to the conclusion that the main problem is that the processors act on different parts of the memory in parallel, so I assume one has to deal with lots of cache misses which slow the process down further. Putting the actual sum function into a critical section

summutex.lock();
simplesum(pA, pB, dim);
summutex.unlock();

solves the problem of the cache misses, but of course does not yield the optimal speed-up. Anyway, since the other threads are now blocked, the simplesum method itself can use all available threads for the sum:

#include <omp.h>

void simplesum(double* a, double* b, std::size_t dim, std::size_t numberofthreads){

    // let OpenMP split the element loop across the requested number of threads
    omp_set_num_threads(numberofthreads);
    #pragma omp parallel
    {
        #pragma omp for
        for(std::size_t i = 0; i < dim; ++i)
        {
            a[i] += b[i];
        }
    }
}
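Putting the two pieces together, the call site inside each worker thread might look roughly like this (my sketch, not code from the answer; summutex is assumed to be a std::mutex shared by all threads, and I use std::lock_guard instead of the explicit lock()/unlock() shown above):

#include <mutex>
#include <cstddef>

std::mutex summutex;                                  // shared by all worker threads

void simplesum(double* a, double* b, std::size_t dim, std::size_t numberofthreads);

void run_sums(double* pA, double* pB, std::size_t dim, std::size_t numthreads){
    for (std::size_t n{0}; n < 1000; ++n){
        std::lock_guard<std::mutex> lock(summutex);   // only one summation runs at a time
        simplesum(pA, pB, dim, numthreads);           // simplesum parallelizes internally
    }
}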

In this case all the threads work on the same chunk of memory: it should be in the processor cache, and if the processor needs to load some other part of the memory into its cache, the other threads benefit from that as well (depending on whether this is the L1 or L2 cache, but I reckon the details do not really matter for this discussion).

I don't claim that this solution is perfect or anywhere near optimal, but it seems to work much better than the original code. And it does not rely on loop-interchange tricks, which I cannot use in my actual code anyway.

