Accessing thread-local variables in inline assembly


I'm working with some C++ code that has been optimized with inline assembly. The optimized version exhibits thread-unsafe behavior, which traces back to 3 global variables that are accessed extensively from inside the assembly.

__attribute__ ((aligned (16))) unsigned int SHAVITE_MESS[16];
__attribute__ ((aligned (16))) thread_local unsigned char SHAVITE_PTXT[8*4];
__attribute__ ((aligned (16))) unsigned int SHAVITE_CNTS[4] = {0,0,0,0};

...

asm ("movaps xmm0, SHAVITE_PTXT[rip]");
asm ("movaps xmm1, SHAVITE_PTXT[rip+16]");
asm ("movaps xmm3, SHAVITE_CNTS[rip]");
asm ("movaps xmm4, SHAVITE256_XOR2[rip]");
asm ("pxor   xmm2,  xmm2");

I naively assumed the simplest way to fix this would be to make the variables thread_local, but that leads to segfaults in the assembly - presumably the assembly doesn't know that these variables are thread-local?

I've dug through the assembly of a small thread_local test case to see how gcc handles them (mov eax, DWORD PTR fs:num1@tpoff) and tried modifying my code to do the same:

asm ("movaps xmm0, fs:SHAVITE_PTXT@tpoff");
asm ("movaps xmm1, fs:SHAVITE_PTXT@tpoff+16");
asm ("movaps xmm3, fs:SHAVITE_CNTS@tpoff");
asm ("movaps xmm4, fs:SHAVITE256_XOR2@tpoff");
asm ("pxor   xmm2,  xmm2");

This works if all the variables are also thread_local, and it matches the reference (non-assembly) implementation, so it appears to run successfully. But it seems very CPU-specific. If I compile with -m32, the output becomes mov eax, DWORD PTR gs:num1@ntpoff. Since the code is already specific to x86 (it uses AES-NI), I suppose I could just disassemble and implement every possible variant.
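For reference, a minimal thread_local test case along these lines (a hypothetical reconstruction, not the exact code) reproduces both forms of access:

// g++ -O2 -masm=intel       ->  mov eax, DWORD PTR fs:num1@tpoff
// g++ -O2 -masm=intel -m32  ->  mov eax, DWORD PTR gs:num1@ntpoff
thread_local int num1;

int read_num1() { return num1; }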
However, I don't really like that solution; it feels like programming by guesswork. It also doesn't help me learn how to handle future cases that might be less specific to one architecture.
Is there a more generic/correct way to handle this? How do I tell the assembly that a variable is thread-local in a more general way? Or is there a way to pass the variables in so that it works without having to know?

If you implement it without inline assembly and compare the result with your own assembly code, you might find some hints. Is the inline assembly legacy code? If so, a newer compiler version may optimize well enough that you can drop the inline assembly entirely. - Ted Lyngmo
@TedLyngmo The assembly and the reference version differ a lot (the assembly is noticeably faster), because the assembly takes advantage of vectorization, aesenc, and other instructions the compiler can't manage on its own. In theory I could look at the assembly generated for thread-local variable accesses in small test cases, but I suspect it would differ under different compilation conditions, which is why I'm looking for a more general "right way" to handle this. - Malcolm MacLeod
Given that gcc has builtins (like __builtin_ia32_pxor) that map directly to mmx/sse instructions, I'd have thought you could get pretty close. But sometimes the optimizer really does do a lousy job. Have you considered pure assembly with parameters? Since you're using gcc, why aren't you passing the arguments with input/output operands? You aren't really executing a bunch of asm statements like this, are you? The docs are pretty explicit about that. - David Wohlferd
Note that while thread-local variables can make the code thread-safe, it still isn't reentrant. Your code looks like a misuse of gcc inline assembly (you seem to assume registers keep their contents between asm statements), and there is probably a better solution for what you want to achieve. - fuz
So, I rewrote that supercop thing using intrinsics (all the asm statements are gone). However, the routine is complex enough (and the conversion tedious enough) that I'm not very confident in the result. Don't suppose you have a simple test routine I could use to verify? Also, what are your hardware capabilities? Can SSE4.2 be assumed? AVX2? Just rebuilding with AVX2 saves a bunch of instructions (all those 3-operand instructions make a difference), but unless I can run it, I can't say whether it's faster (or even correct). - David Wohlferd
2 Answers

If your current code uses a separate "basic" asm statement for each instruction, it's badly written and lies to the compiler by clobbering XMM registers without telling it. That is not how you use GNU C inline asm.
You should rewrite it with AES-NI and SIMD intrinsics such as _mm_aesdec_si128, so the compiler will emit proper addressing modes for everything. https://gcc.gnu.org/wiki/DontUseInlineAsm
If you really want to keep using GNU C inline asm, use extended asm with input/output "+m" operands, which can be local variables or any C variable you want, including statics or thread-locals. See https://stackoverflow.com/tags/inline-assembly/info for links to inline-asm guides.
But hopefully you can just use automatic storage inside the function, or have the caller allocate and pass a pointer to a context, instead of using static or thread-local storage at all. Thread-local accesses are slightly slower because the non-zero segment base slows down address calculation in the load execution units. That's probably not much of a problem when the address is ready early enough, but make sure you actually need TLS rather than just scratch space on the stack or provided by the caller. It also costs code size.
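For example, a minimal sketch of the caller-allocated-context approach (the struct and names here are hypothetical, not from the original code):

struct ShaviteCtx {
    alignas(16) unsigned int mess[16];
    alignas(16) unsigned char ptxt[8*4];
    alignas(16) unsigned int cnts[4];
};

void E256(ShaviteCtx *ctx);   // reads and writes only through ctx

void worker() {
    ShaviteCtx ctx = {};      // automatic storage, private to this thread
    // ... fill ctx.mess / ctx.ptxt / ctx.cnts, then:
    E256(&ctx);
}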
When GCC fills in a %0 or %[named] operand in the template for an "m" operand constraint, it uses the appropriate addressing mode. It works whether that happens to be fs:SHAVITE_PTXT@tpoff+16, XMMWORD PTR [rsp-24], or XMMWORD PTR _ZZ3foovE15SHAVITE256_XOR2[rip] (for a function-local static variable). (As long as you don't run into an operand-size mismatch with Intel syntax, where the compiler fills in the size on the memory operand instead of leaving it to a mnemonic suffix the way AT&T syntax mode does.)
Something like this, using a global, a TLS global, a local automatic, and a local static to demonstrate that they all work the same way:
// compile with -masm=intel

//#include <stdalign.h>  // for C11
alignas(16) unsigned int SHAVITE_MESS[16];                 // global (static storage)
alignas(16) thread_local unsigned char SHAVITE_PTXT[8*4];  // TLS global

void foo() {
    alignas(16) unsigned int SHAVITE_CNTS[4] = {0,0,0,0};   // automatic storage (initialized)
    alignas(16) static unsigned int SHAVITE256_XOR2[4];     // local static

    asm (
        "movaps xmm0, xmmword ptr %[PTXT]     \n\t"
        "movaps xmm1, xmmword ptr %[PTXT]+16  \n\t"   // x86 addressing modes are always offsetable
        "pxor   xmm2,  xmm2       \n\t"          // mix shorter insns with longer insns to help decode and uop-cache packing
        "movaps xmm3, xmmword ptr %[CNTS]+0     \n\t"
        "movaps xmm4, xmmword ptr %[XOR2_256]"

       : [CNTS] "+m" (SHAVITE_CNTS),    // outputs and read/write operands
         [PTXT] "+m" (SHAVITE_PTXT),
         [XOR2_256] "+m" (SHAVITE256_XOR2)

       : [MESS] "m" (SHAVITE_MESS)      // read-only inputs

       : "xmm0", "xmm1", "xmm2", "xmm3", "xmm4"  // clobbers: list all you use
    );
}

You can make it portable between 32-bit and 64-bit mode by avoiding xmm8..15, or by guarding that with #ifdef __x86_64__.
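A minimal sketch of that guard (macro name hypothetical); the asm body itself must of course also avoid xmm8..15 in 32-bit mode:

// Only x86-64 has xmm8..15; keep the clobber list valid under -m32 too.
#ifdef __x86_64__
# define HI_XMM_CLOBBERS , "xmm8", "xmm9", "xmm10", "xmm11"
#else
# define HI_XMM_CLOBBERS
#endif
//   ...
//   : "xmm0", "xmm1", "xmm2" HI_XMM_CLOBBERS);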
Note that the operand [PTXT] "+m" (SHAVITE_PTXT) means the whole array is an input/output, because SHAVITE_PTXT is a true array rather than a char*.
It expands to an addressing mode for the start of the object, of course, but you can offset it by a constant such as +16. The assembler accepts [rsp-24]+16 as equivalent to [rsp-8], so it just works for base-register or static addresses.
Telling the compiler that the whole array is an input and/or output means it can safely optimize around the asm statement even after inlining. For example, the compiler knows that writes to higher array elements also matter for the asm's input/output, not just the first byte. It can't keep later elements in registers across the asm, or reorder loads/stores to those arrays.
If you had used SHAVITE_PTXT[0] (or a pointer), the compiler would fill in the operand as byte ptr foobar in Intel syntax. Fortunately, with xmmword ptr byte ptr the first override takes priority and matches the operand size of movaps xmm0, xmmword ptr %[foo]. (In AT&T syntax the mnemonic carries the operand size via a suffix where necessary; the compiler doesn't fill anything in, so you don't run into this problem.)
Some of your arrays happen to be 16 bytes in size, so the compiler already fills in xmmword ptr, but the redundancy is fine there too.
If you only have pointers instead of arrays, see How can I indicate that the memory *pointed* to by an inline ASM argument may be used? for the "m" (*(unsigned (*)[16]) SHAVITE_MESS) syntax. You can use that as a true input operand, or alongside a "+r" pointer operand as a "dummy" input.
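A minimal sketch of that dummy-memory-operand pattern (function name hypothetical, compiled with -masm=intel like the example above):

// Telling the compiler all 64 bytes behind the pointer are an input means
// earlier stores into the buffer can't be dropped or reordered past the asm.
void load_mess(const unsigned int *mess)
{
    asm ("movaps xmm5, xmmword ptr %[MESS]"
         :
         : [MESS] "m" (*(const unsigned (*)[16]) mess)   // whole array is the input
         : "xmm5");
}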
Or better, ask for a SIMD register input, output, or read/write operand, e.g. [PTXT16] "+x" (*(__m128i*)&array[16]). It can pick any XMM register you haven't declared as clobbered. Use #include <immintrin.h> to define __m128i, or define it yourself with GNU C native vector syntax. __m128i is defined with __attribute__((may_alias)), so the pointer cast doesn't create strict-aliasing UB. This works especially well if the compiler can inline the function and keep a local variable in an XMM register across the asm statements, instead of hand-written asm doing stores/reloads to keep variables in memory.
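A minimal sketch of a register operand (function name hypothetical, again assuming -masm=intel):

#include <immintrin.h>

// The compiler picks the XMM registers, so no clobber list is needed for them.
__m128i xor_in_reg(__m128i a, __m128i b)
{
    asm ("pxor %0, %1"        // %0 / %1 expand to compiler-chosen xmm registers
         : "+x" (a)           // read/write SIMD register operand
         : "x" (b));          // read-only SIMD register input
    return a;
}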
Compiler output for the source above:


From the Godbolt compiler explorer, with gcc9.2. This is just the compiler's asm text output after filling in the %[stuff] in the template.
# g++ -O3 -masm=intel
foo():
        pxor    xmm0, xmm0
        movaps  XMMWORD PTR [rsp-24], xmm0      # compiler-generated zero-init array

        movaps xmm0, xmmword ptr fs:SHAVITE_PTXT@tpoff     
        movaps xmm1, xmmword ptr fs:SHAVITE_PTXT@tpoff+16  
        pxor   xmm2,  xmm2       
        movaps xmm3, xmmword ptr XMMWORD PTR [rsp-24]+0     
        movaps xmm4, xmmword ptr XMMWORD PTR foo()::SHAVITE256_XOR2[rip]
        ret

And this is disassembly of the assembled binary output:
foo():
 pxor   xmm0,xmm0
 movaps XMMWORD PTR [rsp-0x18],xmm0   # compiler-generated

 movaps xmm0,XMMWORD PTR fs:0xffffffffffffffe0
 movaps xmm1,XMMWORD PTR fs:0xfffffffffffffff0    # note the +16 worked
 pxor   xmm2,xmm2
 movaps xmm3,XMMWORD PTR [rsp-0x18]               # note the +0 assembled without syntax error
 movaps xmm4,XMMWORD PTR [rip+0x200ae5]        # 601080 <foo()::SHAVITE256_XOR2>
 ret

Note also that the non-TLS global used a RIP-relative addressing mode, but the TLS one didn't; it used a sign-extended [disp32] absolute addressing mode.

(In position-dependent code you could in theory use a RIP-relative addressing mode to reach a small absolute address like that relative to the TLS base. I don't think GCC does that, though.)


OK, I'll bite: why no mention of the "v" constraint (https://gcc.gnu.org/onlinedocs/gcc/Machine-Constraints.html)? Obviously it's no good for the arrays or the offsets, but if you're listing all the ways to access xmms, it seems like it belongs on the list. I'd hope it handles thread variables just fine. - David Wohlferd
@DavidWohlferd: Unless you can assume AVX512VL, "x" is better if you want data in vector registers, so you don't get xmm16..31 in 64-bit mode. But yes, a "+x" (*(__m128i*)&array[16]) operand would work. I was assuming the OP wanted to do all their own loads/stores inside the inline asm and keep the data in memory, or use it as scratch space. But since I mentioned locals, I should have mentioned having the compiler take inputs/outputs in XMM registers. - Peter Cordes
Ah, I knew there had to be a reason. You don't learn if you don't ask. - David Wohlferd
@DavidWohlferd: It was a good suggestion, though; I've added a paragraph about using the "x" constraint, since I'd neglected it. With clobbers it's still safe to use some hard-coded registers as well. - Peter Cordes
Thanks, this is really helpful. You're right that a local (caller-passed context) is much better than a global. - Malcolm MacLeod

As the other answer says, the inline asm is a mess and misused. Rewriting with intrinsics should be a good option, and it lets you compile with or without -mavx (or -march=haswell, or -march=znver1, etc.) to let the compiler save a bunch of register-copy instructions.
It also lets the compiler optimize (vector) register allocation and when to load/store, which is something compilers are good at.
Well, I wasn't able to use your test data. It uses several routines that aren't provided here, and I'm too lazy to go looking for them.
Still, I cobbled together some test data of my own. My E256() returns the same values as yours. That doesn't mean I've got it 100% right (you'll want to do your own testing), but given all the xor/aesenc going on, I'd expect errors to show up. Converting to intrinsics wasn't particularly hard. Mostly you just need to find the equivalent _mm_ function for a given asm instruction. That, and track down all the places where you typed x12 when you meant x13 (grrr).
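For instance, a few representative asm-to-intrinsic pairs from the translation below:

// movaps xmm12, xmm8       ->  x12 = x8;
// pxor   xmm8,  xmm6       ->  x8  = _mm_xor_si128(x8, x6);
// aesenc xmm12, xmm2       ->  x12 = _mm_aesenc_si128(x12, x2);
// pshufb xmm12, [REVERSE]  ->  x12 = _mm_shuffle_epi8(x12, xtemp);
// psrldq xmm6, 4           ->  x6  = _mm_srli_si128(x6, 4);
// pslldq xmm6, 12          ->  x6  = _mm_slli_si128(x6, 12);
// pshufd xmm3, xmm3, 135   ->  x3  = _mm_shuffle_epi32(x3, 135);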
Note that while this code uses variables named x0-x15, that's only to make the translation easier. There is no correlation between these C variable names and the registers gcc uses when compiling the code. Also, gcc applies a lot of knowledge about SSE to reorder instructions, so the output (especially for -O3) looks very different from the original asm. If you think you can compare them side by side to check correctness (like I did), expect to be frustrated.
This code contains both the original routines (prefixed with "old") and the new ones, and calls both from main() to see whether they produce the same output. I made no effort to optimize the intrinsics; once it worked, I stopped. It's all C code now, so any further improvements are left to you.
That said, gcc can optimize intrinsics (something it cannot do with asm), which means that if you rebuild this code with -mavx2, the generated code is quite different.
Some stats:
  • The original (fully unrolled) code for E256() takes 287 instructions.
  • Building with intrinsics without -mavx2 takes 251.
  • Building with intrinsics with -mavx2 takes 196.
I didn't do any timing, but I'd like to think dropping ~100 lines of asm helps. On the other hand, sometimes gcc does a lousy job optimizing SSE, so don't assume anything.
Hope this helps.
// Compile with -O3 -msse4.2 -maes
//           or -O3 -msse4.2 -maes -mavx2
#include <wmmintrin.h>
#include <x86intrin.h>
#include <stdio.h>

///////////////////////////
#define tos(a) #a
#define tostr(a) tos(a)

#define rev_reg_0321(j){ asm ("pshufb xmm" tostr(j)", [oldSHAVITE_REVERSE]"); }

#define replace_aes(i, j){ asm ("aesenc xmm" tostr(i)", xmm" tostr(j)""); }

__attribute__ ((aligned (16))) unsigned int oldSHAVITE_MESS[16];
__attribute__ ((aligned (16))) unsigned char oldSHAVITE_PTXT[8*4];
__attribute__ ((aligned (16))) unsigned int oldSHAVITE_CNTS[4] = {0,0,0,0};
__attribute__ ((aligned (16))) unsigned int oldSHAVITE_REVERSE[4] = {0x07060504, 0x0b0a0908, 0x0f0e0d0c, 0x03020100 };
__attribute__ ((aligned (16))) unsigned int oldSHAVITE256_XOR2[4] = {0x0, 0xFFFFFFFF, 0x0, 0x0};
__attribute__ ((aligned (16))) unsigned int oldSHAVITE256_XOR3[4] = {0x0, 0x0, 0xFFFFFFFF, 0x0};
__attribute__ ((aligned (16))) unsigned int oldSHAVITE256_XOR4[4] = {0x0, 0x0, 0x0, 0xFFFFFFFF};

#define oldmixing() do {\
    asm("movaps xmm11, xmm15");\
    asm("movaps xmm10, xmm14");\
    asm("movaps xmm9, xmm13");\
    asm("movaps xmm8, xmm12");\
\
    asm("movaps xmm6, xmm11");\
    asm("psrldq xmm6, 4");\
    asm("pxor xmm8, xmm6");\
    asm("movaps xmm6, xmm8");\
    asm("pslldq xmm6, 12");\
    asm("pxor xmm8, xmm6");\
\
    asm("movaps xmm7, xmm8");\
    asm("psrldq xmm7, 4");\
    asm("pxor xmm9, xmm7");\
    asm("movaps xmm7, xmm9");\
    asm("pslldq xmm7, 12");\
    asm("pxor xmm9, xmm7");\
\
    asm("movaps xmm6, xmm9");\
    asm("psrldq xmm6, 4");\
    asm("pxor xmm10, xmm6");\
    asm("movaps xmm6, xmm10");\
    asm("pslldq xmm6, 12");\
    asm("pxor xmm10, xmm6");\
\
    asm("movaps xmm7, xmm10");\
    asm("psrldq xmm7, 4");\
    asm("pxor xmm11, xmm7");\
    asm("movaps xmm7, xmm11");\
    asm("pslldq xmm7, 12");\
    asm("pxor xmm11, xmm7");\
} while(0);

void oldE256()
{
    asm (".intel_syntax noprefix");

    /* (L,R) = (xmm0,xmm1) */
    asm ("movaps xmm0, [oldSHAVITE_PTXT]");
    asm ("movaps xmm1, [oldSHAVITE_PTXT+16]");
    asm ("movaps xmm3, [oldSHAVITE_CNTS]");
    asm ("movaps xmm4, [oldSHAVITE256_XOR2]");
    asm ("pxor xmm2, xmm2");

    /* init key schedule */
    asm ("movaps xmm8, [oldSHAVITE_MESS]");
    asm ("movaps xmm9, [oldSHAVITE_MESS+16]");
    asm ("movaps xmm10, [oldSHAVITE_MESS+32]");
    asm ("movaps xmm11, [oldSHAVITE_MESS+48]");

    /* xmm8..xmm11 = rk[0..15] */

    /* start key schedule */
    asm ("movaps xmm12, xmm8");
    asm ("movaps xmm13, xmm9");
    asm ("movaps xmm14, xmm10");
    asm ("movaps xmm15, xmm11");

    rev_reg_0321(12);
    rev_reg_0321(13);
    rev_reg_0321(14);
    rev_reg_0321(15);
    replace_aes(12, 2);
    replace_aes(13, 2);
    replace_aes(14, 2);
    replace_aes(15, 2);

    asm ("pxor xmm12, xmm3");
    asm ("pxor xmm12, xmm4");
    asm ("movaps xmm4, [oldSHAVITE256_XOR3]");
    asm ("pxor xmm12, xmm11");
    asm ("pxor xmm13, xmm12");
    asm ("pxor xmm14, xmm13");
    asm ("pxor xmm15, xmm14");
    /* xmm12..xmm15 = rk[16..31] */

    /* F3 - first round */

    asm ("movaps xmm6, xmm8");
    asm ("pxor xmm8, xmm1");
    replace_aes(8, 9);
    replace_aes(8, 10);
    replace_aes(8, 2);
    asm ("pxor xmm0, xmm8");
    asm ("movaps xmm8, xmm6");

    /* F3 - second round */

    asm ("movaps xmm6, xmm11");
    asm ("pxor xmm11, xmm0");
    replace_aes(11, 12);
    replace_aes(11, 13);
    replace_aes(11, 2);
    asm ("pxor xmm1, xmm11");
    asm ("movaps xmm11, xmm6");

    /* key schedule */
    oldmixing();

    /* xmm8..xmm11 - rk[32..47] */

    /* F3 - third round */
    asm ("movaps xmm6, xmm14");
    asm ("pxor xmm14, xmm1");
    replace_aes(14, 15);
    replace_aes(14, 8);
    replace_aes(14, 2);
    asm ("pxor xmm0, xmm14");
    asm ("movaps xmm14, xmm6");

    /* key schedule */

    asm ("pshufd xmm3, xmm3,135");

    asm ("movaps xmm12, xmm8");
    asm ("movaps xmm13, xmm9");
    asm ("movaps xmm14, xmm10");
    asm ("movaps xmm15, xmm11");
    rev_reg_0321(12);
    rev_reg_0321(13);
    rev_reg_0321(14);
    rev_reg_0321(15);
    replace_aes(12, 2);
    replace_aes(13, 2);
    replace_aes(14, 2);
    replace_aes(15, 2);

    asm ("pxor xmm12, xmm11");
    asm ("pxor xmm14, xmm3");
    asm ("pxor xmm14, xmm4");
    asm ("movaps xmm4, [oldSHAVITE256_XOR4]");
    asm ("pxor xmm13, xmm12");
    asm ("pxor xmm14, xmm13");
    asm ("pxor xmm15, xmm14");

    /* xmm12..xmm15 - rk[48..63] */

    /* F3 - fourth round */
    asm ("movaps xmm6, xmm9");
    asm ("pxor xmm9, xmm0");
    replace_aes(9, 10);
    replace_aes(9, 11);
    replace_aes(9, 2);
    asm ("pxor xmm1, xmm9");
    asm ("movaps xmm9, xmm6");

    /* key schedule */
    oldmixing();
    /* xmm8..xmm11 = rk[64..79] */

    /* F3 - fifth round */
    asm ("movaps xmm6, xmm12");
    asm ("pxor xmm12, xmm1");
    replace_aes(12, 13);
    replace_aes(12, 14);
    replace_aes(12, 2);
    asm ("pxor xmm0, xmm12");
    asm ("movaps xmm12, xmm6");

    /* F3 - sixth round */
    asm ("movaps xmm6, xmm15");
    asm ("pxor xmm15, xmm0");
    replace_aes(15, 8);
    replace_aes(15, 9);
    replace_aes(15, 2);
    asm ("pxor xmm1, xmm15");
    asm ("movaps xmm15, xmm6");

    /* key schedule */
    asm ("pshufd xmm3, xmm3, 147");

    asm ("movaps xmm12, xmm8");
    asm ("movaps xmm13, xmm9");
    asm ("movaps xmm14, xmm10");
    asm ("movaps xmm15, xmm11");
    rev_reg_0321(12);
    rev_reg_0321(13);
    rev_reg_0321(14);
    rev_reg_0321(15);
    replace_aes(12, 2);
    replace_aes(13, 2);
    replace_aes(14, 2);
    replace_aes(15, 2);
    asm ("pxor xmm12, xmm11");
    asm ("pxor xmm13, xmm3");
    asm ("pxor xmm13, xmm4");
    asm ("pxor xmm13, xmm12");
    asm ("pxor xmm14, xmm13");
    asm ("pxor xmm15, xmm14");

    /* xmm12..xmm15 = rk[80..95] */

    /* F3 - seventh round */
    asm ("movaps xmm6, xmm10");
    asm ("pxor xmm10, xmm1");
    replace_aes(10, 11);
    replace_aes(10, 12);
    replace_aes(10, 2);
    asm ("pxor xmm0, xmm10");
    asm ("movaps xmm10, xmm6");

    /* key schedule */
    oldmixing();

    /* xmm8..xmm11 = rk[96..111] */

    /* F3 - eighth round */
    asm ("movaps xmm6, xmm13");
    asm ("pxor xmm13, xmm0");
    replace_aes(13, 14);
    replace_aes(13, 15);
    replace_aes(13, 2);
    asm ("pxor xmm1, xmm13");
    asm ("movaps xmm13, xmm6");

    /* key schedule */
    asm ("pshufd xmm3, xmm3, 135");

    asm ("movaps xmm12, xmm8");
    asm ("movaps xmm13, xmm9");
    asm ("movaps xmm14, xmm10");
    asm ("movaps xmm15, xmm11");
    rev_reg_0321(12);
    rev_reg_0321(13);
    rev_reg_0321(14);
    rev_reg_0321(15);
    replace_aes(12, 2);
    replace_aes(13, 2);
    replace_aes(14, 2);
    replace_aes(15, 2);
    asm ("pxor xmm12, xmm11");
    asm ("pxor xmm15, xmm3");
    asm ("pxor xmm15, xmm4");
    asm ("pxor xmm13, xmm12");
    asm ("pxor xmm14, xmm13");
    asm ("pxor xmm15, xmm14");

    /* xmm12..xmm15 = rk[112..127] */

    /* F3 - ninth round */
    asm ("movaps xmm6, xmm8");
    asm ("pxor xmm8, xmm1");
    replace_aes(8, 9);
    replace_aes(8, 10);
    replace_aes(8, 2);
    asm ("pxor xmm0, xmm8");
    asm ("movaps xmm8, xmm6");
    /* F3 - tenth round */
    asm ("movaps xmm6, xmm11");
    asm ("pxor xmm11, xmm0");
    replace_aes(11, 12);
    replace_aes(11, 13);
    replace_aes(11, 2);
    asm ("pxor xmm1, xmm11");
    asm ("movaps xmm11, xmm6");

    /* key schedule */
    oldmixing();

    /* xmm8..xmm11 = rk[128..143] */

    /* F3 - eleventh round */
    asm ("movaps xmm6, xmm14");
    asm ("pxor xmm14, xmm1");
    replace_aes(14, 15);
    replace_aes(14, 8);
    replace_aes(14, 2);
    asm ("pxor xmm0, xmm14");
    asm ("movaps xmm14, xmm6");

    /* F3 - twelfth round */
    asm ("movaps xmm6, xmm9");
    asm ("pxor xmm9, xmm0");
    replace_aes(9, 10);
    replace_aes(9, 11);
    replace_aes(9, 2);
    asm ("pxor xmm1, xmm9");
    asm ("movaps xmm9, xmm6");

    /* feedforward */
    asm ("pxor xmm0, [oldSHAVITE_PTXT]");
    asm ("pxor xmm1, [oldSHAVITE_PTXT+16]");
    asm ("movaps [oldSHAVITE_PTXT], xmm0");
    asm ("movaps [oldSHAVITE_PTXT+16], xmm1");
    asm (".att_syntax noprefix");

    return;
}

void oldCompress256(const unsigned char *message_block, unsigned char *chaining_value, unsigned long long counter,
    const unsigned char salt[32])
{
    int i, j;

    for (i=0;i<8*4;i++)
        oldSHAVITE_PTXT[i]=chaining_value[i];

     for (i=0;i<16;i++)
        oldSHAVITE_MESS[i] = *((unsigned int*)(message_block+4*i));

    oldSHAVITE_CNTS[0] = (unsigned int)(counter & 0xFFFFFFFFULL);
    oldSHAVITE_CNTS[1] = (unsigned int)(counter>>32);
    /* encryption + Davies-Meyer transform */
    oldE256();

    for (i=0; i<4*8; i++)
        chaining_value[i]=oldSHAVITE_PTXT[i];

     return;
}

////////////////////////////////

__attribute__ ((aligned (16))) unsigned int SHAVITE_MESS[16];
__attribute__ ((aligned (16))) unsigned char SHAVITE_PTXT[8*4];
__attribute__ ((aligned (16))) unsigned int SHAVITE_CNTS[4] = {0,0,0,0};
__attribute__ ((aligned (16))) unsigned int SHAVITE_REVERSE[4] = {0x07060504, 0x0b0a0908, 0x0f0e0d0c, 0x03020100 };
__attribute__ ((aligned (16))) unsigned int SHAVITE256_XOR2[4] = {0x0, 0xFFFFFFFF, 0x0, 0x0};
__attribute__ ((aligned (16))) unsigned int SHAVITE256_XOR3[4] = {0x0, 0x0, 0xFFFFFFFF, 0x0};
__attribute__ ((aligned (16))) unsigned int SHAVITE256_XOR4[4] = {0x0, 0x0, 0x0, 0xFFFFFFFF};

#define mixing() do {\
    x11 = x15; \
    x10 = x14; \
    x9 = x13;\
    x8 = x12;\
\
    x6 = x11;\
    x6 = _mm_srli_si128(x6, 4);\
    x8 = _mm_xor_si128(x8, x6);\
    x6 = x8;\
    x6 = _mm_slli_si128(x6, 12);\
    x8 = _mm_xor_si128(x8, x6);\
\
    x7 = x8;\
    x7 = _mm_srli_si128(x7, 4);\
    x9 = _mm_xor_si128(x9, x7);\
    x7 = x9;\
    x7 = _mm_slli_si128(x7, 12);\
    x9 = _mm_xor_si128(x9, x7);\
\
    x6 = x9;\
    x6 = _mm_srli_si128(x6, 4);\
    x10 = _mm_xor_si128(x10, x6);\
    x6 = x10;\
    x6 = _mm_slli_si128(x6, 12);\
    x10 = _mm_xor_si128(x10, x6);\
\
    x7 = x10;\
    x7 = _mm_srli_si128(x7, 4);\
    x11 = _mm_xor_si128(x11, x7);\
    x7 = x11;\
    x7 = _mm_slli_si128(x7, 12);\
    x11 = _mm_xor_si128(x11, x7);\
} while(0);

void E256()
{
    __m128i x0;
    __m128i x1;
    __m128i x2;
    __m128i x3;
    __m128i x4;
    __m128i x5;
    __m128i x6;
    __m128i x7;
    __m128i x8;
    __m128i x9;
    __m128i x10;
    __m128i x11;
    __m128i x12;
    __m128i x13;
    __m128i x14;
    __m128i x15;

    /* (L,R) = (xmm0,xmm1) */
    const __m128i ptxt1 = _mm_loadu_si128((const __m128i*)SHAVITE_PTXT);
    const __m128i ptxt2 = _mm_loadu_si128((const __m128i*)(SHAVITE_PTXT+16));

    x0 = ptxt1;
    x1 = ptxt2;

    x3 = _mm_loadu_si128((__m128i*)SHAVITE_CNTS);
    x4 = _mm_loadu_si128((__m128i*)SHAVITE256_XOR2);
    x2 = _mm_setzero_si128();

    /* init key schedule */
    x8 = _mm_loadu_si128((__m128i*)SHAVITE_MESS);
    x9 = _mm_loadu_si128((__m128i*)(SHAVITE_MESS+4));
    x10 = _mm_loadu_si128((__m128i*)(SHAVITE_MESS+8));
    x11 = _mm_loadu_si128((__m128i*)(SHAVITE_MESS+12));

    /* xmm8..xmm11 = rk[0..15] */

    /* start key schedule */
    x12 = x8;
    x13 = x9;
    x14 = x10;
    x15 = x11;

const __m128i xtemp = _mm_loadu_si128((__m128i*)SHAVITE_REVERSE);
    x12 = _mm_shuffle_epi8(x12, xtemp);
    x13 = _mm_shuffle_epi8(x13, xtemp);
    x14 = _mm_shuffle_epi8(x14, xtemp);
    x15 = _mm_shuffle_epi8(x15, xtemp);

    x12 = _mm_aesenc_si128(x12, x2);
    x13 = _mm_aesenc_si128(x13, x2);
    x14 = _mm_aesenc_si128(x14, x2);
    x15 = _mm_aesenc_si128(x15, x2);

    x12 = _mm_xor_si128(x12, x3);
    x12 = _mm_xor_si128(x12, x4);
    x4 = _mm_loadu_si128((__m128i*)SHAVITE256_XOR3);
    x12 = _mm_xor_si128(x12, x11);
    x13 = _mm_xor_si128(x13, x12);
    x14 = _mm_xor_si128(x14, x13);
    x15 = _mm_xor_si128(x15, x14);
    /* xmm12..xmm15 = rk[16..31] */

    /* F3 - first round */

    x6 = x8;
    x8 = _mm_xor_si128(x8, x1);
    x8 = _mm_aesenc_si128(x8, x9);
    x8 = _mm_aesenc_si128(x8, x10);
    x8 = _mm_aesenc_si128(x8, x2);
    x0 = _mm_xor_si128(x0, x8);
    x8 = x6;

    /* F3 - second round */

    x6 = x11;
    x11 = _mm_xor_si128(x11, x0);
    x11 = _mm_aesenc_si128(x11, x12);
    x11 = _mm_aesenc_si128(x11, x13);
    x11 = _mm_aesenc_si128(x11, x2);
    x1 = _mm_xor_si128(x1, x11);
    x11 = x6;

    /* key schedule */
    mixing();

    /* xmm8..xmm11 - rk[32..47] */

    /* F3 - third round */
    x6 = x14;
    x14 = _mm_xor_si128(x14, x1);
    x14 = _mm_aesenc_si128(x14, x15);
    x14 = _mm_aesenc_si128(x14, x8);
    x14 = _mm_aesenc_si128(x14, x2);
    x0 = _mm_xor_si128(x0, x14);
    x14 = x6;

    /* key schedule */

    x3 = _mm_shuffle_epi32(x3, 135);

    x12 = x8;
    x13 = x9;
    x14 = x10;
    x15 = x11;
    x12 = _mm_shuffle_epi8(x12, xtemp);
    x13 = _mm_shuffle_epi8(x13, xtemp);
    x14 = _mm_shuffle_epi8(x14, xtemp);
    x15 = _mm_shuffle_epi8(x15, xtemp);
    x12 = _mm_aesenc_si128(x12, x2);
    x13 = _mm_aesenc_si128(x13, x2);
    x14 = _mm_aesenc_si128(x14, x2);
    x15 = _mm_aesenc_si128(x15, x2);

    x12 = _mm_xor_si128(x12, x11);
    x14 = _mm_xor_si128(x14, x3);
    x14 = _mm_xor_si128(x14, x4);
    x4 = _mm_loadu_si128((__m128i*)SHAVITE256_XOR4);
    x13 = _mm_xor_si128(x13, x12);
    x14 = _mm_xor_si128(x14, x13);
    x15 = _mm_xor_si128(x15, x14);

    /* xmm12..xmm15 - rk[48..63] */

    /* F3 - fourth round */
    x6 = x9;
    x9 = _mm_xor_si128(x9, x0);
    x9 = _mm_aesenc_si128(x9, x10);
    x9 = _mm_aesenc_si128(x9, x11);
    x9 = _mm_aesenc_si128(x9, x2);
    x1 = _mm_xor_si128(x1, x9);
    x9 = x6;

    /* key schedule */
    mixing();
    /* xmm8..xmm11 = rk[64..79] */

    /* F3 - fifth round */
    x6 = x12;
    x12 = _mm_xor_si128(x12, x1);
    x12 = _mm_aesenc_si128(x12, x13);
    x12 = _mm_aesenc_si128(x12, x14);
    x12 = _mm_aesenc_si128(x12, x2);
    x0 = _mm_xor_si128(x0, x12);
    x12 = x6;

    /* F3 - sixth round */
    x6 = x15;
    x15 = _mm_xor_si128(x15, x0);
    x15 = _mm_aesenc_si128(x15, x8);
    x15 = _mm_aesenc_si128(x15, x9);
    x15 = _mm_aesenc_si128(x15, x2);
    x1 = _mm_xor_si128(x1, x15);
    x15 = x6;

    /* key schedule */
    x3 = _mm_shuffle_epi32(x3, 147);

    x12 = x8;
    x13 = x9;
    x14 = x10;
    x15 = x11;
    x12 = _mm_shuffle_epi8(x12, xtemp);
    x13 = _mm_shuffle_epi8(x13, xtemp);
    x14 = _mm_shuffle_epi8(x14, xtemp);
    x15 = _mm_shuffle_epi8(x15, xtemp);
    x12 = _mm_aesenc_si128(x12, x2);
    x13 = _mm_aesenc_si128(x13, x2);
    x14 = _mm_aesenc_si128(x14, x2);
    x15 = _mm_aesenc_si128(x15, x2);
    x12 = _mm_xor_si128(x12, x11);
    x13 = _mm_xor_si128(x13, x3);
    x13 = _mm_xor_si128(x13, x4);
    x13 = _mm_xor_si128(x13, x12);
    x14 = _mm_xor_si128(x14, x13);
    x15 = _mm_xor_si128(x15, x14);

    /* xmm12..xmm15 = rk[80..95] */

    /* F3 - seventh round */
    x6 = x10;
    x10 = _mm_xor_si128(x10, x1);
    x10 = _mm_aesenc_si128(x10, x11);
    x10 = _mm_aesenc_si128(x10, x12);
    x10 = _mm_aesenc_si128(x10, x2);
    x0 = _mm_xor_si128(x0, x10);
    x10 = x6;

    /* key schedule */
    mixing();

    /* xmm8..xmm11 = rk[96..111] */

    /* F3 - eighth round */
    x6 = x13;
    x13 = _mm_xor_si128(x13, x0);
    x13 = _mm_aesenc_si128(x13, x14);
    x13 = _mm_aesenc_si128(x13, x15);
    x13 = _mm_aesenc_si128(x13, x2);
    x1 = _mm_xor_si128(x1, x13);
    x13 = x6;

    /* key schedule */
    x3 = _mm_shuffle_epi32(x3, 135);

    x12 = x8;
    x13 = x9;
    x14 = x10;
    x15 = x11;
    x12 = _mm_shuffle_epi8(x12, xtemp);
    x13 = _mm_shuffle_epi8(x13, xtemp);
    x14 = _mm_shuffle_epi8(x14, xtemp);
    x15 = _mm_shuffle_epi8(x15, xtemp);
    x12 = _mm_aesenc_si128(x12, x2);
    x13 = _mm_aesenc_si128(x13, x2);
    x14 = _mm_aesenc_si128(x14, x2);
    x15 = _mm_aesenc_si128(x15, x2);
    x12 = _mm_xor_si128(x12, x11);
    x15 = _mm_xor_si128(x15, x3);
    x15 = _mm_xor_si128(x15, x4);
    x13 = _mm_xor_si128(x13, x12);
    x14 = _mm_xor_si128(x14, x13);
    x15 = _mm_xor_si128(x15, x14);

    /* xmm12..xmm15 = rk[112..127] */

    /* F3 - ninth round */
    x6 = x8;
    x8 = _mm_xor_si128(x8, x1);
    x8 = _mm_aesenc_si128(x8, x9);
    x8 = _mm_aesenc_si128(x8, x10);
    x8 = _mm_aesenc_si128(x8, x2);
    x0 = _mm_xor_si128(x0, x8);
    x8 = x6;
    /* F3 - tenth round */
    x6 = x11;
    x11 = _mm_xor_si128(x11, x0);
    x11 = _mm_aesenc_si128(x11, x12);
    x11 = _mm_aesenc_si128(x11, x13);
    x11 = _mm_aesenc_si128(x11, x2);
    x1 = _mm_xor_si128(x1, x11);
    x11 = x6;

    /* key schedule */
    mixing();

    /* xmm8..xmm11 = rk[128..143] */

    /* F3 - eleventh round */
    x6 = x14;
    x14 = _mm_xor_si128(x14, x1);
    x14 = _mm_aesenc_si128(x14, x15);
    x14 = _mm_aesenc_si128(x14, x8);
    x14 = _mm_aesenc_si128(x14, x2);
    x0 = _mm_xor_si128(x0, x14);
    x14 = x6;

    /* F3 - twelfth round */
    x6 = x9;
    x9 = _mm_xor_si128(x9, x0);
    x9 = _mm_aesenc_si128(x9, x10);
    x9 = _mm_aesenc_si128(x9, x11);
    x9 = _mm_aesenc_si128(x9, x2);
    x1 = _mm_xor_si128(x1, x9);
    x9 = x6;

    /* feedforward */
    x0 = _mm_xor_si128(x0, ptxt1);
    x1 = _mm_xor_si128(x1, ptxt2);
    _mm_storeu_si128((__m128i *)SHAVITE_PTXT, x0);
    _mm_storeu_si128((__m128i *)(SHAVITE_PTXT + 16), x1);

    return;
}

void Compress256(const unsigned char *message_block, unsigned char *chaining_value, unsigned long long counter,
    const unsigned char salt[32])
{
    int i, j;

    for (i=0;i<8*4;i++)
        SHAVITE_PTXT[i]=chaining_value[i];

    for (i=0;i<16;i++)
        SHAVITE_MESS[i] = *((unsigned int*)(message_block+4*i));

    SHAVITE_CNTS[0] = (unsigned int)(counter & 0xFFFFFFFFULL);
    SHAVITE_CNTS[1] = (unsigned int)(counter>>32);
    /* encryption + Davies-Meyer transform */
    E256();

    for (i=0; i<4*8; i++)
        chaining_value[i]=SHAVITE_PTXT[i];

     return;
}

int main(int argc, char *argv[])
{
    const int cvlen = 32;
    unsigned char *cv = (unsigned char *)malloc(cvlen);

    for (int x=0; x < cvlen; x++)
        cv[x] = x + argc;

    const int mblen = 64;
    unsigned char *mb = (unsigned char *)malloc(mblen);

    for (int x=0; x < mblen; x++)
        mb[x] = x + argc;

    unsigned long long counter = 0x1234567812345678ull;

    unsigned char s[32] = {0};
    oldCompress256(mb, cv, counter, s);

    printf("old: ");
    for (int x=0; x < cvlen; x++)
        printf("%2x ", cv[x]);
    printf("\n");

    for (int x=0; x < cvlen; x++)
        cv[x] = x + argc;

    Compress256(mb, cv, counter, s);

    printf("new: ");
    for (int x=0; x < cvlen; x++)
        printf("%2x ", cv[x]);
    printf("\n");
}

Edit:

The globals were only being used to pass values between C and the asm. Perhaps the asm programmer didn't know how to access parameters? In any event, they are unnecessary (and were the source of the thread-safety problems). Here's the code without them (plus a few cosmetic changes):


#include <x86intrin.h>
#include <stdio.h>
#include <time.h>

#define mixing() \
    x11 = x15;\
    x10 = x14;\
    x9 = x13;\
    x8 = x12;\
\
    x6 = x11;\
    x6 = _mm_srli_si128(x6, 4);\
    x8 = _mm_xor_si128(x8, x6);\
    x6 = x8;\
    x6 = _mm_slli_si128(x6, 12);\
    x8 = _mm_xor_si128(x8, x6);\
\
    x7 = x8;\
    x7 = _mm_srli_si128(x7, 4);\
    x9 = _mm_xor_si128(x9, x7);\
    x7 = x9;\
    x7 = _mm_slli_si128(x7, 12);\
    x9 = _mm_xor_si128(x9, x7);\
\
    x6 = x9;\
    x6 = _mm_srli_si128(x6, 4);\
    x10 = _mm_xor_si128(x10, x6);\
    x6 = x10;\
    x6 = _mm_slli_si128(x6, 12);\
    x10 = _mm_xor_si128(x10, x6);\
\
    x7 = x10;\
    x7 = _mm_srli_si128(x7, 4);\
    x11 = _mm_xor_si128(x11, x7);\
    x7 = x11;\
    x7 = _mm_slli_si128(x7, 12);\
    x11 = _mm_xor_si128(x11, x7);

// If mess & chain won't be 16byte aligned, change _mm_load to _mm_loadu and
// _mm_store to _mm_storeu
void Compress256(const __m128i *mess, __m128i *chain, unsigned long long counter, const unsigned char salt[32])
{
    // note: _mm_set_epi32 uses (int e3, int e2, int e1, int e0)
    const __m128i SHAVITE_REVERSE = _mm_set_epi32(0x03020100, 0x0f0e0d0c, 0x0b0a0908, 0x07060504);
    const __m128i SHAVITE256_XOR2 = _mm_set_epi32(0x0, 0x0, 0xFFFFFFFF, 0x0);
    const __m128i SHAVITE256_XOR3 = _mm_set_epi32(0x0, 0xFFFFFFFF, 0x0, 0x0);
    const __m128i SHAVITE256_XOR4 = _mm_set_epi32(0xFFFFFFFF, 0x0, 0x0, 0x0);
    const __m128i SHAVITE_CNTS =
        _mm_set_epi32(0, 0, (unsigned int)(counter>>32), (unsigned int)(counter & 0xFFFFFFFFULL));

    __m128i x0, x1, x2, x3, x5, x6, x7, x8, x9, x10, x11, x12, x13, x14, x15;

    /* (L,R) = (xmm0,xmm1) */
    const __m128i ptxt1 = _mm_load_si128(chain);
    const __m128i ptxt2 = _mm_load_si128(chain+1);

    x0 = ptxt1;
    x1 = ptxt2;

    x3 = SHAVITE_CNTS;
    x2 = _mm_setzero_si128();

    /* init key schedule */
    x8 = _mm_load_si128(mess);
    x9 = _mm_load_si128(mess+1);
    x10 = _mm_load_si128(mess+2);
    x11 = _mm_load_si128(mess+3);

    /* xmm8..xmm11 = rk[0..15] */

    /* start key schedule */
    x12 = x8;
    x13 = x9;
    x14 = x10;
    x15 = x11;

    x12 = _mm_shuffle_epi8(x12, SHAVITE_REVERSE);
    x13 = _mm_shuffle_epi8(x13, SHAVITE_REVERSE);
    x14 = _mm_shuffle_epi8(x14, SHAVITE_REVERSE);
    x15 = _mm_shuffle_epi8(x15, SHAVITE_REVERSE);

    x12 = _mm_aesenc_si128(x12, x2);
    x13 = _mm_aesenc_si128(x13, x2);
    x14 = _mm_aesenc_si128(x14, x2);
    x15 = _mm_aesenc_si128(x15, x2);

    x12 = _mm_xor_si128(x12, x3);
    x12 = _mm_xor_si128(x12, SHAVITE256_XOR2);
    x12 = _mm_xor_si128(x12, x11);
    x13 = _mm_xor_si128(x13, x12);
    x14 = _mm_xor_si128(x14, x13);
    x15 = _mm_xor_si128(x15, x14);

    /* xmm12..xmm15 = rk[16..31] */

    /* F3 - first round */
    x6 = x8;
    x8 = _mm_xor_si128(x8, x1);
    x8 = _mm_aesenc_si128(x8, x9);
    x8 = _mm_aesenc_si128(x8, x10);
    x8 = _mm_aesenc_si128(x8, x2);
    x0 = _mm_xor_si128(x0, x8);
    x8 = x6;

    /* F3 - second round */
    x6 = x11;
    x11 = _mm_xor_si128(x11, x0);
    x11 = _mm_aesenc_si128(x11, x12);
    x11 = _mm_aesenc_si128(x11, x13);
    x11 = _mm_aesenc_si128(x11, x2);
    x1 = _mm_xor_si128(x1, x11);
    x11 = x6;

    /* key schedule */
    mixing();

    /* xmm8..xmm11 - rk[32..47] */

    /* F3 - third round */
    x6 = x14;
    x14 = _mm_xor_si128(x14, x1);
    x14 = _mm_aesenc_si128(x14, x15);
    x14 = _mm_aesenc_si128(x14, x8);
    x14 = _mm_aesenc_si128(x14, x2);
    x0 = _mm_xor_si128(x0, x14);
    x14 = x6;

    /* key schedule */
    x3 = _mm_shuffle_epi32(x3, 135);

    x12 = x8;
    x13 = x9;
    x14 = x10;
    x15 = x11;
    x12 = _mm_shuffle_epi8(x12, SHAVITE_REVERSE);
    x13 = _mm_shuffle_epi8(x13, SHAVITE_REVERSE);
    x14 = _mm_shuffle_epi8(x14, SHAVITE_REVERSE);
    x15 = _mm_shuffle_epi8(x15, SHAVITE_REVERSE);
    x12 = _mm_aesenc_si128(x12, x2);
    x13 = _mm_aesenc_si128(x13, x2);
    x14 = _mm_aesenc_si128(x14, x2);
    x15 = _mm_aesenc_si128(x15, x2);

    x12 = _mm_xor_si128(x12, x11);
    x14 = _mm_xor_si128(x14, x3);
    x14 = _mm_xor_si128(x14, SHAVITE256_XOR3);
    x13 = _mm_xor_si128(x13, x12);
    x14 = _mm_xor_si128(x14, x13);
    x15 = _mm_xor_si128(x15, x14);

    /* xmm12..xmm15 - rk[48..63] */

    /* F3 - fourth round */
    x6 = x9;
    x9 = _mm_xor_si128(x9, x0);
    x9 = _mm_aesenc_si128(x9, x10);
    x9 = _mm_aesenc_si128(x9, x11);
    x9 = _mm_aesenc_si128(x9, x2);
    x1 = _mm_xor_si128(x1, x9);
    x9 = x6;

    /* key schedule */
    mixing();

    /* xmm8..xmm11 = rk[64..79] */

    /* F3 - fifth round */
    x6 = x12;
    x12 = _mm_xor_si128(x12, x1);
    x12 = _mm_aesenc_si128(x12, x13);
    x12 = _mm_aesenc_si128(x12, x14);
    x12 = _mm_aesenc_si128(x12, x2);
    x0 = _mm_xor_si128(x0, x12);
    x12 = x6;

    /* F3 - sixth round */
    x6 = x15;
    x15 = _mm_xor_si128(x15, x0);
    x15 = _mm_aesenc_si128(x15, x8);
    x15 = _mm_aesenc_si128(x15, x9);
    x15 = _mm_aesenc_si128(x15, x2);
    x1 = _mm_xor_si128(x1, x15);
    x15 = x6;

    /* key schedule */
    x3 = _mm_shuffle_epi32(x3, 147);

    x12 = x8;
    x13 = x9;
    x14 = x10;
    x15 = x11;
    x12 = _mm_shuffle_epi8(x12, SHAVITE_REVERSE);
    x13 = _mm_shuffle_epi8(x13, SHAVITE_REVERSE);
    x14 = _mm_shuffle_epi8(x14, SHAVITE_REVERSE);
    x15 = _mm_shuffle_epi8(x15, SHAVITE_REVERSE);
    x12 = _mm_aesenc_si128(x12, x2);
    x13 = _mm_aesenc_si128(x13, x2);
    x14 = _mm_aesenc_si128(x14, x2);
    x15 = _mm_aesenc_si128(x15, x2);
    x12 = _mm_xor_si128(x12, x11);
    x13 = _mm_xor_si128(x13, x3);
    x13 = _mm_xor_si128(x13, SHAVITE256_XOR4);
    x13 = _mm_xor_si128(x13, x12);
    x14 = _mm_xor_si128(x14, x13);
    x15 = _mm_xor_si128(x15, x14);

    /* xmm12..xmm15 = rk[80..95] */

    /* F3 - seventh round */
    x6 = x10;
    x10 = _mm_xor_si128(x10, x1);
    x10 = _mm_aesenc_si128(x10, x11);
    x10 = _mm_aesenc_si128(x10, x12);
    x10 = _mm_aesenc_si128(x10, x2);
    x0 = _mm_xor_si128(x0, x10);
    x10 = x6;

    /* key schedule */
    mixing();

    /* xmm8..xmm11 = rk[96..111] */

    /* F3 - eighth round */
    x6 = x13;
    x13 = _mm_xor_si128(x13, x0);
    x13 = _mm_aesenc_si128(x13, x14);
    x13 = _mm_aesenc_si128(x13, x15);
    x13 = _mm_aesenc_si128(x13, x2);
    x1 = _mm_xor_si128(x1, x13);
    x13 = x6;

    /* key schedule */
    x3 = _mm_shuffle_epi32(x3, 135);

    x12 = x8;
    x13 = x9;
    x14 = x10;
    x15 = x11;
    x12 = _mm_shuffle_epi8(x12, SHAVITE_REVERSE);
    x13 = _mm_shuffle_epi8(x13, SHAVITE_REVERSE);
    x14 = _mm_shuffle_epi8(x14, SHAVITE_REVERSE);
    x15 = _mm_shuffle_epi8(x15, SHAVITE_REVERSE);
    x12 = _mm_aesenc_si128(x12, x2);
    x13 = _mm_aesenc_si128(x13, x2);
    x14 = _mm_aesenc_si128(x14, x2);
    x15 = _mm_aesenc_si128(x15, x2);
    x12 = _mm_xor_si128(x12, x11);
    x15 = _mm_xor_si128(x15, x3);
    x15 = _mm_xor_si128(x15, SHAVITE256_XOR4);
    x13 = _mm_xor_si128(x13, x12);
    x14 = _mm_xor_si128(x14, x13);
    x15 = _mm_xor_si128(x15, x14);

    /* xmm12..xmm15 = rk[112..127] */

    /* F3 - ninth round */
    x6 = x8;
    x8 = _mm_xor_si128(x8, x1);
    x8 = _mm_aesenc_si128(x8, x9);
    x8 = _mm_aesenc_si128(x8, x10);
    x8 = _mm_aesenc_si128(x8, x2);
    x0 = _mm_xor_si128(x0, x8);
    x8 = x6;

    /* F3 - tenth round */
    x6 = x11;
    x11 = _mm_xor_si128(x11, x0);
    x11 = _mm_aesenc_si128(x11, x12);
    x11 = _mm_aesenc_si128(x11, x13);
    x11 = _mm_aesenc_si128(x11, x2);
    x1 = _mm_xor_si128(x1, x11);
    x11 = x6;

    /* key schedule */
    mixing();

    /* xmm8..xmm11 = rk[128..143] */

    /* F3 - eleventh round */
    x6 = x14;
    x14 = _mm_xor_si128(x14, x1);
    x14 = _mm_aesenc_si128(x14, x15);
    x14 = _mm_aesenc_si128(x14, x8);
    x14 = _mm_aesenc_si128(x14, x2);
    x0 = _mm_xor_si128(x0, x14);
    x14 = x6;

    /* F3 - twelfth round */
    x6 = x9;
    x9 = _mm_xor_si128(x9, x0);
    x9 = _mm_aesenc_si128(x9, x10);
    x9 = _mm_aesenc_si128(x9, x11);
    x9 = _mm_aesenc_si128(x9, x2);
    x1 = _mm_xor_si128(x1, x9);
    x9 = x6;

    /* feedforward */
    x0 = _mm_xor_si128(x0, ptxt1);
    x1 = _mm_xor_si128(x1, ptxt2);
    _mm_store_si128(chain, x0);
    _mm_store_si128(chain + 1, x1);
}

int main(int argc, char *argv[])
{
    __m128i chain[2], mess[4];
    unsigned char *p;

    // argc prevents compiler from precalculating results

    p = (unsigned char *)mess;
    for (int x=0; x < 64; x++)
        p[x] = x + argc;

    p = (unsigned char *)chain;
    for (int x=0; x < 32; x++)
        p[x] = x + argc;

    unsigned long long counter = 0x1234567812345678ull + argc;

    // Unused, but prototype requires it.
    unsigned char s[32] = {0};

    Compress256(mess, chain, counter, s);

    for (int x=0; x < 32; x++)
        printf("%02x ", p[x]);
    printf("\n");

    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);

    unsigned char res = 0;

    for (int x=0; x < 400000; x++)
    {
        Compress256(mess, chain, counter, s);

        // Ensure optimizer doesn't omit the calc
        res ^= *p;
    }
    clock_gettime(CLOCK_MONOTONIC, &end);

    unsigned long long delta_us = (end.tv_sec - start.tv_sec) * 1000000ull + (end.tv_nsec - start.tv_nsec) / 1000ull;
    printf("%x: %llu\n", res, delta_us);
}

Yes, intrinsics are great, letting you take advantage of AVX to avoid movdqa register-copy instructions and to use unaligned memory source operands. Since you're only using 128-bit integer vectors, -mavx alone gets you this benefit. (AVX2 vpblendd can sometimes be useful to the compiler when optimizing 128-bit intrinsics in other cases, but probably not here.) You'd want -mtune=haswell or -mtune=znver1 to tell the optimizer to tune for newer CPUs; -mavx2 unfortunately doesn't stop it from caring about Sandybridge, Core 2, or Phenom II. - Peter Cordes
Can confirm this works, nicely done! Benchmarks show roughly a 6% gain over the asm code with (O3, ssse3, aes) and all cores saturated through the calling function. Bumping to sse4.2 or avx2 seems to improve performance slightly but not dramatically, which isn't surprising since the aes calls probably account for most of the execution time. I'll experiment a bit more, though. I'm not sure which answer to accept: @PeterCordes's answer better addresses the original asm question, while this one better solves my actual problem; both are excellent, detailed answers. - Malcolm MacLeod
@MalcolmMacLeod: I added a short intro to this answer to directly answer the question and to introduce the fact that it's about how to convert to intrinsics. I think you should accept this answer, because if people had used inline asm properly in the first place ("m" operands), they wouldn't need my answer; it would have just worked. - Peter Cordes
It looks like those global declarations (SHAVITE) were only ever used to pass values into the asm. Perhaps the author wasn't sure how to access function parameters or locals from asm? Now that it's C, putting SHAVITE_REVERSE and the SHAVITE_XORs inside E256() as const locals, and passing the values as parameters to Compress256 (i.e. skipping the redundant memcpy of the globals back and forth), seems to improve performance a bit. And (I'd expect) it also solves your original thread-safety problem. Assuming you haven't already done so... - David Wohlferd
Huh, it was faster for me, but maybe your "write to the globals" gets folded into whatever routine you use to build the message. At one point I accidentally built for x86 (only 8 xmm registers instead of 16) and performance didn't change. Like you said, aesenc seems to dominate. Still, cleaner is better, and (I assume) your threading problem is solved. I'll update this with my "final" code. It sounds like yours is functionally identical, so there's probably no need to change it. But I wanted it here for future SO users. I mean, who knows how many Shavite compression users are out there? - David Wohlferd
