JMH microbenchmarking of recursive quicksort


Hi, I'm trying to microbenchmark various sorting algorithms, and I've run into a strange problem when benchmarking quicksort with JMH. Perhaps there is something wrong with my implementation; I'd appreciate it if someone could help me see where the problem is. First of all, I am using Ubuntu 14.04 with JDK 7 and JMH 0.9.1. Here is how I am trying to run the benchmark:

import java.util.Arrays;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

@OutputTimeUnit(TimeUnit.MILLISECONDS)
@BenchmarkMode(Mode.AverageTime)
@Warmup(iterations = 3, time = 1)
@Measurement(iterations = 3, time = 1)
@State(Scope.Thread)
public class SortingBenchmark {

    private int length = 100000;

    private Distribution distribution = Distribution.RANDOM;

    private int[] array;

    int i = 1;

    @Setup(Level.Iteration)
    public void setUp() {
        array = distribution.create(length);
    }

    @Benchmark
    public int timeQuickSort() {
        int[] sorted = Sorter.quickSort(array);
        return sorted[i];
    }

    @Benchmark
    public int timeJDKSort() {
        Arrays.sort(array);
        return array[i];
    }

    public static void main(String[] args) throws RunnerException {
        Options opt = new OptionsBuilder()
                .include(".*" + SortingBenchmark.class.getSimpleName() + ".*")
                .forks(1)
                .build();

        new Runner(opt).run();
    }
}

There are other algorithms in the project too, and they are all more or less fine. But for some reason quicksort is extremely slow here -- orders of magnitude slower! On top of that, I have to give it extra stack space just to run without a StackOverflowError; it looks like quicksort is making a huge number of recursive calls for some reason. Interestingly, when I simply run the algorithm from my main class, it runs fine (with the same random distribution and 100000 elements): no need to increase the stack size, and a naive nanoTime benchmark shows a time very close to the other algorithms. Meanwhile the JDK sort is extremely fast under JMH, and much more in line with the other algorithms under the naive nanoTime benchmark. Am I doing something wrong or missing something here? Here is my quicksort algorithm:

public static int[] quickSort(int[] data) {
    Sorter.quickSort(data, 0, data.length - 1);
    return data;
}
private static void quickSort(int[] data, int sublistFirstIndex, int sublistLastIndex) {
    if (sublistFirstIndex < sublistLastIndex) {
        // move smaller elements before pivot and larger after
        int pivotIndex = partition(data, sublistFirstIndex, sublistLastIndex);
        // apply recursively to sub lists
        Sorter.quickSort(data, sublistFirstIndex, pivotIndex - 1);
        Sorter.quickSort(data, pivotIndex + 1, sublistLastIndex);
    }
}
private static int partition(int[] data, int sublistFirstIndex, int sublistLastIndex) {
    int pivotElement = data[sublistLastIndex];
    int pivotIndex = sublistFirstIndex - 1;
    for (int i = sublistFirstIndex; i < sublistLastIndex; i++) {
        if (data[i] <= pivotElement) {
            pivotIndex++;
            ArrayUtils.swap(data, pivotIndex, i);
        }
    }
    ArrayUtils.swap(data, pivotIndex + 1, sublistLastIndex);
    return pivotIndex + 1; // return index of pivot element
}

Now I do understand that, because of the way the pivot is chosen, the algorithm will be very slow (O(n^2)) if I run it on already-sorted data. But even when I run it on sorted data in my main method, it is still much faster than the JMH run on random data. I'm sure I'm missing something. You can find the full project with the other algorithms here: https://github.com/ignl/SortingAlgos/
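To make the worst case concrete, here is a small standalone sketch (it reimplements the same last-element-pivot partition scheme as the question's code; the class name and the hypothetical depth counter are not part of the original project). On already-sorted input, each partition peels off just one element, so the recursion depth grows linearly with n:

```java
public class QuickSortDepth {
    static int depth, maxDepth; // hypothetical instrumentation, not in the original Sorter

    static void quickSort(int[] a, int lo, int hi) {
        if (lo < hi) {
            depth++;
            maxDepth = Math.max(maxDepth, depth);
            int p = partition(a, lo, hi);
            quickSort(a, lo, p - 1);
            quickSort(a, p + 1, hi);
            depth--;
        }
    }

    // same scheme as the question: pivot is the last element of the sublist
    static int partition(int[] a, int lo, int hi) {
        int pivot = a[hi];
        int p = lo - 1;
        for (int i = lo; i < hi; i++) {
            if (a[i] <= pivot) {
                p++;
                int t = a[p]; a[p] = a[i]; a[i] = t;
            }
        }
        int t = a[p + 1]; a[p + 1] = a[hi]; a[hi] = t;
        return p + 1;
    }

    public static void main(String[] args) {
        int n = 2000;
        int[] sorted = new int[n];
        for (int i = 0; i < n; i++) sorted[i] = i;
        quickSort(sorted, 0, n - 1);
        // on sorted input the pivot is always the maximum, so every call
        // recurses on a sublist only one element shorter
        System.out.println(maxDepth); // prints 1999
    }
}
```

At the question's 100000 elements that linear depth blows well past the default thread stack, which matches the StackOverflowError described above.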

For starters, Arrays.sort() is an in-place sort, and you only sort on the first @Benchmark invocation. All subsequent calls operate on an already-sorted array. Do you do Arrays.copyOf() from the source on every call in @Benchmark? - Aleksey Shipilev
No, but I do create a new array for every iteration in the setup method. - Ignas
An iteration is a sequence of individual @Benchmark calls. - Aleksey Shipilev
Hmm, I see. So should I use the Invocation level in the setup method, or copy the array inside the benchmark? Wouldn't doing it in the benchmark affect the results (which is what I'm trying to avoid)? And how many invocations are run in one iteration? - Ignas
@Setup(Invocation) would affect the results, so the saner choice is to absorb the copy cost into @Benchmark. Iterations are time-bound, as you can see from the output -- so how many invocations happen in each iteration depends on the invocation duration. - Aleksey Shipilev
OK, thanks! Since I'm comparing the algorithms against each other, the array copy will be done in all the benchmarks. It would be nice if JMH could recreate mutable arguments on every invocation, though. Thanks a lot for your help, Aleksey! - Ignas
1 Answer


OK, since there really should be an answer here (rather than making readers dig through the comments under the question), I'm putting it here, having been burned by this myself.

An iteration in JMH is a batch of benchmark method invocations (how many depends on the iteration's duration). So with @Setup(Level.Iteration), the setup runs only at the start of each sequence of invocations. Since the array is sorted in place by the first invocation, every subsequent invocation runs quicksort against its worst case (an already-sorted array). That's why it takes so long and why the stack overflows.
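The effect can be reproduced without JMH at all. This is a minimal standalone sketch (class name, array literal, and helper are made up for illustration) simulating one per-iteration setup followed by an in-place sort on the first invocation:

```java
import java.util.Arrays;

public class IterationReuse {
    public static void main(String[] args) {
        // stands in for @Setup(Level.Iteration): runs once per batch of invocations
        int[] array = {5, 3, 4, 1, 2};
        int[] asCreated = Arrays.copyOf(array, array.length);

        Arrays.sort(array); // the first @Benchmark invocation sorts in place

        // every later invocation in the same iteration sees the mutated array:
        System.out.println(Arrays.equals(array, asCreated)); // false: input changed
        System.out.println(isSorted(array)); // true: worst case for this quicksort
    }

    static boolean isSorted(int[] a) {
        for (int i = 1; i < a.length; i++) {
            if (a[i - 1] > a[i]) return false;
        }
        return true;
    }
}
```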

So the obvious fix would seem to be @Setup(Level.Invocation). However, as the Javadoc states:

/**
     * Invocation level: to be executed for each benchmark method execution.
     *
     * <p><b>WARNING: HERE BE DRAGONS! THIS IS A SHARP TOOL.
     * MAKE SURE YOU UNDERSTAND THE REASONING AND THE IMPLICATIONS
     * OF THE WARNINGS BELOW BEFORE EVEN CONSIDERING USING THIS LEVEL.</b></p>
     *
     * <p>This level is only usable for benchmarks taking more than a millisecond
     * per single {@link Benchmark} method invocation. It is a good idea to validate
     * the impact for your case on ad-hoc basis as well.</p>
     *
     * <p>WARNING #1: Since we have to subtract the setup/teardown costs from
     * the benchmark time, on this level, we have to timestamp *each* benchmark
     * invocation. If the benchmarked method is small, then we saturate the
     * system with timestamp requests, which introduce artificial latency,
     * throughput, and scalability bottlenecks.</p>
     *
     * <p>WARNING #2: Since we measure individual invocation timings with this
     * level, we probably set ourselves up for (coordinated) omission. That means
     * the hiccups in measurement can be hidden from timing measurement, and
     * can introduce surprising results. For example, when we use timings to
     * understand the benchmark throughput, the omitted timing measurement will
     * result in lower aggregate time, and fictionally *larger* throughput.</p>
     *
     * <p>WARNING #3: In order to maintain the same sharing behavior as other
     * Levels, we sometimes have to synchronize (arbitrage) the access to
     * {@link State} objects. Other levels do this outside the measurement,
     * but at this level, we have to synchronize on *critical path*, further
     * offsetting the measurement.</p>
     *
     * <p>WARNING #4: Current implementation allows the helper method execution
     * at this Level to overlap with the benchmark invocation itself in order
     * to simplify arbitrage. That matters in multi-threaded benchmarks, when
     * one worker thread executing {@link Benchmark} method may observe other
     * worker thread already calling {@link TearDown} for the same object.</p>
     */ 

So, as Aleksey Shipilev suggested, absorb the array-copy cost into each benchmark method instead. Since you are comparing relative performance, this should not affect your results.
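A minimal sketch of that fix outside JMH (the quicksort below reimplements the question's partition scheme; the class name and loop are illustrative stand-ins for repeated @Benchmark invocations, not the original Sorter class). Each invocation pays for one Arrays.copyOf and sorts the fresh copy, so the source array never degrades into the sorted worst case:

```java
import java.util.Arrays;
import java.util.Random;

public class CopyPerInvocation {
    static void quickSort(int[] a, int lo, int hi) {
        if (lo < hi) {
            int p = partition(a, lo, hi);
            quickSort(a, lo, p - 1);
            quickSort(a, p + 1, hi);
        }
    }

    static int partition(int[] a, int lo, int hi) {
        int pivot = a[hi];
        int p = lo - 1;
        for (int i = lo; i < hi; i++) {
            if (a[i] <= pivot) {
                p++;
                int t = a[p]; a[p] = a[i]; a[i] = t;
            }
        }
        int t = a[p + 1]; a[p + 1] = a[hi]; a[hi] = t;
        return p + 1;
    }

    public static void main(String[] args) {
        int[] source = new int[100_000];
        Random rnd = new Random(42);
        for (int i = 0; i < source.length; i++) {
            source[i] = rnd.nextInt(1_000_000);
        }
        int[] pristine = Arrays.copyOf(source, source.length);

        // simulate many benchmark invocations; the copy cost is paid inside
        // each one, but the input stays random instead of becoming sorted
        for (int invocation = 0; invocation < 5; invocation++) {
            int[] copy = Arrays.copyOf(source, source.length);
            quickSort(copy, 0, copy.length - 1);
        }

        System.out.println(Arrays.equals(source, pristine)); // source untouched
    }
}
```

Since every algorithm under comparison does the same copy, the constant per-invocation overhead cancels out in a relative comparison.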
