XGBoost's predict_proba is slow at inference


I trained two gradient boosting models on the same data, one with Scikit-learn and one with XGBoost.

Scikit-learn model

GradientBoostingClassifier(
    n_estimators=5,
    learning_rate=0.17,
    max_depth=5,
    verbose=2
)

XGBoost model

XGBClassifier(
    n_estimators=5,
    learning_rate=0.17,
    max_depth=5,
    verbosity=2,
    eval_metric="logloss"
)

Next I checked inference performance:

  • XGBoost: 9.7 ms ± 84.6 µs per loop
  • Scikit-learn: 426 µs ± 12.5 µs per loop
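
Those figures look like IPython %timeit output. For reference, a minimal sketch of an equivalent measurement with the standard-library timeit module (assumed setup: skl_model and xgb_model stand for the two fitted classifiers above and X_test for a held-out feature matrix; these names are illustrative and not from the original code):

import timeit

# Assumed: `skl_model` / `xgb_model` are the two fitted classifiers above,
# and `X_test` is a held-out feature matrix.
for name, model in [("XGBoost", xgb_model), ("Scikit-learn", skl_model)]:
    per_call = timeit.timeit(lambda: model.predict_proba(X_test), number=100) / 100
    print("{}: {:.3f} ms per call".format(name, per_call * 1e3))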

Why is XGBoost so slow?


Isn't XGBClassifier() a wrapper around xgboost? Try using xgb.train(), e.g.: https://xgboost.readthedocs.io/en/latest/get_started.html - jared_mamrot
@jared_mamrot Yes, it is a wrapper. But what does xgb.train have to do with inference speed? - Alexander Ershov
2 Answers

"XGBoost为什么这么慢?": XGBClassifier() 是XGBoost在scikit-learn中的API(详见https://xgboost.readthedocs.io/en/latest/python/python_api.html#xgboost.XGBClassifier)。如果您直接调用函数(而不是通过API),它将更快。为了比较这两个函数的性能,最好分别直接调用每个函数,而不是一个直接调用,另一个通过API调用。以下是示例:
# benchmark_xgboost_vs_sklearn.py
# Adapted from `xgboost_test.py` by Jacob Schreiber 
# (https://gist.github.com/jmschrei/6b447aada61d631544cd)

"""
Benchmarking scripts for XGBoost versus sklearn (time and accuracy)
"""

import time
import random
import numpy as np
import xgboost as xgb
from sklearn.ensemble import GradientBoostingClassifier

random.seed(0)
np.random.seed(0)

def make_dataset(n=500, d=10, c=2, z=2):
    """
    Make a dataset with n samples per class, d dimensions, and c classes,
    with class means separated by z in each dimension, making each feature
    equally informative.
    """

    # Generate our data and our labels
    X = np.concatenate([np.random.randn(n, d) + z*i for i in range(c)])
    y = np.concatenate([np.ones(n) * i for i in range(c)])

    # Generate a random indexing
    idx = np.arange(n*c)
    np.random.shuffle(idx)

    # Randomize the dataset, preserving data-label pairing
    X = X[idx]
    y = y[idx]

    # Return x_train, x_test, y_train, y_test
    return X[::2], X[1::2], y[::2], y[1::2]

def main():
    """
    Run sklearn's GradientBoostingClassifier, then xgboost via its
    sklearn API wrapper (XGBModel), then xgboost directly via xgb.train()
    """

    # Generate the dataset
    X_train, X_test, y_train, y_test = make_dataset(n=10, z=100)  # tiny, well-separated toy set
    n_estimators=5
    max_depth=5
    learning_rate=0.17

    # sklearn first
    tic = time.time()
    clf = GradientBoostingClassifier(n_estimators=n_estimators,
        max_depth=max_depth, learning_rate=learning_rate)
    clf.fit(X_train, y_train)
    print("SKLearn GBClassifier: {}s".format(time.time() - tic))
    print("Acc: {}".format(clf.score(X_test, y_test)))
    print(y_test.sum())  # sanity check: count of positive labels in the test split
    print(clf.predict(X_test))

    # Convert the data to DMatrix for xgboost
    dtrain = xgb.DMatrix(X_train, label=y_train)
    dtest  = xgb.DMatrix(X_test, label=y_test)
    # Loop over several thread counts for xgboost
    for threads in (1, 2, 4):
        print("{} threads:".format(threads))

        # xgboost's sklearn API wrapper
        tic = time.time()
        clf = xgb.XGBModel(n_estimators=n_estimators, max_depth=max_depth,
            learning_rate=learning_rate, n_jobs=threads)
        clf.fit(X_train, y_train)
        print("SKLearn XGBoost API Time: {}s".format(time.time() - tic))
        preds = np.round(clf.predict(X_test))
        acc = 1. - (np.abs(preds - y_test).sum() / y_test.shape[0])
        print("Acc: {}".format(acc))
        tic = time.time()
        param = {
            'max_depth': max_depth,
            'eta': learning_rate,   # match the learning rate used above
            'verbosity': 0,         # 'silent' was removed in xgboost >= 1.0
            'objective': 'binary:logistic',
            'nthread': threads,
        }
        bst = xgb.train(param, dtrain, n_estimators,
            evals=[(dtest, 'eval'), (dtrain, 'train')])
        print("XGBoost (no wrapper) Time: {}s".format(time.time() - tic))
        preds = np.round(bst.predict(dtest))
        acc = 1. - (np.abs(preds - y_test).sum() / y_test.shape[0])
        print("Acc: {}".format(acc))

if __name__ == '__main__':
    main()

Summary of results:

sklearn.ensemble.GradientBoostingClassifier()

  • Time: 0.003237009048461914 s
  • Accuracy: 1.0

XGBoost's sklearn API wrapper (xgb.XGBModel)

  • Time: 0.3436141014099121 s
  • Accuracy: 1.0

XGBoost (no wrapper) xgb.train()

  • Time: 0.0028612613677978516 s
  • Accuracy: 1.0
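
Note that the times above are training times. For the inference side the question actually asks about, the same "skip the wrapper" idea applies: pull the underlying Booster out of a fitted classifier and predict on a pre-built DMatrix, so the input conversion is paid once rather than on every call. A minimal sketch, assuming model is the fitted XGBClassifier from the question and X_test is the test matrix:

import numpy as np
import xgboost as xgb

# Assumed: `model` is the fitted XGBClassifier from the question.
booster = model.get_booster()       # the underlying xgb.Booster
dtest = xgb.DMatrix(X_test)         # build the DMatrix once, outside any timing loop
raw = booster.predict(dtest)        # positive-class probabilities under binary:logistic
proba = np.column_stack([1.0 - raw, raw])  # same column layout as predict_proba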


You can try Intel-optimized XGBoost to improve performance on Intel hardware. Intel-optimized XGBoost can be installed in the following ways:

  1. As part of the Intel® AI Analytics Toolkit (https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-analytics-toolkit-download.html)

  2. From the PyPI repository, using the pip package manager: pip install xgboost

  3. From the Anaconda package manager:

     Using the Intel channel: conda install xgboost -c intel

     Using the conda-forge channel: conda install xgboost -c conda-forge

  4. As a Docker container, provided you have a DockerHub account (https://hub.docker.com/r/intel/intel-optimized-ml/tags?page=1&name=xgboost)

See https://www.intel.com/content/www/us/en/developer/articles/technical/easy-introduction-xgboost-for-intel-architecture.html to get started with Intel-optimized XGBoost.
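
Whichever install route you choose, it can be worth confirming which build is actually active. A small sketch (xgboost.build_info() exists only in newer releases, hence the guard):

import xgboost as xgb

print(xgb.__version__)             # which xgboost release is installed
if hasattr(xgb, "build_info"):     # build_info() is not present in older releases
    print(xgb.build_info())        # compiler and feature flags of this build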

I'm afraid this doesn't answer the question, which asks why XGBoost is so slow under the specific conditions described, not how to speed it up in general. - desertnaut
