While modifying a toy scikit-learn GridSearchCV example, I get the warning "UserWarning: One or more of the test scores are non-finite".


I have the following code, which runs fine but emits a

UserWarning: One or more of the test scores are non-finite: [nan nan]
  category=UserWarning

When I modify it into the more concise version shown in the second snippet below, could the output of the one-hot encoder be the culprit?


import pandas as pd
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import RidgeClassifier
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.model_selection import GridSearchCV

train = pd.read_csv('/train.csv')
test = pd.read_csv('/test.csv')
sparse_features = [col for col in train.columns if col.startswith('cat')]
dense_features = [col for col in train.columns if col not in sparse_features+['target']]
X = train.drop(['target'], axis=1)
y = train['target'].values
skf = StratifiedKFold(n_splits=5)
clf = RidgeClassifier()

full_pipeline = ColumnTransformer(transformers=[
    ('num', StandardScaler(), dense_features),
    ('cat', OneHotEncoder(), sparse_features)
])
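# Note: fit_transform below fits the preprocessing (including the OneHotEncoder)
# on the whole of X, before any CV split, so every category is seen at fit time.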
X_prepared = full_pipeline.fit_transform(X)
param_grid = {
    'alpha': [0.1],
    'fit_intercept': [False]
}
gs = GridSearchCV(
    estimator=clf,
    param_grid=param_grid,
    scoring='roc_auc',
    n_jobs=-1,
    cv=skf
)
gs.fit(X_prepared, y)

Here is the revised version.

clf2 = RidgeClassifier()
preprocess_pipeline2 = ColumnTransformer([
    ('num', StandardScaler(), dense_features),
    ('cat', OneHotEncoder(), sparse_features)
])
from sklearn.pipeline import Pipeline
final_pipeline = Pipeline(steps=[
    ('p', preprocess_pipeline2),
    ('c', clf2)
])
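# Grid-search keys for a Pipeline follow the "<step name>__<parameter>" convention,
# so 'c__alpha' below targets the alpha of the 'c' (classifier) step.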
param_grid2 = {
    'c__alpha': [0.4, 0.1],
    'c__fit_intercept': [False]
}
gs2 = GridSearchCV(
    estimator=final_pipeline,
    param_grid=param_grid2,
    scoring='roc_auc',
    n_jobs=-1,
    cv=skf
)
gs2.fit(X, y)

Can anyone point out what went wrong?

EDIT: After setting error_score to 'raise' I get more feedback about the problem. To me it looks as if the one-hot encoder would have to be fitted on a combined dataset that merges the training and test sets. Am I right? But if that is the case, why does the first version not raise the same error? By the way, does it make sense to pass handle_unknown='ignore' to deal with this?
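The exact change is not reproduced above; roughly, error_score='raise' was added as one more GridSearchCV argument, something like:

gs2 = GridSearchCV(
    estimator=final_pipeline,
    param_grid=param_grid2,
    scoring='roc_auc',
    n_jobs=-1,
    cv=skf,
    error_score='raise'   # re-raise fold errors instead of recording NaN scores
)
gs2.fit(X, y)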

ValueError
---------------------------------------------------------------------------
_RemoteTraceback                          Traceback (most recent call last)
_RemoteTraceback: 
"""
Traceback (most recent call last):
  File "/opt/conda/lib/python3.7/site-packages/joblib/externals/loky/process_executor.py", line 431, in _process_worker
    r = call_item()
  File "/opt/conda/lib/python3.7/site-packages/joblib/externals/loky/process_executor.py", line 285, in __call__
    return self.fn(*self.args, **self.kwargs)
  File "/opt/conda/lib/python3.7/site-packages/joblib/_parallel_backends.py", line 595, in __call__
    return self.func(*args, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/joblib/parallel.py", line 263, in __call__
    for func, args, kwargs in self.items]
  File "/opt/conda/lib/python3.7/site-packages/joblib/parallel.py", line 263, in <listcomp>
    for func, args, kwargs in self.items]
  File "/opt/conda/lib/python3.7/site-packages/sklearn/utils/fixes.py", line 222, in __call__
    return self.function(*args, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/sklearn/model_selection/_validation.py", line 620, in _fit_and_score
    test_scores = _score(estimator, X_test, y_test, scorer, error_score)
  File "/opt/conda/lib/python3.7/site-packages/sklearn/model_selection/_validation.py", line 674, in _score
    scores = scorer(estimator, X_test, y_test)
  File "/opt/conda/lib/python3.7/site-packages/sklearn/metrics/_scorer.py", line 200, in __call__
    sample_weight=sample_weight)
  File "/opt/conda/lib/python3.7/site-packages/sklearn/metrics/_scorer.py", line 334, in _score
    y_pred = method_caller(clf, "decision_function", X)
  File "/opt/conda/lib/python3.7/site-packages/sklearn/metrics/_scorer.py", line 53, in _cached_call
    return getattr(estimator, method)(*args, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/sklearn/utils/metaestimators.py", line 120, in <lambda>
    out = lambda *args, **kwargs: self.fn(obj, *args, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/sklearn/pipeline.py", line 493, in decision_function
    Xt = transform.transform(Xt)
  File "/opt/conda/lib/python3.7/site-packages/sklearn/compose/_column_transformer.py", line 565, in transform
    Xs = self._fit_transform(X, None, _transform_one, fitted=True)
  File "/opt/conda/lib/python3.7/site-packages/sklearn/compose/_column_transformer.py", line 444, in _fit_transform
    self._iter(fitted=fitted, replace_strings=True), 1))
  File "/opt/conda/lib/python3.7/site-packages/joblib/parallel.py", line 1044, in __call__
    while self.dispatch_one_batch(iterator):
  File "/opt/conda/lib/python3.7/site-packages/joblib/parallel.py", line 859, in dispatch_one_batch
    self._dispatch(tasks)
  File "/opt/conda/lib/python3.7/site-packages/joblib/parallel.py", line 777, in _dispatch
    job = self._backend.apply_async(batch, callback=cb)
  File "/opt/conda/lib/python3.7/site-packages/joblib/_parallel_backends.py", line 208, in apply_async
    result = ImmediateResult(func)
  File "/opt/conda/lib/python3.7/site-packages/joblib/_parallel_backends.py", line 572, in __init__
    self.results = batch()
  File "/opt/conda/lib/python3.7/site-packages/joblib/parallel.py", line 263, in __call__
    for func, args, kwargs in self.items]
  File "/opt/conda/lib/python3.7/site-packages/joblib/parallel.py", line 263, in <listcomp>
    for func, args, kwargs in self.items]
  File "/opt/conda/lib/python3.7/site-packages/sklearn/utils/fixes.py", line 222, in __call__
    return self.function(*args, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/sklearn/pipeline.py", line 733, in _transform_one
    res = transformer.transform(X)
  File "/opt/conda/lib/python3.7/site-packages/sklearn/preprocessing/_encoders.py", line 462, in transform
    force_all_finite='allow-nan')
  File "/opt/conda/lib/python3.7/site-packages/sklearn/preprocessing/_encoders.py", line 136, in _transform
    raise ValueError(msg)
ValueError: Found unknown categories ['MR', 'MW', 'DA'] in column 10 during transform
"""

The above exception was the direct cause of the following exception:

ValueError                                Traceback (most recent call last)
<ipython-input-48-b81f3b7b0724> in <module>
     21     cv=skf
     22 )
---> 23 gs2.fit(X, y)

/opt/conda/lib/python3.7/site-packages/sklearn/utils/validation.py in inner_f(*args, **kwargs)
     61             extra_args = len(args) - len(all_args)
     62             if extra_args <= 0:
---> 63                 return f(*args, **kwargs)
     64 
     65             # extra_args > 0

/opt/conda/lib/python3.7/site-packages/sklearn/model_selection/_search.py in fit(self, X, y, groups, **fit_params)
    839                 return results
    840 
--> 841             self._run_search(evaluate_candidates)
    842 
    843             # multimetric is determined here because in the case of a callable

/opt/conda/lib/python3.7/site-packages/sklearn/model_selection/_search.py in _run_search(self, evaluate_candidates)
   1286     def _run_search(self, evaluate_candidates):
   1287         """Search all candidates in param_grid"""
-> 1288         evaluate_candidates(ParameterGrid(self.param_grid))
   1289 
   1290 

/opt/conda/lib/python3.7/site-packages/sklearn/model_selection/_search.py in evaluate_candidates(candidate_params, cv, more_results)
    807                                    (split_idx, (train, test)) in product(
    808                                    enumerate(candidate_params),
--> 809                                    enumerate(cv.split(X, y, groups))))
    810 
    811                 if len(out) < 1:

/opt/conda/lib/python3.7/site-packages/joblib/parallel.py in __call__(self, iterable)
   1052 
   1053             with self._backend.retrieval_context():
-> 1054                 self.retrieve()
   1055             # Make sure that we get a last message telling us we are done
   1056             elapsed_time = time.time() - self._start_time

/opt/conda/lib/python3.7/site-packages/joblib/parallel.py in retrieve(self)
    931             try:
    932                 if getattr(self._backend, 'supports_timeout', False):
--> 933                     self._output.extend(job.get(timeout=self.timeout))
    934                 else:
    935                     self._output.extend(job.get())

/opt/conda/lib/python3.7/site-packages/joblib/_parallel_backends.py in wrap_future_result(future, timeout)
    540         AsyncResults.get from multiprocessing."""
    541         try:
--> 542             return future.result(timeout=timeout)
    543         except CfTimeoutError as e:
    544             raise TimeoutError from e

/opt/conda/lib/python3.7/concurrent/futures/_base.py in result(self, timeout)
    433                 raise CancelledError()
    434             elif self._state == FINISHED:
--> 435                 return self.__get_result()
    436             else:
    437                 raise TimeoutError()

/opt/conda/lib/python3.7/concurrent/futures/_base.py in __get_result(self)
    382     def __get_result(self):
    383         if self._exception:
--> 384             raise self._exception
    385         else:
    386             return self._result

ValueError: Found unknown categories ['MR', 'MW', 'DA'] in column 10 during transform

Where exactly does the warning pop up? Please post the full traceback. - desertnaut
4 Answers


First of all, I want to say that I have run into a similar problem before, and thanks for your hint.

After setting error_score to raise

This is what really helped me solve my own problem. I was using a custom transformer, and some of my code created variables on the training set that were then not created on the validation set, because those categories were not present there. I think you are facing a similar issue.

It looks like the OneHotEncoder learns its categories from the training fold and then, during transform, meets new categories in the validation fold that it has never seen, because they do not occur in the training fold.

ValueError: Found unknown categories ['MR', 'MW', 'DA'] in column 10 during transform
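A tiny synthetic illustration of that mechanism (my own example; the column name is made up):

import pandas as pd
from sklearn.preprocessing import OneHotEncoder

train_fold = pd.DataFrame({'cat10': ['AA', 'BB', 'AA']})
valid_fold = pd.DataFrame({'cat10': ['AA', 'MR']})   # 'MR' was never seen during fit

enc = OneHotEncoder()          # default handle_unknown='error'
enc.fit(train_fold)
try:
    enc.transform(valid_fold)
except ValueError as e:
    print(e)                   # Found unknown categories ['MR'] in column 0 during transform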

To get around this, my suggestion is to look into writing a custom transformer, since your data is more complex: https://towardsdatascience.com/custom-transformers-and-ml-data-pipelines-with-python-20ea2a7adb65
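Alternatively, as the question's EDIT already suggests, handle_unknown='ignore' makes the encoder emit an all-zero encoding for unseen categories instead of raising. A minimal sketch, reusing the names from the second snippet:

from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.linear_model import RidgeClassifier

preprocess = ColumnTransformer([
    ('num', StandardScaler(), dense_features),
    # unseen categories in a validation fold become all-zero columns instead of an error
    ('cat', OneHotEncoder(handle_unknown='ignore'), sparse_features)
])
final_pipeline = Pipeline(steps=[
    ('p', preprocess),
    ('c', RidgeClassifier())
])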



If this is a multi-class problem, drop the roc_auc scoring, because the two do not work well together. Use the default scoring or choose some other metric.


What scoring should be used then? - Maths12
Same question, what should be used? - John Thomas
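For what it is worth, a hedged sketch of the options: 'accuracy' (or simply the estimator's default score) works for any classifier, and scikit-learn also ships multi-class AUC scorers ('roc_auc_ovr', 'roc_auc_ovo'), but those need predict_proba, which RidgeClassifier does not provide:

gs2 = GridSearchCV(
    estimator=final_pipeline,
    param_grid=param_grid2,
    scoring='accuracy',   # works for binary and multi-class targets alike
    # scoring='roc_auc_ovr' would require an estimator with predict_proba,
    # e.g. LogisticRegression instead of RidgeClassifier
    n_jobs=-1,
    cv=skf
)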



I ran into a similar problem as well, and I have a suggestion that may help you. When using GridSearchCV it works better not to include in the pipeline the transformers that need no parameter tuning; pass only the objects whose parameters you actually want to optimize.

In your case it seems unlikely that you need to tune any OneHotEncoder parameters. So my suggestion is to first apply each preprocessing step to the dataset on its own and obtain the fully preprocessed data.

Then you can safely use GridSearchCV to tune the model by specifying only the relevant parameters. You can retrieve the best parameters from the best_params_ attribute, plug them back into your pipeline, and carry on with whatever you intended to do.
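A minimal sketch of that workflow, reusing names from the question's first snippet (and assuming, as this answer does, that the preprocessing itself needs no tuning):

# 1. Preprocess once, outside the search.
X_prepared = full_pipeline.fit_transform(X)

# 2. Search only over the classifier's parameters.
gs = GridSearchCV(
    estimator=RidgeClassifier(),
    param_grid={'alpha': [0.4, 0.1], 'fit_intercept': [False]},
    scoring='roc_auc',
    cv=skf,
    n_jobs=-1
)
gs.fit(X_prepared, y)

# 3. Plug the winning parameters back into a full pipeline and refit on the raw data.
best_clf = RidgeClassifier(**gs.best_params_)
final = Pipeline(steps=[('p', full_pipeline), ('c', best_clf)])
final.fit(X, y)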

