Sklearn model is not able to make predictions on a PySpark DataFrame


I have successfully loaded a sklearn model, but I am not able to make predictions on a PySpark DataFrame. When I run the code given below, I get the error shown further down. Please help me find a way to make predictions on a PySpark DataFrame with a sklearn model. I have searched related questions but could not find a solution.

from pyspark.sql.functions import udf
from pyspark.sql.types import FloatType

sc = spark.sparkContext
braodcast_model = sc.broadcast(loaded_model)
braodcast_model.value


#update prediction method
def predictor(cols):
    #call predict method for model
    return model.value.predict(*cols)

udf_predictor = udf(predictor, FloatType())

#apply the udf to dataframe
df_prediction = df.withColumn("prediction", udf_predictor(df.select(list_of_columns)))

I get the following error message.
TypeError: Invalid argument, not a string or column. For column literals, use 'lit', 'array',
'struct' or 'create_map' function.
1 Answer


I think you are already on the right track toward the expected output.


I found two possible solutions for this problem: one using a Spark UDF and one using a pandas UDF.


Spark UDF

from pyspark.sql.functions import udf

@udf('integer')
def predict_udf(*cols):
    return int(braodcast_model.value.predict((cols,)))

list_of_columns = df.columns
df_prediction = df.withColumn('prediction', predict_udf(*list_of_columns))
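
The key change compared to the code in the question is that the UDF receives the columns themselves, unpacked with *list_of_columns, rather than the DataFrame returned by df.select(list_of_columns), which is what caused the TypeError. As a quick sanity check (my own sketch, assuming df contains only the feature columns the model was trained on):

# preview a few predictions produced by the Spark UDF
df_prediction.select('prediction').show(5)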

Pandas UDF

import pandas as pd
from pyspark.sql.functions import pandas_udf

@pandas_udf('integer')
def predict_pandas_udf(*cols):
    X = pd.concat(cols, axis=1)
    return pd.Series(braodcast_model.value.predict(X))

list_of_columns = df.columns
df_prediction = df.withColumn('prediction', predict_pandas_udf(*list_of_columns))
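
Unlike the row-at-a-time Spark UDF above, the pandas UDF receives each column as a pandas Series covering a whole Arrow batch of rows and scores the batch with a single predict call, which is usually noticeably faster for scikit-learn models. As an optional tuning step (my addition, not part of the original answer), the batch size can be capped with Spark's standard Arrow setting:

# optional: limit how many rows each pandas UDF invocation receives
spark.conf.set('spark.sql.execution.arrow.maxRecordsPerBatch', 10000)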

Reproducible example

Here I used a Databricks Community cluster with Spark 3.1.2, pandas==1.2.4 and pyarrow==4.0.0.
braodcast_model is a simple logistic regression from scikit-learn, trained on the breast cancer dataset.

import pandas as pd
import joblib
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from pyspark.sql.functions import udf, pandas_udf


# load dataset
X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# split in training and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=28)

# create a small pipeline with standardization and model, then fit it
pipe = make_pipeline(StandardScaler(), LogisticRegression())
pipe.fit(X_train, y_train)

# save and reload the fitted model
path = '/databricks/driver/test_model.joblib'
joblib.dump(pipe, path)
loaded_model = joblib.load(path)

# sample of unseen data
df = spark.createDataFrame(X_test.sample(50, random_state=42))

# create broadcasted model
sc = spark.sparkContext
braodcast_model = sc.broadcast(loaded_model)

I used both of the methods above, and you will see that the output df_prediction is the same either way.
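
If you want to verify that yourself, here is a minimal comparison sketch (assuming the two UDFs and the df defined earlier in this answer):

# apply both UDFs to the same sample and compare the outputs
list_of_columns = df.columns

pred_spark = df.withColumn('prediction', predict_udf(*list_of_columns))
pred_pandas = df.withColumn('prediction', predict_pandas_udf(*list_of_columns))

pred_spark.select('prediction').show(5)
pred_pandas.select('prediction').show(5)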

