How to count the number of unique elements in each column of a PySpark DataFrame:
import pandas as pd
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
df = pd.DataFrame([[1, 100], [1, 200], [2, 300], [3, 100], [4, 100], [4, 300]], columns=['col1', 'col2'])
df_spark = spark.createDataFrame(df)
df_spark.show()  # show() prints the table itself and returns None, so no print() needed
# +----+----+
# |col1|col2|
# +----+----+
# | 1| 100|
# | 1| 200|
# | 2| 300|
# | 3| 100|
# | 4| 100|
# | 4| 300|
# +----+----+
# Some transformations on df_spark here
# How to get a number of unique elements (just a number) in each columns?
The only solution I know is the one below, but it is very slow; each of the two lines takes about the same amount of time:
col1_num_unique = df_spark.select('col1').distinct().count()
col2_num_unique = df_spark.select('col2').distinct().count()
`df_spark` has about 10 million rows.
…, but you should keep in mind that this is an expensive operation, and consider whether [`pyspark.sql.functions.approxCountDistinct()`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.functions.approxCountDistinct) is suitable. - pault