How to check the intersection of two DataFrame columns in Spark

7

Using pyspark or sparkr (preferably both), how can I get the intersection of two DataFrame columns? For example, in sparkr I have the following DataFrames:

newHires <- data.frame(name = c("Thomas", "George", "George", "John"),
                       surname = c("Smith", "Williams", "Brown", "Taylor"))
salesTeam <- data.frame(name = c("Lucas", "Bill", "George"),
                        surname = c("Martin", "Clark", "Williams"))
newHiresDF <- createDataFrame(newHires)
salesTeamDF <- createDataFrame(salesTeam)

# Intersect works for the entire DataFrames
newSalesHire <- intersect(newHiresDF, salesTeamDF)
head(newSalesHire)

        name  surname
    1 George Williams

# Intersect does not work for single columns
newSalesHire <- intersect(newHiresDF$name, salesTeamDF$name)
head(newSalesHire)

Error in as.vector(y): cannot coerce this S4 type to a vector

How can I get intersect to work on single columns?


1
Works fine in Pyspark: spark.createDataFrame(["a","b","x"], StringType()).intersect(spark.createDataFrame(["z","y","x"], StringType())) - rogue-one
1 Answer

15

To use the intersect function, you need two Spark DataFrames. You can use the select function to pull the specific column out of each DataFrame.

In SparkR:

newSalesHire <- intersect(select(newHiresDF, 'name'), select(salesTeamDF,'name'))

In Pyspark:

newSalesHire = newHiresDF.select('name').intersect(salesTeamDF.select('name')) 
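Note that intersect returns the distinct values present in both columns, so the duplicate "George" in newHiresDF appears only once in the result. As a minimal sketch of that semantics without a Spark session (the variable names below are assumptions mirroring the question's data), plain-Python set intersection gives the same answer:

```python
# Sketch only: Spark's intersect on a single column behaves like a
# deduplicating set intersection over that column's values.
new_hires_names = ["Thomas", "George", "George", "John"]
sales_team_names = ["Lucas", "Bill", "George"]

# Sets deduplicate, matching intersect's distinct-rows behavior.
common = sorted(set(new_hires_names) & set(sales_team_names))
print(common)  # ['George']
```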
