Spark aggregate function that counts on a condition

3

I am trying to group a dataframe and then, when aggregating the rows with a count, apply a condition to the rows before counting them.

Here is an example:

val test=Seq(("A","X"),("A","X"),("B","O"),("B","O"),("c","O"),("c","X"),("d","X"),("d","O")).toDF
test.show
+---+---+
| _1| _2|
+---+---+
|  A|  X|
|  A|  X|
|  B|  O|
|  B|  O|
|  c|  O|
|  c|  X|
|  d|  X|
|  d|  O|
+---+---+

In this example I want to group by column _1 and count the values in column _2 when the value equals 'X'.
Here is the desired result:

+---+-----------+
| _1| count(_2) |
+---+-----------+
|  A|  2        |
|  B|  0        |
|  c|  1        |
|  d|  1        |
+---+-----------+
3 Answers

5

Use when inside the aggregation. Here is a PySpark solution.

from pyspark.sql.functions import col, count, when
test.groupBy(col("_1")).agg(count(when(col("_2") == 'X', 1))).show()
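This works because when without an otherwise clause returns null for the rows that do not match, and count ignores nulls, so only the rows where _2 is 'X' are counted. Adding .alias("count") to the aggregate expression gives the result column a friendlier name.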

4
import spark.implicits._
import org.apache.spark.sql.functions.{count, when}

val test=Seq(("A","X"),("A","X"),("B","O"),("B","O"),("c","O"),("c","X"),("d","X"),("d","O")).toDF

test.groupBy("_1").agg(count(when($"_2"==="X", 1)).as("count")).orderBy("_1").show
+---+-----+
| _1|count|
+---+-----+
|  A|    2|
|  B|    0|
|  c|    1|
|  d|    1|
+---+-----+
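
For reference, the same conditional count can also be written as a Spark SQL query; a minimal sketch, assuming a SparkSession named spark and the test DataFrame from the question:

// CASE WHEN without ELSE yields NULL for non-matching rows, and count() skips NULLs
test.createOrReplaceTempView("test")
spark.sql("""
  SELECT _1, count(CASE WHEN _2 = 'X' THEN 1 END) AS count
  FROM test
  GROUP BY _1
  ORDER BY _1
""").show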

-1

As another option, in Scala it can be written like this:

import org.apache.spark.sql.functions.{col, lit, sum, when}

val counter1 = test.select(col("_1"),
      when(col("_2") === lit("X"), lit(1)).otherwise(lit(0)).as("_2"))

val agg1 = counter1.groupBy("_1").agg(sum("_2")).orderBy("_1")

agg1.show

which gives the result:

+---+-------+
| _1|sum(_2)|
+---+-------+
|  A|      2|
|  B|      0|
|  c|      1|
|  d|      1|
+---+-------+
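
If you prefer, the select and the groupBy can be collapsed into a single aggregation in the same style as the answer above; a sketch, assuming import spark.implicits._ and org.apache.spark.sql.functions._ are in scope:

// sum a 1/0 flag instead of counting non-null values; gives the same totals
test.groupBy("_1")
  .agg(sum(when($"_2" === "X", 1).otherwise(0)).as("count"))
  .orderBy("_1")
  .show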
