Replace values in multiple columns at once with PySpark


I want to replace a value in a DataFrame column with another value, and I need to do this for many columns (say 30/100 columns).

Here is what I have looked at so far:

from pyspark.sql.functions import when, lit, col

df = sc.parallelize([(1, "foo", "val"), (2, "bar", "baz"), (3, "baz", "buz")]).toDF(["x", "y", "z"])
df.show()

# I can replace "baz" with Null separately in columns y and z
def replace(column, value):
    return when(column != value, column).otherwise(lit(None))

df = df.withColumn("y", replace(col("y"), "baz"))\
    .withColumn("z", replace(col("z"), "baz"))
df.show()    

+---+----+----+
|  x|   y|   z|
+---+----+----+
|  1| foo| val|
|  2| bar|null|
|  3|null| buz|
+---+----+----+

I can replace "baz" with Null in columns y and z separately. But I want to do this for all columns, with something like the list comprehension below:

[replace(df[col], "baz") for col in df.columns]
3 Answers


Since there are on the order of 30/100 columns, let's add a few more columns to the DataFrame so the example generalizes better.

# Loading the requisite packages
from pyspark.sql.functions import col, when
df = sc.parallelize([(1,"foo","val","baz","gun","can","baz","buz","oof"), 
                     (2,"bar","baz","baz","baz","got","pet","stu","got"), 
                     (3,"baz","buz","pun","iam","you","omg","sic","baz")]).toDF(["x","y","z","a","b","c","d","e","f"])
df.show()
+---+---+---+---+---+---+---+---+---+ 
|  x|  y|  z|  a|  b|  c|  d|  e|  f| 
+---+---+---+---+---+---+---+---+---+ 
|  1|foo|val|baz|gun|can|baz|buz|oof| 
|  2|bar|baz|baz|baz|got|pet|stu|got| 
|  3|baz|buz|pun|iam|you|omg|sic|baz| 
+---+---+---+---+---+---+---+---+---+

Suppose we want to replace baz with Null in all columns except x and a. We can use a list comprehension to select the columns where the replacement needs to be done:

# This contains the list of columns where we apply replace() function
all_column_names = df.columns
print(all_column_names)
    ['x', 'y', 'z', 'a', 'b', 'c', 'd', 'e', 'f']
columns_to_remove = ['x','a']
columns_for_replacement = [i for i in all_column_names if i not in columns_to_remove]
print(columns_for_replacement)
    ['y', 'z', 'b', 'c', 'd', 'e', 'f']

Finally, do the replacement using when(), which effectively acts as an if statement:
# Doing the replacement on all the requisite columns
for i in columns_for_replacement:
    df = df.withColumn(i,when((col(i)=='baz'),None).otherwise(col(i)))
df.show()
+---+----+----+---+----+---+----+---+----+ 
|  x|   y|   z|  a|   b|  c|   d|  e|   f| 
+---+----+----+---+----+---+----+---+----+ 
|  1| foo| val|baz| gun|can|null|buz| oof| 
|  2| bar|null|baz|null|got| pet|stu| got| 
|  3|null| buz|pun| iam|you| omg|sic|null| 
+---+----+----+---+----+---+----+---+----+

There is no need to create a UDF and define a function to do the replacement when it can be done with a normal if-else clause. UDFs are generally an expensive operation and should be avoided whenever possible.
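
As a side note, every withColumn() call in the loop above adds another projection to the query plan. A minimal sketch of the same replacement done in a single select() instead, assuming the all_column_names and columns_for_replacement lists from above and the original (pre-replacement) df:

# Build one when()/otherwise() expression per column and apply them in a single select();
# columns not in columns_for_replacement pass through unchanged.
df.select([
    when(col(c) == 'baz', None).otherwise(col(c)).alias(c)
    if c in columns_for_replacement else col(c)
    for c in all_column_names
]).show()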

Use the reduce() function:
from functools import reduce
from pyspark.sql.functions import col

# `replace` is the helper function defined in the question
reduce(lambda d, c: d.withColumn(c, replace(col(c), "baz")), [df, 'y', 'z']).show()
#+---+----+----+
#|  x|   y|   z|
#+---+----+----+
#|  1| foo| val|
#|  2| bar|null|
#|  3|null| buz|
#+---+----+----+
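
If the list of target columns grows, the same fold may read more clearly with df passed as the initializer (third argument) of reduce() rather than packed into the iterable. A sketch along those lines, again assuming the replace() helper from the question is in scope:

from functools import reduce
from pyspark.sql.functions import col

# Apply replace() to every column except 'x', starting the fold from df itself
target_columns = [c for c in df.columns if c != 'x']
reduce(lambda d, c: d.withColumn(c, replace(col(c), "baz")), target_columns, df).show()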

You can use select with a list comprehension:

import pyspark.sql.functions as f  # `replace` is the helper function defined in the question
df = df.select([replace(f.col(column), 'baz').alias(column) if column != 'x' else f.col(column)
                for column in df.columns])
df.show()
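
The same idea extends to leaving more than one column untouched; a sketch, assuming a hypothetical columns_to_skip list and the replace() helper from the question:

# Keep the columns in columns_to_skip unchanged, replace 'baz' with null everywhere else
columns_to_skip = ['x']  # hypothetical: list any columns that should be left as-is
df.select([f.col(c) if c in columns_to_skip
           else replace(f.col(c), 'baz').alias(c)
           for c in df.columns]).show()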

Where did you import this replace function from? - Topde
Ah, okay, there is a replace function in the SQL API; I can't believe it hasn't been ported to Scala/Python Spark yet. - Topde
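
As a side note on that comment: built-in Spark SQL functions can be reached from PySpark through expr(). For example, SQL's nullif(a, b) returns NULL when a equals b, which is exactly the replacement being done in this thread. A sketch along those lines, not taken from the original answers:

from pyspark.sql.functions import expr

# nullif(col, 'baz') yields NULL where the value equals 'baz', the original value otherwise
df.select([expr("nullif({0}, 'baz')".format(c)).alias(c) if c != 'x' else expr(c)
           for c in df.columns]).show()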
