Dropping a nested column from a Spark DataFrame

35

I have a DataFrame with the schema

root
 |-- label: string (nullable = true)
 |-- features: struct (nullable = true)
 |    |-- feat1: string (nullable = true)
 |    |-- feat2: string (nullable = true)
 |    |-- feat3: string (nullable = true)

While I am able to filter the DataFrame using

  val data = rawData
     .filter( !(rawData("features.feat1") <=> "100") )

I am unable to drop a column using the drop command:

  val data = rawData
       .drop("features.feat1")

Am I doing something wrong here? I also tried (unsuccessfully) to do drop(rawData("features.feat1")), though it does not make much sense to do so.

Thanks in advance,

Nikhil


What if you map it into a new DataFrame? I don't think the DataFrame API lets you drop a struct field inside a struct column type. - Jeff L
Oh, I'll give that a try, but having to map like this just to deal with a nested column name seems rather inconvenient :(. - Nikhil J Joshi
You can get all the columns with the DataFrame's .columns() method, remove the unwanted column from the sequence, and then do select(myColumns:_*). That would be a bit cleaner. - TheMP
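A minimal sketch of the select-based suggestion above (assuming the question's schema; the value names are illustrative). Note that df.columns only returns top-level names, so a nested field still has to be removed by rebuilding the struct:

import org.apache.spark.sql.functions.{col, struct}

// top-level column: select everything except the unwanted name
val withoutLabel = df.select(df.columns.filterNot(_ == "label").map(col): _*)

// nested field: rebuild the struct without feat1
val withoutFeat1 = df.select(
  col("label"),
  struct(
    col("features.feat2").alias("feat2"),
    col("features.feat3").alias("feat3")
  ).alias("features")
)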
11 Answers

29

It is just a programming exercise, but you can try something like this:

import org.apache.spark.sql.{DataFrame, Column}
import org.apache.spark.sql.types.{StructType, StructField}
import org.apache.spark.sql.{functions => f}
import scala.util.Try

case class DFWithDropFrom(df: DataFrame) {
  def getSourceField(source: String): Try[StructField] = {
    Try(df.schema.fields.filter(_.name == source).head)
  }

  def getType(sourceField: StructField): Try[StructType] = {
    Try(sourceField.dataType.asInstanceOf[StructType])
  }

  def genOutputCol(names: Array[String], source: String): Column = {
    f.struct(names.map(x => f.col(source).getItem(x).alias(x)): _*)
  }

  def dropFrom(source: String, toDrop: Array[String]): DataFrame = {
    getSourceField(source)
      .flatMap(getType)
      .map(_.fieldNames.diff(toDrop))
      .map(genOutputCol(_, source))
      .map(df.withColumn(source, _))
      .getOrElse(df)
  }
}

Example usage:

scala> case class features(feat1: String, feat2: String, feat3: String)
defined class features

scala> case class record(label: String, features: features)
defined class record

scala> val df = sc.parallelize(Seq(record("a_label",  features("f1", "f2", "f3")))).toDF
df: org.apache.spark.sql.DataFrame = [label: string, features: struct<feat1:string,feat2:string,feat3:string>]

scala> DFWithDropFrom(df).dropFrom("features", Array("feat1")).show
+-------+--------+
|  label|features|
+-------+--------+
|a_label| [f2,f3]|
+-------+--------+


scala> DFWithDropFrom(df).dropFrom("foobar", Array("feat1")).show
+-------+----------+
|  label|  features|
+-------+----------+
|a_label|[f1,f2,f3]|
+-------+----------+


scala> DFWithDropFrom(df).dropFrom("features", Array("foobar")).show
+-------+----------+
|  label|  features|
+-------+----------+
|a_label|[f1,f2,f3]|
+-------+----------+

Add an implicit conversion and you're good to go.
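For instance, a minimal sketch of such a conversion (the name toDFWithDropFrom is illustrative):

import scala.language.implicitConversions
import org.apache.spark.sql.DataFrame

implicit def toDFWithDropFrom(df: DataFrame): DFWithDropFrom = DFWithDropFrom(df)

// after which the method is available directly on any DataFrame:
// df.dropFrom("features", Array("feat1"))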


Although I've upvoted this answer, because it works when all the nested rows have a uniform schema - otherwise it doesn't work - it just returns the original DataFrame. - smishra
It looks like the problem is in the getOrElse part of the statement: if any exception is thrown it doesn't get printed, the 'Else' part takes over and returns the original DataFrame. In my case, for example, case sensitivity was the problem - there were two columns with the same name but different case. - smishra
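Following up on that observation, a hedged variant (hypothetical name dropFromVerbose, built on the DFWithDropFrom class above) that reports why the fallback happened instead of hiding it:

import scala.util.{Failure, Success}
import org.apache.spark.sql.DataFrame

def dropFromVerbose(wrapped: DFWithDropFrom, source: String, toDrop: Array[String]): DataFrame = {
  val attempt = wrapped.getSourceField(source)
    .flatMap(wrapped.getType)
    .map(_.fieldNames.diff(toDrop))
    .map(wrapped.genOutputCol(_, source))
    .map(wrapped.df.withColumn(source, _))
  attempt match {
    case Success(result) => result
    case Failure(e) =>
      // surface the reason (e.g. a case-sensitivity mismatch) instead of failing silently
      println(s"dropFrom fell back to the original DataFrame: $e")
      wrapped.df
  }
}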

25

This version allows you to drop a nested column at any level:

import org.apache.spark.sql._
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types.{StructType, DataType}

/**
  * Various Spark utilities and extensions of DataFrame
  */
object DataFrameUtils {

  private def dropSubColumn(col: Column, colType: DataType, fullColName: String, dropColName: String): Option[Column] = {
    if (fullColName.equals(dropColName)) {
      None
    } else {
      colType match {
        case colType: StructType =>
          if (dropColName.startsWith(s"${fullColName}.")) {
            Some(struct(
              colType.fields
                .flatMap(f =>
                  dropSubColumn(col.getField(f.name), f.dataType, s"${fullColName}.${f.name}", dropColName) match {
                    case Some(x) => Some(x.alias(f.name))
                    case None => None
                  })
                : _*))
          } else {
            Some(col)
          }
        case other => Some(col)
      }
    }
  }

  protected def dropColumn(df: DataFrame, colName: String): DataFrame = {
    df.schema.fields
      .flatMap(f => {
        if (colName.startsWith(s"${f.name}.")) {
          dropSubColumn(col(f.name), f.dataType, f.name, colName) match {
            case Some(x) => Some((f.name, x))
            case None => None
          }
        } else {
          None
        }
      })
      .foldLeft(df.drop(colName)) {
        case (df, (colName, column)) => df.withColumn(colName, column)
      }
  }

  /**
    * Extended version of DataFrame that allows to operate on nested fields
    */
  implicit class ExtendedDataFrame(df: DataFrame) extends Serializable {
    /**
      * Drops nested field from DataFrame
      *
      * @param colName Dot-separated nested field name
      */
    def dropNestedColumn(colName: String): DataFrame = {
      DataFrameUtils.dropColumn(df, colName)
    }
  }
}

Usage:

import DataFrameUtils._
df.dropNestedColumn("a.b.c.d")

@alexP_Keaton Hi, did you get a solution for dropping a column inside an array? - V. Samma
I'd like to add that this method doesn't preserve the 'nullable' attribute of the modified parent struct. In this example, features becomes struct (nullable = false). - Michel Lemay
One way I found to work around this is to pass f.nullable into dropSubColumn and use a when(col.isNotNull, newCol) construct on the result of struct(... :_*). - Michel Lemay
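A hedged sketch of the fix Michel Lemay describes (the helper name is hypothetical): rebuild the struct from the kept children, then gate it on the parent's null-ness so that null parents stay null:

import org.apache.spark.sql.Column
import org.apache.spark.sql.functions.{struct, when}

def nullPreservingStruct(parent: Column, parentNullable: Boolean, children: Seq[Column]): Column = {
  val rebuilt = struct(children: _*)
  // when() yields null whenever the parent is null, restoring nullable = true
  if (parentNullable) when(parent.isNotNull, rebuilt) else rebuilt
}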
df.drop(colName) isn't actually needed here. It doesn't work anyway (otherwise we could simply drop nested columns through the API). Besides, withColumn replaces the definition of an existing column name if one is given. - Jeff Evans
Does anyone have a similar solution for adding an optional (nullable) column inside a deeply nested struct field? - Varun Taliyan

6
For Spark 3.1+, you can use the method dropFields(fieldNames: String*) on struct-type columns. Quoting its description:

An expression that drops fields in StructType by name. This is a no-op if the schema doesn't contain the field name(s).
val df = sql("SELECT named_struct('feat1', 1, 'feat2', 2, 'feat3', 3) features")

val df1 = df.withColumn("features", $"features".dropFields("feat1"))
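A hedged follow-up: per the Spark API docs, dropFields accepts several names at once, including dot-separated paths into deeper structs:

val df2 = sql("SELECT named_struct('feat1', 1, 'feat2', named_struct('x', 2, 'y', 3)) features")

// drop a top-level struct field and a deeper one in a single call
val df3 = df2.withColumn("features", $"features".dropFields("feat1", "feat2.x"))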

The code doesn't work properly when the field to be dropped is inside a struct that is wrapped in an array. - mixermt

5

Expanding on spektom's answer, with support for array types:

object DataFrameUtils {

  private def dropSubColumn(col: Column, colType: DataType, fullColName: String, dropColName: String): Option[Column] = {
    if (fullColName.equals(dropColName)) {
      None
    } else if (dropColName.startsWith(s"$fullColName.")) {
      colType match {
        case colType: StructType =>
          Some(struct(
            colType.fields
              .flatMap(f =>
                dropSubColumn(col.getField(f.name), f.dataType, s"$fullColName.${f.name}", dropColName) match {
                  case Some(x) => Some(x.alias(f.name))
                  case None => None
                })
              : _*))
        case colType: ArrayType =>
          colType.elementType match {
            case innerType: StructType =>
              Some(struct(innerType.fields
                .flatMap(f =>
                  dropSubColumn(col.getField(f.name), f.dataType, s"$fullColName.${f.name}", dropColName) match {
                    case Some(x) => Some(x.alias(f.name))
                    case None => None
                  })
                : _*))
          }

        case other => Some(col)
      }
    } else {
      Some(col)
    }
  }

  protected def dropColumn(df: DataFrame, colName: String): DataFrame = {
    df.schema.fields
      .flatMap(f => {
        if (colName.startsWith(s"${f.name}.")) {
          dropSubColumn(col(f.name), f.dataType, f.name, colName) match {
            case Some(x) => Some((f.name, x))
            case None => None
          }
        } else {
          None
        }
      })
      .foldLeft(df.drop(colName)) {
        case (df, (colName, column)) => df.withColumn(colName, column)
      }
  }

  /**
    * Extended version of DataFrame that allows to operate on nested fields
    */
  implicit class ExtendedDataFrame(df: DataFrame) extends Serializable {
    /**
      * Drops nested field from DataFrame
      *
      * @param colName Dot-separated nested field name
      */
    def dropNestedColumn(colName: String): DataFrame = {
      DataFrameUtils.dropColumn(df, colName)
    }
  }

}

The struct call in the case colType: ArrayType branch needs to be wrapped in array, otherwise you "lose" the column's parent array wrapper. Also, the remaining items in the innermost struct (i.e. the one whose member was dropped) get converted to an array for some reason, which I'm still debugging. - Jeff Evans
@JeffEvans Did you find out why that field gets converted into an array? I've been debugging for a whole day and still don't understand what Spark is doing there. Could it be a bug? - Doru Chiulan
Not sure. The code we're using might be incorrect, but I can't yet fully understand the internals of the Spark library code, so I can't say. - Jeff Evans
As @JeffEvans mentioned, there is a bug here that converts the remaining items into an array. To avoid it, in the ArrayType case you should use case Some(x) => Some(x.getItem(0).alias(f.name)) instead of case Some(x) => Some(x.alias(f.name)). - Mahnaz
ArrayType -> StructType的情况下,在alias内部之前执行.getItem(0)确实“有效”。但是,这真的不应该是必需的。我相信Spark本身存在一个错误,因此我已经为此打开了Jira:https://issues.apache.org/jira/browse/SPARK-31779 - Jeff Evans
It looks like the Spark maintainers don't consider this a bug. Still, given their comments on the Jira I opened, I can confirm there is a workaround. Pasting it as a separate answer. - Jeff Evans

5
Expanding on the answer from mmendez.semantic here, and accounting for the issues described in the sub-thread:
  def dropSubColumn(col: Column, colType: DataType, fullColName: String, dropColName: String): Option[Column] = {
    if (fullColName.equals(dropColName)) {
      None
    } else if (dropColName.startsWith(s"$fullColName.")) {
      colType match {
        case colType: StructType =>
          Some(struct(
            colType.fields
                .flatMap(f =>
                  dropSubColumn(col.getField(f.name), f.dataType, s"$fullColName.${f.name}", dropColName) match {
                    case Some(x) => Some(x.alias(f.name))
                    case None => None
                  })
                : _*))
        case colType: ArrayType =>
          colType.elementType match {
            case innerType: StructType =>
              // we are potentially dropping a column from within a struct, that is itself inside an array
              // Spark has some very strange behavior in this case, which they insist is not a bug
              // see https://issues.apache.org/jira/browse/SPARK-31779 and associated comments
              // and also the thread here: https://dev59.com/slwY5IYBdhLWcg3wWWm0#39943812
              // this is a workaround for that behavior

              // first, get all struct fields
              val innerFields = innerType.fields
              // next, create a new type for all the struct fields EXCEPT the column that is to be dropped
              // we will need this later
              val preserveNamesStruct = ArrayType(StructType(
                innerFields.filterNot(f => s"$fullColName.${f.name}".equals(dropColName))
              ))
              // next, apply dropSubColumn recursively to build up the new values after dropping the column
              val filteredInnerFields = innerFields.flatMap(f =>
                dropSubColumn(col.getField(f.name), f.dataType, s"$fullColName.${f.name}", dropColName) match {
                    case Some(x) => Some(x.alias(f.name))
                    case None => None
                }
              )
              // finally, use arrays_zip to unwrap the arrays that were introduced by building up the new, filtered
              // struct in this way (see comments in SPARK-31779), and then cast to the StructType we created earlier
              // to get the original names back
              Some(arrays_zip(filteredInnerFields:_*).cast(preserveNamesStruct))
          }

        case _ => Some(col)
      }
    } else {
      Some(col)
    }
  }

  def dropColumn(df: DataFrame, colName: String): DataFrame = {
    df.schema.fields.flatMap(f => {
      if (colName.startsWith(s"${f.name}.")) {
        dropSubColumn(col(f.name), f.dataType, f.name, colName) match {
          case Some(x) => Some((f.name, x))
          case None => None
        }
      } else {
        None
      }
    }).foldLeft(df.drop(colName)) {
      case (df, (colName, column)) => df.withColumn(colName, column)
    }
  }

Usage from spark-shell:

// if defining the functions above in your spark-shell session, you first need imports
import org.apache.spark.sql._
import org.apache.spark.sql.types._

// now you can paste the function definitions

// create a deeply nested and complex JSON structure    
val jsonData = """{
      "foo": "bar",
      "top": {
        "child1": 5,
        "child2": [
          {
            "child2First": "one",
            "child2Second": 2,
            "child2Third": -19.51
          }
        ],
        "child3": ["foo", "bar", "baz"],
        "child4": [
          {
            "child2First": "two",
            "child2Second": 3,
            "child2Third": 16.78
          }
        ]
      }
    }"""

// read it into a DataFrame
val df = spark.read.option("multiline", "true").json(Seq(jsonData).toDS())

// remove a sub-column
val modifiedDf = dropColumn(df, "top.child2.child2First")

modifiedDf.printSchema
root
 |-- foo: string (nullable = true)
 |-- top: struct (nullable = false)
 |    |-- child1: long (nullable = true)
 |    |-- child2: array (nullable = true)
 |    |    |-- element: struct (containsNull = true)
 |    |    |    |-- child2Second: long (nullable = true)
 |    |    |    |-- child2Third: double (nullable = true)
 |    |-- child3: array (nullable = true)
 |    |    |-- element: string (containsNull = true)
 |    |-- child4: array (nullable = true)
 |    |    |-- element: struct (containsNull = true)
 |    |    |    |-- child2First: string (nullable = true)
 |    |    |    |-- child2Second: long (nullable = true)
 |    |    |    |-- child2Third: double (nullable = true)


modifiedDf.show(truncate=false)
+---+------------------------------------------------------+
|foo|top                                                   |
+---+------------------------------------------------------+
|bar|[5, [[2, -19.51]], [foo, bar, baz], [[two, 3, 16.78]]]|
+---+------------------------------------------------------+

I see we're using arrays_zip in the dropSubColumn def, but that won't work as of Spark 2.3. Is there an alternative? I tried creating a UDF as follows, but had no luck with the same approach: val zipped = udf((s: Seq[String], t: Seq[String]) => s zip t) - Sampat Kumar
As far as I remember, this works in 2.4. I don't have access to the underlying project code right now, though, so I'm not sure. Should be easy to test with spark-shell. - Jeff Evans
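A quick hedged check from spark-shell: arrays_zip was added in Spark 2.4, so the line below should fail to resolve on 2.3 and return zipped structs on 2.4+:

spark.sql("SELECT arrays_zip(array(1, 2), array('a', 'b'))").show(false)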

4

Another (PySpark) way is to drop the features.feat1 column by recreating features:

from pyspark.sql.functions import col, arrays_zip

display(df
        .withColumn("features", arrays_zip("features.feat2", "features.feat3"))
        .withColumn("features", col("features").cast(schema))
)

where schema is the new schema (excluding features.feat1):

from pyspark.sql.types import StructType, StructField, StringType

schema = StructType(
    [
      StructField('feat2', StringType(), True), 
      StructField('feat3', StringType(), True), 
    ]
  )

Award-winning answer. I didn't know arrays_zip() existed. - Patrick McGloin

2

PySpark implementation:

from typing import List

import pyspark.sql.functions as sf
from pyspark.sql import Column, DataFrame
from pyspark.sql.types import StructType

def _drop_nested_field(
    schema: StructType,
    field_to_drop: str,
    parents: List[str] = None,
) -> Column:
    parents = list() if parents is None else parents
    src_col = lambda field_names: sf.col('.'.join(f'`{c}`' for c in field_names))

    if '.' in field_to_drop:
        root, subfield = field_to_drop.split('.', maxsplit=1)
        field_to_drop_from = next(f for f in schema.fields if f.name == root)

        return sf.struct(
            *[src_col(parents + [f.name]) for f in schema.fields if f.name != root],
            _drop_nested_field(
                schema=field_to_drop_from.dataType,
                field_to_drop=subfield,
                parents=parents + [root]
            ).alias(root)
        )

    else:
        # select all columns except the one to drop
        return sf.struct(
            *[src_col(parents + [f.name]) for f in schema.fields if f.name != field_to_drop],
        )


def drop_nested_field(
    df: DataFrame,
    field_to_drop: str,
) -> DataFrame:
    if '.' in field_to_drop:
        root, subfield = field_to_drop.split('.', maxsplit=1)
        field_to_drop_from = next(f for f in df.schema.fields if f.name == root)

        return df.withColumn(root, _drop_nested_field(
            schema=field_to_drop_from.dataType,
            field_to_drop=subfield,
            parents=[root]
        ))
    else:
        return df.drop(field_to_drop)


df = drop_nested_field(df, 'a.b.c.d')

What is sf in _drop_nested_field? - prakharjain
@prakharjain import pyspark.sql.functions as sf - I've updated the example. - M.Vanderlee

2

Following spektom's Scala snippet, I created a similar Java version. Since Java 8 doesn't have foldLeft, I used forEachOrdered. This code works for Spark 2.x (I'm using 2.1). I also noticed that dropping a column and adding it back with withColumn under the same name doesn't work, so I'm simply replacing the column, and that seems to work.

The code is not fully tested; hope it works :-)

import java.util.Arrays;
import java.util.Optional;
import java.util.function.Consumer;
import java.util.stream.Stream;

import org.apache.spark.sql.Column;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.types.DataType;
import org.apache.spark.sql.types.StructType;

import scala.Tuple2;

import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.struct;

public class DataFrameUtils {

public static Dataset<Row> dropNestedColumn(Dataset<Row> dataFrame, String columnName) {
    final DataFrameFolder dataFrameFolder = new DataFrameFolder(dataFrame);
    Arrays.stream(dataFrame.schema().fields())
        .flatMap( f -> {
           if (columnName.startsWith(f.name() + ".")) {
               final Optional<Column> column = dropSubColumn(col(f.name()), f.dataType(), f.name(), columnName);
               if (column.isPresent()) {
                   return Stream.of(new Tuple2<>(f.name(), column));
               } else {
                   return Stream.empty();
               }
           } else {
               return Stream.empty();
           }
        }).forEachOrdered(colTuple -> dataFrameFolder.accept(colTuple));

    return dataFrameFolder.getDF();
}

private static Optional<Column> dropSubColumn(Column col, DataType colType, String fullColumnName, String dropColumnName) {
    Optional<Column> column = Optional.empty();
    if (!fullColumnName.equals(dropColumnName)) {
        if (colType instanceof StructType) {
            if (dropColumnName.startsWith(fullColumnName + ".")) {
                column = Optional.of(struct(getColumns(col, (StructType)colType, fullColumnName, dropColumnName)));
            }
        } else {
            column = Optional.of(col);
        }
    }

    return column;
}

private static Column[] getColumns(Column col, StructType colType, String fullColumnName, String dropColumnName) {
    return Arrays.stream(colType.fields())
        .flatMap(f -> {
                    final Optional<Column> column = dropSubColumn(col.getField(f.name()), f.dataType(),
                            fullColumnName + "." + f.name(), dropColumnName);
                    if (column.isPresent()) {
                        return Stream.of(column.get().alias(f.name()));
                    } else {
                        return Stream.empty();
                    }
                }
        ).toArray(Column[]::new);

}

private static class DataFrameFolder implements Consumer<Tuple2<String, Optional<Column>>> {
    private Dataset<Row> df;

    public DataFrameFolder(Dataset<Row> df) {
        this.df = df;
    }

    public Dataset<Row> getDF() {
        return df;
    }

    @Override
    public void accept(Tuple2<String, Optional<Column>> colTuple) {
        if (!colTuple._2().isPresent()) {
            df = df.drop(colTuple._1());
        } else {
            df = df.withColumn(colTuple._1(), colTuple._2().get());
        }
    }
}
}

Example usage:

private class Pojo {
    private String str;
    private Integer number;
    private List<String> strList;
    private Pojo2 pojo2;

    public String getStr() {
        return str;
    }

    public Integer getNumber() {
        return number;
    }

    public List<String> getStrList() {
        return strList;
    }

    public Pojo2 getPojo2() {
        return pojo2;
    }

}

private class Pojo2 {
    private String str;
    private Integer number;
    private List<String> strList;

    public String getStr() {
        return str;
    }

    public Integer getNumber() {
        return number;
    }

    public List<String> getStrList() {
        return strList;
    }

}

SQLContext context = new SQLContext(new SparkContext("local[1]", "test"));
Dataset<Row> df = context.createDataFrame(Collections.emptyList(), Pojo.class);
Dataset<Row> dfRes = DataFrameUtils.dropNestedColumn(df, "pojo2.str");

Original struct:

root
 |-- number: integer (nullable = true)
 |-- pojo2: struct (nullable = true)
 |    |-- number: integer (nullable = true)
 |    |-- str: string (nullable = true)
 |    |-- strList: array (nullable = true)
 |    |    |-- element: string (containsNull = true)
 |-- str: string (nullable = true)
 |-- strList: array (nullable = true)
 |    |-- element: string (containsNull = true)

After the drop:

root
 |-- number: integer (nullable = true)
 |-- pojo2: struct (nullable = false)
 |    |-- number: integer (nullable = true)
 |    |-- strList: array (nullable = true)
 |    |    |-- element: string (containsNull = true)
 |-- str: string (nullable = true)
 |-- strList: array (nullable = true)
 |    |-- element: string (containsNull = true)

Add a simple example of how to call it and I'll upvote you. - Panagiotis Drakatos
Added a usage example as requested by @xXxpRoGrAmmErxXx. - Lior Chaga

0

With Spark 3.1+, short and effective:

import org.apache.spark.sql.{DataFrame, Dataset}
import org.apache.spark.sql.functions.col

object DatasetOps {
  implicit class DatasetOps[T](val dataset: Dataset[T]) {
    def dropFields(fieldNames: String*): DataFrame =
      fieldNames.foldLeft(dataset.toDF()) { (dataset, fieldName) =>
        val subFieldRegex = "(\\w+)\\.(.+)".r
        fieldName match {
          case subFieldRegex(columnName, subFieldPath) =>
            dataset.withColumn(columnName, col(columnName).dropFields(subFieldPath))
          case _ => dataset.drop(fieldName)
        }
      }
  }
}

This also preserves the required-or-not (nullable) flags in the schema.

Usage:

dataset.dropFields("some_column", "some_struct.some_sub_field.some_field")
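For the dotted name, that call is in effect (a hedged reading of the regex above, which splits on the first dot):

import org.apache.spark.sql.functions.col

dataset.toDF()
  .drop("some_column")
  .withColumn("some_struct", col("some_struct").dropFields("some_sub_field.some_field"))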

0

Adding the Java version of the solution for this.

Utility class (pass your Dataset and the nested column that needs to be dropped to the dropNestedColumn function).

(There are a few mistakes in Lior Chaga's answer, which I corrected while trying to use his answer.)

import java.util.Arrays;
import java.util.Optional;
import java.util.function.Consumer;
import java.util.stream.Stream;

import org.apache.spark.sql.Column;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.functions;
import org.apache.spark.sql.types.DataType;
import org.apache.spark.sql.types.StructType;

import scala.Tuple2;

public class NestedColumnActions {

    private static final String DOT = ".";
/*
dataset : dataset in which we want to drop columns
columnName : nested column that needs to be deleted
*/
public static Dataset<?> dropNestedColumn(Dataset<?> dataset, String columnName) {

    //Special case of top level column deletion
    if(!columnName.contains("."))
        return dataset.drop(columnName);

    final DataSetModifier dataFrameFolder = new DataSetModifier(dataset);
    Arrays.stream(dataset.schema().fields())
            .flatMap(f -> {
                //If the column name to be deleted starts with current top level column
                if (columnName.startsWith(f.name() + DOT)) {
                    //Get new column structure under f , expected after deleting the required column
                    final Optional<Column> column = dropSubColumn(functions.col(f.name()), f.dataType(), f.name(), columnName);
                    if (column.isPresent()) {
                        return Stream.of(new Tuple2<>(f.name(), column));
                    } else {
                        return Stream.empty();
                    }
                } else {
                    return Stream.empty();
                }
            })
            //Call accept function with Tuples of (top level column name, new column structure under it)
            .forEach(colTuple -> dataFrameFolder.accept(colTuple));

    return dataFrameFolder.getDataset();
}

private static Optional<Column> dropSubColumn(Column col, DataType colType, String fullColumnName, String dropColumnName) {
    Optional<Column> column = Optional.empty();
    if (!fullColumnName.equals(dropColumnName)) {
        if (colType instanceof StructType) {
            if (dropColumnName.startsWith(fullColumnName + DOT)) {
                column = Optional.of(functions.struct(getColumns(col, (StructType) colType, fullColumnName, dropColumnName)));
            }
            else {
                column = Optional.of(col);
            }
        } else {
            column = Optional.of(col);
        }
    }

    return column;
}

private static Column[] getColumns(Column col, StructType colType, String fullColumnName, String dropColumnName) {
    return Arrays.stream(colType.fields())
            .flatMap(f -> {
                        final Optional<Column> column = dropSubColumn(col.getField(f.name()), f.dataType(),
                                fullColumnName + "." + f.name(), dropColumnName);
                        if (column.isPresent()) {
                            return Stream.of(column.get().alias(f.name()));
                        } else {
                            return Stream.empty();
                        }
                    }
            ).toArray(Column[]::new);

}

private static class DataSetModifier implements Consumer<Tuple2<String, Optional<Column>>> {
    private Dataset<?> df;

    public DataSetModifier(Dataset<?> df) {
        this.df = df;
    }

    public Dataset<?> getDataset() {
        return df;
    }

    /*
    colTuple[0]:top level column name
    colTuple[1]:new column structure under it
   */
    @Override
    public void accept(Tuple2<String, Optional<Column>> colTuple) {
        if (!colTuple._2().isPresent()) {
            df = df.drop(colTuple._1());
        } else {
            df = df.withColumn(colTuple._1(), colTuple._2().get());
        }
    }
}

}

