Spark Dataset groupBy and sum
I am using Spark 1.6.1 with Java as the programming language. The following code works fine with DataFrames:
simpleProf.groupBy(col("col1"), col("col2"))
          .agg(
              sum("CURRENT_MONTH"),
              sum("PREVIOUS_MONTH")
          );
But it does not work with Datasets. Any idea how to do the same with a Dataset in Java/Spark?
Cheers
Accepted answer
It does not work in the sense that after the groupBy I get a GroupedDataset object, and when I try to apply the agg function it requires a TypedColumn instead of a Column.
Ah, there was some confusion here because of the merging of Dataset and DataFrame in Spark 2.x, where there is a groupBy which works with relational (untyped) columns, and a groupByKey which works with typed columns. So, given that you are using an explicit Dataset in 1.6, the solution is to turn your columns into typed columns via the .as method:
sum("CURRENT_MONTH").as[Int]
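The line above uses Scala syntax; in Java the equivalent of `.as[Int]` is `Column.as(Encoder)`. A minimal sketch of how the original aggregation might look with typed columns, assuming `simpleProf` is the Dataset from the question (a hypothetical name) and that the month columns hold integral values summed to longs:

```java
// Hedged sketch against the Spark 1.6 Dataset API (not a verified build):
// Column.as(Encoder) turns an untyped Column into a TypedColumn, which is
// what GroupedDataset.agg(...) expects.
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.TypedColumn;
import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.sum;

// Typify each aggregate column via .as with an explicit encoder.
TypedColumn<Object, Long> currentMonthSum  = sum("CURRENT_MONTH").as(Encoders.LONG());
TypedColumn<Object, Long> previousMonthSum = sum("PREVIOUS_MONTH").as(Encoders.LONG());

// simpleProf is the Dataset from the question; groupBy on relational columns
// followed by agg over the typed columns mirrors the DataFrame version.
simpleProf.groupBy(col("col1"), col("col2"))
          .agg(currentMonthSum, previousMonthSum);
```

The encoder type must match what the aggregate actually returns (Spark's `sum` over integer columns produces a long), so `Encoders.LONG()` is used here rather than an int encoder.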