PySpark - Sum a column in a dataframe and return the result as an int

2022-01-09 00:00:00 python dataframe pyspark sum

Problem description

I have a pyspark dataframe with a column of numbers. I need to sum that column and then have the result returned as an int in a Python variable.

df = spark.createDataFrame([("A", 20), ("B", 30), ("D", 80)],["Letter", "Number"])

I do the following to sum the column.

df.groupBy().sum()

But I get a dataframe back.

+-----------+
|sum(Number)|
+-----------+
|        130|
+-----------+

I would like 130 returned as an int, stored in a variable to be used elsewhere in the program.

result = 130


Solution

The simplest way, really:

df.groupBy().sum().collect()
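
Note that collect() returns a list of Row objects rather than a number, so to end up with a plain Python int you can index into the first row and first column, roughly like this:

# collect() gives [Row(sum(Number)=130)]; take row 0, column 0 to get the int
result = df.groupBy().sum().collect()[0][0]

print(result)  # 130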

But it is a very slow operation: avoid groupByKey; you should use RDD and reduceByKey instead:

df.rdd.map(lambda x: (1,x[1])).reduceByKey(lambda x,y: x + y).collect()[0][1]
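
For readability, the same one-liner can be spelled out step by step (a sketch only; the intermediate variable names are illustrative, and row["Number"] is equivalent to x[1] for this schema):

# Pair every row with the same constant key so all values fall into one group
pairs = df.rdd.map(lambda row: (1, row["Number"]))

# Sum the values that share that key
totals = pairs.reduceByKey(lambda a, b: a + b)

# collect() returns [(1, 130)]; take the value of the single pair
result = totals.collect()[0][1]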

I tried it on a bigger dataset and measured the processing time:

RDD and reduceByKey: 2.23 s

GroupByKey: 30.5 s
