Importing a Large CSV File with Dask

2022-04-14  python dataframe dask dask-dataframe vaex

Problem Description

I'm importing a very large (680 GB) CSV file with Dask, but the output is not what I expected. My goal is to select only a few columns (6 of 50) and possibly filter them (I'm not sure about the filtering, because there seems to be no data?):

import dask.dataframe as dd

file_path = "/Volumes/Seagate/Work/Tickets/Third ticket/Extinction/species_all.csv"

cols = ['year', 'species', 'occurrenceStatus', 'individualCount', 'decimalLongitude', 'decimalLatitde']
dataset = dd.read_csv(file_path, names=cols, usecols=[9, 18, 19, 21, 22, 32])

When I read it into Jupyter, I can't make sense of the output. The console shows:

Dask DataFrame Structure:
                     year species occurrenceStatus individualCount decimalLongitude decimalLatitde
npartitions=11397                                                                                 
                   object  object           object          object           object         object
                      ...     ...              ...             ...              ...            ...
...                   ...     ...              ...             ...              ...            ...
                      ...     ...              ...             ...              ...            ...
                      ...     ...              ...             ...              ...            ...
Dask Name: read-csv, 11397 tasks

Solution

It looks like you have successfully created a Dask DataFrame. If you were expecting something like a pandas DataFrame, you can inspect the data with dataset.head(). For more complex computations, it is best to keep the dataset lazy (as a Dask DataFrame) and use standard pandas syntax for all transformations.

# this import is needed to call dask.compute
import dask

# for example, take a subset
subset_data = dataset[dataset['year'] > 2000]

# find the total for this column (still lazy at this point)
lazy_result = subset_data['individualCount'].sum()

# now that the target is known, use .compute
# note: dask.compute returns a tuple of results
computed_result = dask.compute(lazy_result)

Besides Dask, you could also look at vaex, which may be better for some purposes: https://vaex.io/
