Python Apache Beam multiple outputs and processing

2022-04-13 00:00:00 python apache-beam

Problem description

I am trying to run a job on Google Dataflow with the following flow:

Essentially, it takes a single data source, filters it by certain values of a dictionary key, and creates a separate output for each filter condition.

I wrote the following code:

import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# List of values to filter by
x_list = [1, 2, 3]

with beam.Pipeline(options=PipelineOptions().from_dictionary(pipeline_params)) as p:
    # Read newline-delimited JSON - each line parses to a dictionary
    log_data = (
        p
        | "Create " + input_file >> beam.io.textio.ReadFromText(input_file)
        | "Load " + input_file >> beam.Map(json.loads)
    )

    # For each value in x_list, filter log_data for dictionaries containing
    # the value and write each result out to a separate file
    for i in x_list:
        # Keep a dictionary if its 'key' entry matches the filter value
        filtered_log = log_data | "Filter_" + str(i) >> beam.Filter(lambda x: x['key'] == i)
        # Do additional processing
        processed_log = process_pcoll(filtered_log, event)
        # Write final file
        output = (
            processed_log
            | 'Dump_json_' + filename >> beam.Map(json.dumps)
            | "Save_" + filename >> beam.io.WriteToText(output_fp + filename, num_shards=0, shard_name_template="")
        )

Currently it only processes the first value in the list. I understand I probably need to use ParDo, but I am not sure how to fit it into my flow.
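One pitfall worth noting in the loop above, separate from the Beam-specific question: Python lambdas capture loop variables by reference, so filters built as `lambda x: x['key'] == i` inside a loop all end up comparing against whatever `i` holds when the pipeline actually executes. A minimal pure-Python sketch of the pitfall and the usual default-argument fix:

```python
# All lambdas built in the loop share the same variable i, so after the
# loop ends they all see its final value (3). Binding i as a default
# argument instead freezes the value at each iteration.
x_list = [1, 2, 3]

late = [lambda x: x == i for i in x_list]        # late binding: shared i
bound = [lambda x, i=i: x == i for i in x_list]  # value frozen per lambda

print([f(3) for f in late])   # [True, True, True]
print([f(3) for f in bound])  # [False, False, True]
```

The same `i=i` trick applies to the `beam.Filter` lambda in the question's loop.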


Solution

You can use TaggedOutput in Beam. Write a DoFn that tags each element of the PCollection.

import uuid

import apache_beam as beam
from apache_beam.pvalue import TaggedOutput

class TagData(beam.DoFn):
    def process(self, element):
        # Tag names must be strings, so stringify the key before tagging
        yield TaggedOutput(str(element.get('key')), element)


processed_tagged_log = processed_log | "tagged-data-by-key" >> beam.ParDo(TagData()).with_outputs(*[str(i) for i in x_list])

Now you can write each of these outputs to a separate file/table:

# Write each tagged output to a separate table/file
for key in x_list:
    processed_tagged_log[str(key)] | "save file %s" % uuid.uuid4() >> beam.io.WriteToText(output_fp + str(key) + filename, num_shards=0, shard_name_template="")
        

Source: https://beam.apache.org/documentation/sdks/pydoc/2.0.0/_modules/apache_beam/pvalue.html
