Beam/Dataflow custom Python job - Cloud Storage to PubSub

Problem Description

I need to perform a very simple transformation on some data (extracting a string from JSON) and then write it to PubSub. I'm attempting to use a custom Python Dataflow job to do so.

I've written a job which successfully writes back to Cloud Storage, but my attempts at even the simplest possible write to PubSub (no transformation) result in an error: JOB_MESSAGE_ERROR: Workflow failed. Causes: Expected custom source to have non-zero number of splits.

Has anyone successfully written to PubSub from GCS via Dataflow?

Can anyone shed some light on what is going wrong here?


import argparse

import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.options.pipeline_options import PipelineOptions, SetupOptions


def run(argv=None):

  parser = argparse.ArgumentParser()
  parser.add_argument('--input',
                      dest='input',
                      help='Input file to process.')
  parser.add_argument('--output',
                      dest='output',
                      help='Output file to write results to.')
  known_args, pipeline_args = parser.parse_known_args(argv)

  pipeline_options = PipelineOptions(pipeline_args)
  pipeline_options.view_as(SetupOptions).save_main_session = True
  with beam.Pipeline(options=pipeline_options) as p:

    lines = p | ReadFromText(known_args.input)

    output = lines  # Obviously not necessary, but this is where my simple extract goes

    output | beam.io.WriteToPubSub(known_args.output)  # This doesn't work - it fails with the error above


if __name__ == '__main__':
  run()

Solution

Currently it isn't possible to achieve this scenario: when you use streaming mode in Dataflow, the only source you can use is PubSub, and you can't switch to batch mode because the Apache Beam PubSub source and sink are only available for streaming pipelines (for remote execution such as the Dataflow runner).
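
To illustrate the constraint, here is a minimal sketch of the combination that does run in streaming mode on the Dataflow runner - reading from one Pub/Sub topic and writing to another. The project and topic names are placeholders, not values from the question:

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions, StandardOptions


def run_streaming():
  # Streaming mode must be enabled explicitly; Pub/Sub is then the only
  # supported source for the Dataflow runner.
  options = PipelineOptions()
  options.view_as(StandardOptions).streaming = True

  with beam.Pipeline(options=options) as p:
    (p
     | 'Read' >> beam.io.ReadFromPubSub(topic='projects/my-project/topics/input-topic')
     | 'Write' >> beam.io.WriteToPubSub('projects/my-project/topics/output-topic'))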

That is also why your pipeline runs successfully once you remove the WriteToPubSub step and the streaming flag.
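
For comparison, a rough sketch of the batch variant that the question says already works - reading from Cloud Storage and writing back to Cloud Storage with WriteToText. The bucket paths are placeholders:

import apache_beam as beam
from apache_beam.io import ReadFromText, WriteToText
from apache_beam.options.pipeline_options import PipelineOptions


def run_batch():
  # Batch mode: the text source and sink both work on the Dataflow runner,
  # so a GCS-to-GCS pipeline runs without issue.
  with beam.Pipeline(options=PipelineOptions()) as p:
    (p
     | 'Read' >> ReadFromText('gs://my-bucket/input/*.json')
     | 'Write' >> WriteToText('gs://my-bucket/output/results'))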
