In Apache Spark 2.0.0, is it possible to fetch the result of a query from an external database (rather than fetching the whole table)?

2021-11-20 · tags: pyspark, apache-spark, mysql, jdbc

Using pyspark:

from pyspark.sql import SparkSession

spark = SparkSession\
    .builder\
    .appName("spark play")\
    .getOrCreate()    

df = spark.read\
    .format("jdbc")\
    .option("url", "jdbc:mysql://localhost:port")\
    .option("dbtable", "schema.tablename")\
    .option("user", "username")\
    .option("password", "password")\
    .load()

Rather than fetching the whole "schema.tablename" table, I would prefer to grab the result set of a query.

Accepted Answer

As in 1.x, you can pass a valid subquery as the dbtable argument, for example:

...
.option("dbtable", "(SELECT foo, bar FROM schema.tablename) AS tmp")
...
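Putting the answer together with the question's code, a minimal sketch might look like the following. The connection URL, credentials, and column names (foo, bar) are placeholders carried over from the snippets above; substitute your own. The key point is that the string passed as dbtable must be a parenthesized subquery with an alias, since Spark inlines it where a table name would normally appear in its generated SQL.

```python
def read_subquery(url, user, password, subquery):
    """Sketch: load only a query's result set over JDBC.

    `url`, `user`, and `password` are placeholders -- use your own
    MySQL host, port, and credentials.
    """
    # Deferred import so the helper can be defined even where
    # pyspark is not installed.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("jdbc subquery").getOrCreate()
    return (spark.read
            .format("jdbc")
            .option("url", url)
            # A parenthesized, aliased subquery is accepted wherever a
            # table name would be.
            .option("dbtable", subquery)
            .option("user", user)
            .option("password", password)
            .load())

# The subquery must be wrapped in parentheses and given an alias ("AS tmp").
SUBQUERY = "(SELECT foo, bar FROM schema.tablename) AS tmp"
```

Called as, say, `read_subquery("jdbc:mysql://localhost:3306", "username", "password", SUBQUERY)`, this returns a DataFrame containing only the subquery's rows, so the database does the filtering rather than Spark pulling the entire table.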
