Does the Pyspark DataFrameWriter jdbc function's ignore option ignore the entire transaction or just the offending rows?
The Pyspark DataFrameWriter class has a jdbc function for writing a dataframe to sql. This function has an --ignore option that the documentation says will:
Silently ignore this operation if data already exists.
But will it ignore the entire transaction, or will it only ignore inserting the rows that are duplicates? What if I were to combine --ignore with the --append flag? Would the behavior change?
Accepted Answer
mode("ignore") is just a NOOP if the table (or another sink) already exists, and writing modes cannot be combined. If you're looking for something like INSERT IGNORE or INSERT INTO ... WHERE NOT EXISTS ..., you'll have to do it manually, for example with mapPartitions.
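To illustrate the row-level behavior the answer says you have to implement yourself: below is a minimal sketch using sqlite3's INSERT OR IGNORE (the analogue of MySQL's INSERT IGNORE), since a live Spark-to-JDBC setup isn't reproducible here. The table and column names are hypothetical; in Spark, this per-row logic would run inside a mapPartitions function that opens its own database connection per partition.

```python
import sqlite3

# Hypothetical target table with a primary key (names are illustrative).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# Rows to append; id=1 already exists in the table.
new_rows = [(1, "alice"), (2, "bob")]

# Row-level insert-ignore: only the conflicting row (id=1) is skipped,
# the non-duplicate row (id=2) is still inserted. This is the per-row
# behavior mode("ignore") does NOT give you -- it no-ops the whole write
# when the table already exists.
conn.executemany("INSERT OR IGNORE INTO users VALUES (?, ?)", new_rows)

print(conn.execute("SELECT id, name FROM users ORDER BY id").fetchall())
# [(1, 'alice'), (2, 'bob')]
```

The key distinction: mode("ignore") decides once, based on whether the sink exists at all, while the statement above decides per row based on key conflicts.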