Question
I have a table named "mytable" in Postgres with two columns, id (bigint) and value (varchar(255)).
id gets its value from a sequence using nextval('my_sequence').
A PySpark application takes a dataframe and uses the Postgres JDBC jar (postgresql-42.1.4.jar) to insert the dataframe into "mytable". I'm creating the id column using:
df.withColumn('id', lit("nextval('my_sequence')"))
Postgres interprets the column as character varying, because lit() sends the text as a string literal instead of evaluating the function.
I can see that there are ways to call Postgres functions when reading data (How to remotely execute a Postgres SQL function on Postgres using PySpark JDBC connector?), but I'm not sure how to call a Postgres function like nextval() when writing data to Postgres.
Here's how I am currently writing the data from Pyspark to Postgres:
df.write.format("jdbc") \
    .option("url", jdbc_url) \
    .option("dbtable", "mytable") \
    .mode("append") \
    .save()
How can one write to a Postgres table using PySpark when one column needs a sequence number from nextval()?
Answer 1:
TL;DR You cannot execute database code on insert unless you create your own JdbcDialect
and override insert logic. I reckon it is not something you want to do for such a small feature.
Personally, I would use a trigger:
CREATE FUNCTION set_id() RETURNS trigger AS $set_id$
BEGIN
    IF NEW.id IS NULL THEN
        NEW.id = nextval('my_sequence');
    END IF;
    RETURN NEW;
END;
$set_id$ LANGUAGE plpgsql;

CREATE TRIGGER set_id BEFORE INSERT ON mytable
    FOR EACH ROW EXECUTE PROCEDURE set_id();
and leave the rest of the job to the database server.
from pyspark.sql.functions import col, lit

df.select(lit(None).cast("bigint").alias("id"), col("value")).write
    ...
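Putting the two pieces together, a minimal sketch of the full write under this approach (it reuses jdbc_url and the table name from the question; db_user and db_password are illustrative placeholders, not part of the original):

from pyspark.sql.functions import col, lit

# id goes over the wire as NULL; the BEFORE INSERT trigger above
# assigns nextval('my_sequence') on the Postgres side.
# db_user and db_password are placeholders for your connection credentials.
df.select(lit(None).cast("bigint").alias("id"), col("value")) \
    .write \
    .format("jdbc") \
    .option("url", jdbc_url) \
    .option("dbtable", "mytable") \
    .option("user", db_user) \
    .option("password", db_password) \
    .mode("append") \
    .save()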
You could also use monotonically_increasing_id
(Primary keys with Apache Spark) and just shift values according to the largest id in the database, but it might be brittle.
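A rough sketch of that alternative, assuming the spark session and jdbc_url from the question. Note that monotonically_increasing_id() only guarantees unique (not consecutive) values within a single job, and concurrent writers could still collide, which is why it is brittle:

from pyspark.sql.functions import monotonically_increasing_id

# Read the current maximum id so new ids start above it
# (COALESCE covers the empty-table case).
max_id = spark.read.format("jdbc") \
    .option("url", jdbc_url) \
    .option("dbtable", "(SELECT COALESCE(MAX(id), 0) AS max_id FROM mytable) AS t") \
    .load() \
    .first()["max_id"]

# Shift the generated ids above the existing maximum; values are
# unique per run but not gap-free.
df_with_id = df.withColumn("id", monotonically_increasing_id() + max_id + 1)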
Source: https://stackoverflow.com/questions/48362994/how-to-use-nextval-in-a-postgres-jdbc-driver-for-pyspark