How to delete rows in a table created from a Spark dataframe?

Submitted by 不羁岁月 on 2021-02-07 07:01:47

Question


Basically, I would like to do a simple delete using SQL statements, but when I execute the SQL script it throws the following error:

pyspark.sql.utils.ParseException: u"\nmissing 'FROM' at 'a'(line 2, pos 23)\n\n== SQL ==\n\n DELETE a.* FROM adsquare a \n-----------------------^^^\n"

This is the script that I'm using:

sq = SparkSession.builder.config('spark.rpc.message.maxSize','1536').config("spark.sql.shuffle.partitions",str(shuffle_value)).getOrCreate()
adsquare = sq.read.csv(f, schema=adsquareSchemaDevice , sep=";", header=True)
adsquare_grid = adsqaureJoined.select("userid", "latitude", "longitude").repartition(1000).cache()
adsquare_grid.createOrReplaceTempView("adsquare")   

sql = """
    DELETE a.* FROM adsquare a
    INNER JOIN codepoint c ON a.grid_id = c.grid_explode
    WHERE dis2 > 1 """

sq.sql(sql)

Note: The codepoint table is created during the execution.

Is there any other way I can delete the rows with the above conditions?


Answer 1:


You cannot delete rows from a DataFrame, but you can create a new DataFrame that excludes the unwanted records.

sql = """
    Select a.* FROM adsquare a
    INNER JOIN codepoint c ON a.grid_id = c.grid_explode
    WHERE dis2 <= 1 """

new_df = sq.sql(sql)

In this way you can create a new DataFrame. Here I reversed the condition to dis2 <= 1, so the query keeps the rows the DELETE would have left behind.




Answer 2:


DataFrames in Apache Spark are immutable, so you cannot change one in place. To "delete" rows from a DataFrame, filter out the rows you do not want and save the result in another DataFrame.




Answer 3:


You cannot delete rows from a DataFrame because the underlying Hadoop storage follows a WORM (write once, read many) model. Instead, filter out the records you want removed in the SQL statement; that gives you a new DataFrame without them.



Source: https://stackoverflow.com/questions/43515193/how-to-delete-rows-in-a-table-created-from-a-spark-dataframe
