Partition of Timestamp Column in PySpark DataFrames


Question


I have a DataFrame in PySpark in the following format:

Date        Id  Name    Hours   Dno Dname
12/11/2013  1   sam     8       102 It
12/10/2013  2   Ram     7       102 It
11/10/2013  3   Jack    8       103 Accounts
12/11/2013  4   Jim     9       101 Marketing

I want to partition it by Dno and save it as a table in Hive using the Parquet format.

df.write.saveAsTable(
    'default.testing', mode='overwrite', partitionBy='Dno', format='parquet')

The query worked fine and created the table in Hive with Parquet format.
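
As a quick sanity check (a minimal sketch, assuming a Hive-enabled SparkSession named spark, which is not shown in the question), you can list the partitions registered in the metastore:

# List the partitions Hive registered for the table;
# expect one entry per department number, e.g. Dno=101, Dno=102, Dno=103
spark.sql("SHOW PARTITIONS default.testing").show()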

Now I want to partition based on the year and month of the date column. The timestamps are Unix timestamps.

How can I achieve that in PySpark? I have done it in Hive, but I am unable to do it in PySpark.


Answer 1:


Just extract the fields you want to use and provide a list of columns as an argument to the writer's partitionBy. If timestamp holds UNIX timestamps expressed in seconds:

# Sample data: the first column holds UNIX timestamps in seconds
df = sc.parallelize([
    (1484810378, 1, "sam", 8, 102, "It"),
    (1484815300, 2, "ram", 7, 103, "Accounts")
]).toDF(["timestamp", "id", "name", "hours", "dno", "dname"])

Add the year and month columns:

from pyspark.sql.functions import year, month, col

# Cast the UNIX seconds to a timestamp type, then extract year and month
df_with_year_and_month = (df
    .withColumn("year", year(col("timestamp").cast("timestamp")))
    .withColumn("month", month(col("timestamp").cast("timestamp"))))

and write:

(df_with_year_and_month
    .write
    .partitionBy("year", "month")
    .mode("overwrite")
    .format("parquet")
    .saveAsTable("default.testing"))


Source: https://stackoverflow.com/questions/41728304/partition-of-timestamp-column-in-dataframes-pyspark
