Split Time Series pySpark data frame into test & train without using random split


Question


I have a Spark time-series DataFrame that I would like to split 80-20 (train-test). Since this is time-series data, I don't want to do a random split. How do I do this so that the first part of the data goes into the training DataFrame and the second part into the test DataFrame?


Answer 1:


You can use pyspark.sql.functions.percent_rank() to get the percentile ranking of your DataFrame ordered by the timestamp/date column. Then pick all the rows with a rank <= 0.8 as your training set and the rest as your test set.

For example, if you had the following DataFrame:

df.show(truncate=False)
#+---------------------+---+
#|date                 |x  |
#+---------------------+---+
#|2018-01-01 00:00:00.0|0  |
#|2018-01-02 00:00:00.0|1  |
#|2018-01-03 00:00:00.0|2  |
#|2018-01-04 00:00:00.0|3  |
#|2018-01-05 00:00:00.0|4  |
#+---------------------+---+
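(For reference, a toy DataFrame like this could be built as follows. The SparkSession setup and literal values here are my own reconstruction, not part of the original question:)

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
# The date column is a string here, but in this format it sorts
# chronologically, which is all percent_rank needs.
df = spark.createDataFrame(
    [
        ("2018-01-01 00:00:00.0", 0),
        ("2018-01-02 00:00:00.0", 1),
        ("2018-01-03 00:00:00.0", 2),
        ("2018-01-04 00:00:00.0", 3),
        ("2018-01-05 00:00:00.0", 4),
    ],
    ["date", "x"],
)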

You'd want the first 4 rows in your training set and the last one in your test set. First add a column rank:

from pyspark.sql.functions import percent_rank
from pyspark.sql import Window

# Note: an empty partitionBy() puts all rows in a single partition,
# which is required for a global ranking but can be slow on large data.
df = df.withColumn("rank", percent_rank().over(Window.partitionBy().orderBy("date")))
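For the five example rows, percent_rank computes (rowNumber - 1) / (numRows - 1), so the ranks run evenly from 0.0 to 1.0 (the output below is reconstructed by hand, not copied from a real session):

df.show(truncate=False)
#+---------------------+---+----+
#|date                 |x  |rank|
#+---------------------+---+----+
#|2018-01-01 00:00:00.0|0  |0.0 |
#|2018-01-02 00:00:00.0|1  |0.25|
#|2018-01-03 00:00:00.0|2  |0.5 |
#|2018-01-04 00:00:00.0|3  |0.75|
#|2018-01-05 00:00:00.0|4  |1.0 |
#+---------------------+---+----+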

Now use rank to split your data into train and test:

train_df = df.where("rank <= .8").drop("rank")
train_df.show()
#+---------------------+---+
#|date                 |x  |
#+---------------------+---+
#|2018-01-01 00:00:00.0|0  |
#|2018-01-02 00:00:00.0|1  |
#|2018-01-03 00:00:00.0|2  |
#|2018-01-04 00:00:00.0|3  |
#+---------------------+---+

test_df = df.where("rank > .8").drop("rank")
test_df.show()
#+---------------------+---+
#|date                 |x  |
#+---------------------+---+
#|2018-01-05 00:00:00.0|4  |
#+---------------------+---+
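If you need this kind of chronological split more than once, the same idea wraps naturally into a small helper. This is just a sketch; train_test_split_by_time is a made-up name, not a built-in:

from pyspark.sql import Window
from pyspark.sql.functions import percent_rank

def train_test_split_by_time(df, time_col, train_fraction=0.8):
    # Rank every row by time, then send the earliest `train_fraction`
    # of rows to train and the rest to test.
    w = Window.partitionBy().orderBy(time_col)
    ranked = df.withColumn("rank", percent_rank().over(w))
    train = ranked.where(ranked["rank"] <= train_fraction).drop("rank")
    test = ranked.where(ranked["rank"] > train_fraction).drop("rank")
    return train, test

train_df, test_df = train_test_split_by_time(df, "date", 0.8)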


Source: https://stackoverflow.com/questions/51772908/split-time-series-pyspark-data-frame-into-test-train-without-using-random-spli
