Question
The example is as follows:
df=spark.createDataFrame([
(1,"2017-05-15 23:12:26",2.5),
(1,"2017-05-09 15:26:58",3.5),
(1,"2017-05-18 15:26:58",3.6),
(2,"2017-05-15 15:24:25",4.8),
(3,"2017-05-25 15:14:12",4.6)],["index","time","val"]).orderBy("index","time")
df.show()
+-----+-------------------+---+
|index|               time|val|
+-----+-------------------+---+
|    1|2017-05-09 15:26:58|3.5|
|    1|2017-05-15 23:12:26|2.5|
|    1|2017-05-18 15:26:58|3.6|
|    2|2017-05-15 15:24:25|4.8|
|    3|2017-05-25 15:14:12|4.6|
+-----+-------------------+---+
For the function pyspark.sql.functions.window:
window(timeColumn, windowDuration, slideDuration=None, startTime=None)
timeColumn: The time column must be of TimestampType.
windowDuration: Durations are provided as strings, e.g. '1 second', '1 day 12 hours', '2 minutes'. Valid interval strings are 'week', 'day', 'hour', 'minute', 'second', 'millisecond', 'microsecond'.
slideDuration: If the 'slideDuration' is not provided, the windows will be tumbling windows.
startTime: the startTime is the offset with respect to 1970-01-01 00:00:00 UTC with which to start window intervals. For example, in order to have hourly tumbling windows that start 15 minutes past the hour, e.g. 12:15-13:15, 13:15-14:15... provide `startTime` as `15 minutes`.
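For illustration, the documented 12:15-13:15 behavior corresponds to a call like the following sketch (the DataFrame events and its timestamp column ts are hypothetical names, not part of my data):

from pyspark.sql import functions as F

# Hourly tumbling windows shifted 15 minutes past each hour,
# e.g. 12:15-13:15, 13:15-14:15, ... ('events' and 'ts' are hypothetical).
hourly = events.groupBy(F.window("ts", "1 hour", startTime="15 minutes")).count()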
I want to sum the column "val" over every 5 days, so I set the parameter "slideDuration" to the string value "5 day":
timeColumn="time", windowDuration="5 day", slideDuration="5 day"
The code is as follows:
from pyspark.sql import functions as F
df2 = df.groupBy("index", F.window("time", windowDuration="5 day", slideDuration="5 day")).agg(F.sum("val").alias("sum_val"))
df2.select("index", "window.start", "window.end", "sum_val").show(truncate=False)
When I look at "window.start", the windows do not start at the minimal time in the column "time", nor at any time I set, but at some other time from nowhere.
The results are as follows:
+-----+---------------------+---------------------+-------+
|index|start                |end                  |sum_val|
+-----+---------------------+---------------------+-------+
|1    |2017-05-09 08:00:00.0|2017-05-14 08:00:00.0|3.5    |
|1    |2017-05-14 08:00:00.0|2017-05-19 08:00:00.0|6.1    |
|2    |2017-05-14 08:00:00.0|2017-05-19 08:00:00.0|4.8    |
|3    |2017-05-24 08:00:00.0|2017-05-29 08:00:00.0|4.6    |
+-----+---------------------+---------------------+-------+
When I set the parameter "startTime" to '0 second' (the code is as follows):
df2 = df.groupBy("index", F.window("time", windowDuration="5 day", slideDuration="5 day", startTime="0 second")).agg(F.sum("val").alias("sum_val"))
df2.select("index", "window.start", "window.end", "sum_val").show(truncate=False)
+-----+---------------------+---------------------+-------+
|index|start                |end                  |sum_val|
+-----+---------------------+---------------------+-------+
|1    |2017-05-09 08:00:00.0|2017-05-14 08:00:00.0|3.5    |
|1    |2017-05-14 08:00:00.0|2017-05-19 08:00:00.0|6.1    |
|2    |2017-05-14 08:00:00.0|2017-05-19 08:00:00.0|4.8    |
|3    |2017-05-24 08:00:00.0|2017-05-29 08:00:00.0|4.6    |
+-----+---------------------+---------------------+-------+
The result shows that the windows still do not start at the minimal time in the column "time".
So how can I make this function start its windows at the minimal time in the column "time", or at a time I specify, such as "2017-05-09 15:25:30"? I would be very thankful if you could help me figure this out.
The official documentation of 'startTime' is as follows:
The startTime is the offset with respect to 1970-01-01 00:00:00 UTC with which to start window intervals.
For example, in order to have hourly tumbling windows that start 15 minutes past the hour, e.g. 12:15-13:15, 13:15-14:15...
provide `startTime` as `15 minutes`.
References:
1. What does the 'pyspark.sql.functions.window' function's 'startTime' argument do?
2. https://github.com/apache/spark/pull/12008
3. http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=window#pyspark.sql.functions.window
Answer 1:
The problem you experience is completely unrelated to startTime and has two components:
1. Spark's timestamp semantics, where timestamps are always handled relative to the local timezone. Based on the offset shown in the output, we can conclude that the JVM uses GMT+8 or an equivalent timezone. Consider these two scenarios:
>>> from pyspark.sql.functions import window
>>>
>>> spark.conf.get("spark.driver.extraJavaOptions")
'-Duser.timezone=GMT+8'
>>> spark.conf.get("spark.executor.extraJavaOptions")
'-Duser.timezone=GMT+8'
>>> str(spark.sparkContext._jvm.java.util.TimeZone.getDefault())
'sun.util.calendar.ZoneInfo[id="GMT+08:00",offset=28800000,dstSavings=0,useDaylight=false,transitions=0,lastRule=null]'
>>>
>>> df = spark.createDataFrame([(1, "2017-05-15 23:12:26", 2.5)], ["index", "time", "val"])
>>> (df
...     .withColumn("w", window("time", windowDuration="5 days", slideDuration="5 days"))
...     .show(1, False))
...
+-----+-------------------+---+---------------------------------------------+
|index|time               |val|w                                            |
+-----+-------------------+---+---------------------------------------------+
|1    |2017-05-15 23:12:26|2.5|[2017-05-14 08:00:00.0,2017-05-19 08:00:00.0]|
+-----+-------------------+---+---------------------------------------------+
vs.
>>> from pyspark.sql.functions import window
>>>
>>> spark.conf.get("spark.driver.extraJavaOptions")
'-Duser.timezone=UTC'
>>> spark.conf.get("spark.executor.extraJavaOptions")
'-Duser.timezone=UTC'
>>> str(spark.sparkContext._jvm.java.util.TimeZone.getDefault())
'sun.util.calendar.ZoneInfo[id="UTC",offset=0,dstSavings=0,useDaylight=false,transitions=0,lastRule=null]'
>>>
>>> df = spark.createDataFrame([(1, "2017-05-15 23:12:26", 2.5)], ["index", "time", "val"])
>>> (df
...     .withColumn("w", window("time", windowDuration="5 days", slideDuration="5 days"))
...     .show(1, False))
...
+-----+-------------------+---+---------------------------------------------+
|index|time               |val|w                                            |
+-----+-------------------+---+---------------------------------------------+
|1    |2017-05-15 23:12:26|2.5|[2017-05-14 00:00:00.0,2017-05-19 00:00:00.0]|
+-----+-------------------+---+---------------------------------------------+
As you can see, the output is adjusted according to the local timezone, while the input string is parsed as a UTC timestamp.
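If you want the boundaries to line up with calendar days regardless of where the cluster runs, one option is to pin the session timezone to UTC. This is a sketch assuming Spark 2.2+, where the spark.sql.session.timeZone configuration exists; on older versions the JVM timezone has to be set via -Duser.timezone as shown above.

from pyspark.sql import SparkSession
from pyspark.sql.functions import window

spark = SparkSession.builder.getOrCreate()
# Interpret and display timestamps in UTC instead of the JVM default (GMT+8 here).
spark.conf.set("spark.sql.session.timeZone", "UTC")

df = spark.createDataFrame([(1, "2017-05-15 23:12:26", 2.5)], ["index", "time", "val"])
df.withColumn("w", window("time", windowDuration="5 days", slideDuration="5 days")).show(1, False)
# Expected: window boundaries at 2017-05-14 00:00:00 / 2017-05-19 00:00:00 (no +08:00 shift).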
2. window semantics. If you take a look at the execution plan:

>>> df.withColumn("w", window("time", windowDuration="5 days", slideDuration="5 days")).explain(False)

== Physical Plan ==
*Project [index#21L, time#22, val#23, window#68 AS w#67]
+- *Filter (((isnotnull(time#22) && isnotnull(window#68)) && (cast(time#22 as timestamp) >= window#68.start)) && (cast(time#22 as timestamp) < window#68.end))
   +- *Expand [List(named_struct(start, ((((CEIL((cast((precisetimestamp(cast(time#22 as timestamp)) - 0) as double) / 4.32E11)) + 0) - 1) * 432000000000) + 0), end, ((((CEIL((cast((precisetimestamp(cast(time#22 as timestamp)) - 0) as double) / 4.32E11)) + 0) - 1) * 432000000000) + 432000000000)), index#21L, time#22, val#23), List(named_struct(start, ((((CEIL((cast((precisetimestamp(cast(time#22 as timestamp)) - 0) as double) / 4.32E11)) + 1) - 1) * 432000000000) + 0), end, ((((CEIL((cast((precisetimestamp(cast(time#22 as timestamp)) - 0) as double) / 4.32E11)) + 1) - 1) * 432000000000) + 432000000000)), index#21L, time#22, val#23)], [window#68, index#21L, time#22, val#23]
      +- Scan ExistingRDD[index#21L,time#22,val#23]
and focus on a single component:
((((CEIL((cast((precisetimestamp(cast(time#22 as timestamp)) - 0) as double) / 4.32E11)) + 0) - 1) * 432000000000)
you'll see that window takes a ceiling of the numeric value, effectively truncating the timestamp to whole multiples of the slide interval counted from the Unix epoch (here 4.32E11 microseconds, i.e. 5 days).
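To make that arithmetic concrete, here is a plain-Python sketch of the same computation (my own illustration for intuition, not Spark's code path):

import math
from datetime import datetime, timezone

slide_us = 5 * 24 * 60 * 60 * 1_000_000                   # 4.32E11 microseconds = 5 days
ts = datetime(2017, 5, 15, 23, 12, 26, tzinfo=timezone.utc)
ts_us = int(ts.timestamp() * 1_000_000)                   # microseconds since the Unix epoch

start_us = (math.ceil(ts_us / slide_us) - 1) * slide_us   # window start with startTime = 0
end_us = start_us + slide_us

print(datetime.fromtimestamp(start_us / 1e6, tz=timezone.utc))  # 2017-05-14 00:00:00+00:00
print(datetime.fromtimestamp(end_us / 1e6, tz=timezone.utc))    # 2017-05-19 00:00:00+00:00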
Finally, startTime in

df.groupBy("index", F.window("time", windowDuration="5 day", slideDuration="5 day", startTime="0 second"))

has no effect at all, because it behaves like the default (no offset). If anything, you can try:
from pyspark.sql.functions import col, window, min as min_

# Offset (in milliseconds) of the earliest timestamp from the preceding midnight.
(startTime, ) = (df
    .select(min_(col("time").cast("timestamp")).alias("ts"))
    .select(
        ((col("ts").cast("double") -
            col("ts").cast("date").cast("timestamp").cast("double")
        ) * 1000).cast("integer"))
    .first())

# Use that offset so the window boundaries line up with the earliest time of day.
w = window(
    "time",
    windowDuration="5 days",
    slideDuration="5 days",
    startTime="{} milliseconds".format(startTime))

df.withColumn("w", w).show(1, False)
+-----+-------------------+---+---------------------------------------------+
|index|time               |val|w                                            |
+-----+-------------------+---+---------------------------------------------+
|1    |2017-05-15 23:12:26|2.5|[2017-05-14 23:12:26.0,2017-05-19 23:12:26.0]|
+-----+-------------------+---+---------------------------------------------+
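The same offset window can then be plugged back into the aggregation from the question, for example (a sketch reusing the w defined above; F is pyspark.sql.functions):

from pyspark.sql import functions as F

# Group by "index" and the offset 5-day window, then sum "val" per window.
df2 = df.groupBy("index", w).agg(F.sum("val").alias("sum_val"))
df2.select("index", "window.start", "window.end", "sum_val").show(truncate=False)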
Source: https://stackoverflow.com/questions/48351951/what-does-the-pyspark-sql-functions-window-functions-starttime-argument-do-an