Why does df.limit keep changing in PySpark?


Question


I'm creating a data sample from some dataframe df with

rdd = df.limit(10000).rdd

This operation takes quite some time (why, actually? can it not short-circuit after 10000 rows?), so I assume I now have a new RDD.

However, when I now work on rdd, I get different rows every time I access it, as if it resamples over and over again. Caching the RDD helps a bit, but surely that's not safe?

What is the reason behind it?

Update: Here is a reproduction on Spark 1.5.2

from operator import add
from pyspark.sql import Row

# run in the pyspark shell, so sc (the SparkContext) is already defined
rdd = sc.parallelize([Row(i=i) for i in range(1000000)], 100)
rdd1 = rdd.toDF().limit(1000).rdd
for _ in range(3):
    print(rdd1.map(lambda row: row.i).reduce(add))

The output is

499500
19955500
49651500

I'm surprised that .rdd doesn't pin the data down.

EDIT: To show that it gets trickier than just the re-execution issue, here is a single action which produces incorrect results on Spark 2.0.0.2.5.0

from pyspark.sql import Row

# same shell session (sc is the SparkContext), now with 200 partitions
rdd = sc.parallelize([Row(i=i) for i in range(1000000)], 200)
rdd1 = rdd.toDF().limit(12345).rdd
rdd2 = rdd1.map(lambda x: (x, x))
rdd2.join(rdd2).count()
# result is 10240, even though a self-join should always return 12345

Basically, whenever you use limit, your results may be outright wrong. I don't mean "just one of many samples", but genuinely incorrect (since in this case the result should always be 12345).
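One way to work around this (a sketch that is not part of the original post, assuming the column i is unique) is to make the subset deterministic before dropping to the RDD, for example by sorting first; the sort costs a shuffle, but it removes the ambiguity about which rows limit keeps.

from pyspark.sql import Row

rdd = sc.parallelize([Row(i=i) for i in range(1000000)], 200)
df = rdd.toDF()

# sort before limiting so "the first 12345 rows" is well defined
rdd1 = df.orderBy("i").limit(12345).rdd

# alternative: cache the limited RDD so it is computed only once;
# cache() is lazy, so the rows are pinned by the first action that runs
# rdd1 = df.limit(12345).rdd.cache()

rdd2 = rdd1.map(lambda x: (x, x))
rdd2.join(rdd2).count()  # with a deterministic subset this should return 12345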


Answer 1:


Because Spark is distributed, in general it's not safe to assume deterministic results. Your example is taking the "first" 10,000 rows of a DataFrame. Here, there's ambiguity (and hence non-determinism) in what "first" means. That will depend on the internals of Spark. For example, it could be the first partition that responds to the driver. That partition could change with networking, data locality, etc.

Even once you cache the data, I still wouldn't rely on getting the same data back every time, though I certainly would expect it to be more consistent than reading from disk.
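If the goal is a fixed sample that never changes between actions, one blunt but reliable option (a sketch, not part of this answer) is to materialize the rows on the driver once and re-parallelize them:

sample_rows = df.limit(10000).collect()   # a concrete list of Rows on the driver
fixed_rdd = sc.parallelize(sample_rows)   # this RDD never re-reads df, so it cannot change

This only makes sense when the sample comfortably fits in driver memory.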




Answer 2:


Spark is lazy, so each action you take recalculates the data returned by limit(). If the underlying data is split across multiple partitions, then every time you evaluate it, limit might be pulling from a different partition (e.g. if your data is stored across 10 Parquet files, the first limit call might pull from file 1, the second from file 7, and so on).
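A sketch of how to force limit() to be evaluated only once, assuming df already exists (cached blocks can still be lost and recomputed, e.g. if an executor dies, so this reduces rather than removes the non-determinism):

from pyspark import StorageLevel

rdd1 = df.limit(1000).rdd.persist(StorageLevel.MEMORY_AND_DISK)
rdd1.count()   # eager action: materializes the limited rows into the cache
# subsequent actions on rdd1 reuse the cached partitions instead of re-running limit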




Answer 3:


Once the rdd is set, it does not re-sample. It's hard to give you any concrete feedback here without seeing more of your code or data, but you can easily prove that rdds do not re-sample by going into the pyspark shell and doing the following:

>>> d = [{'name': 'Alice', 'age': 1, 'pet': 'cat'}, {'name': 'Bob', 'age': 2, 'pet': 'dog'}]
>>> df = sqlContext.createDataFrame(d)
>>> rdd = df.limit(1).rdd

Now you can repeatedly print out the contents of the rdd with a small helper function:

>>> def p(x):
...     print(x)
...

Your output will always contain the same value:

>>> rdd.foreach(p)
Row(age=1, name=u'Alice', pet=u'cat')
>>> rdd.foreach(p)
Row(age=1, name=u'Alice', pet=u'cat')
>>> rdd.foreach(p)
Row(age=1, name=u'Alice', pet=u'cat')

I would advise that you check either your code or your data.



Source: https://stackoverflow.com/questions/37147032/why-does-df-limit-keep-changing-in-pyspark
