Spark: how can I create a local DataFrame in each executor?

怎甘沉沦 — submitted 2019-12-19 10:28:42

Question


In Spark Scala, is there a way to create a local DataFrame inside executors, the way a pandas DataFrame can be used in PySpark? Inside mapPartitions I want to convert the iterator into a local DataFrame (like a pandas DataFrame in Python) so that DataFrame features can be used instead of hand-coding them over iterators.


Answer 1:


That is not possible.

A DataFrame is a distributed collection in Spark, and DataFrames can only be created on the driver (i.e., outside of transformations/actions).

Additionally, Spark does not allow operations on RDDs/DataFrames/Datasets nested inside other operations; for example, the following code will fail:
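To make the driver-side restriction concrete, here is a minimal sketch of the normal way a DataFrame is created (the app name and sample data are illustrative, not from the original post):

```scala
import org.apache.spark.sql.SparkSession

object DriverSideDf {
  def main(args: Array[String]): Unit = {
    // The SparkSession lives on the driver; DataFrames are created here.
    val spark = SparkSession.builder()
      .appName("driver-side-df")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Fine: this runs on the driver.
    val df = Seq((1, "a"), (2, "b")).toDF("id", "label")
    df.show()

    spark.stop()
  }
}
```

Trying to do the same `toDF` call inside a `map` or `mapPartitions` closure would ship the code to executors, where no usable SparkSession exists.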

rdd.map(v => rdd1.filter(e => e == v))

DataFrames and Datasets are backed by RDDs underneath, so the same restriction applies to them.
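A common workaround, sketched below (this is an assumption on my part, not something stated in the answer), is to materialize each partition's iterator into an ordinary Scala collection inside mapPartitions and use the Scala collection API (groupBy, map, filter, and so on) in place of DataFrame operations:

```scala
import org.apache.spark.sql.SparkSession

object PartitionLocalWork {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("partition-local")
      .master("local[*]")
      .getOrCreate()
    val rdd = spark.sparkContext.parallelize(Seq(1, 2, 2, 3, 3, 3), numSlices = 2)

    // Per-partition "local dataframe"-style work with plain Scala
    // collections: group and count within each partition without
    // touching any distributed Spark API inside the closure.
    val counts = rdd.mapPartitions { it =>
      val local = it.toSeq  // caution: loads the whole partition into memory
      local.groupBy(identity).map { case (k, v) => (k, v.size) }.iterator
    }.collect()

    counts.foreach(println)
    spark.stop()
  }
}
```

Note that `it.toSeq` buffers the entire partition in executor memory, so this only works when partitions are small enough to fit; it trades the streaming behavior of the iterator for the richer collection API.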



Source: https://stackoverflow.com/questions/48715661/spark-how-can-i-create-local-dataframe-in-each-executor
