Moving Spark DataFrame from Python to Scala within Zeppelin

Submitted by 我怕爱的太早我们不能终老 on 2019-12-04 13:00:23

Question


I created a spark DataFrame in a Python paragraph in Zeppelin.

from pyspark.sql import SQLContext

sqlCtx = SQLContext(sc)
spDf = sqlCtx.createDataFrame(df)

where df is a pandas DataFrame:

print(type(df))
<class 'pandas.core.frame.DataFrame'>

What I want to do is move spDf from one Python paragraph to another Scala paragraph. A reasonable way to do this looks to be z.put:

z.put("spDf", spDf)

and I got this error:

AttributeError: 'DataFrame' object has no attribute '_get_object_id'

Any suggestion to fix the error? Or any suggestion to move spDf?


Answer 1:


You can put the internal Java object, not the Python wrapper:

%pyspark

df = sc.parallelize([(1, "foo"), (2, "bar")]).toDF(["k", "v"])
z.put("df", df._jdf)

and then, on the Scala side, make sure you cast it to the correct type:

val df = z.get("df").asInstanceOf[org.apache.spark.sql.DataFrame]
// df: org.apache.spark.sql.DataFrame = [k: bigint, v: string]

but it is better to register a temporary table:

%pyspark

# registerTempTable in Spark 1.x
df.createTempView("df")

and use SQLContext.table to read it:

// sqlContext.table in Spark 1.x
val df = spark.table("df")
// df: org.apache.spark.sql.DataFrame = [k: bigint, v: string]

To convert in the opposite direction, see Zeppelin: Scala Dataframe to python.



Source: https://stackoverflow.com/questions/37263300/moving-spark-dataframe-from-python-to-scala-whithn-zeppelin
