Spark 1.6: filtering DataFrames generated by describe()

Submitted by 夏天 on 2019-12-05 02:44:43

I have a toy dataset containing some health/disease data, and I want to filter the DataFrame produced by describe() to extract a single statistic, e.g. the standard deviation:

import org.apache.spark.sql.Row

// Map describe()'s output to (statistic name, value) pairs, keep only
// the "stddev" row, and collect the value of the first data column.
val stddev_tobacco = rawData.describe().rdd.map {
    case r: Row => (r.getAs[String]("summary"), r.get(1))
}.filter(_._1 == "stddev").map(_._2).collect()
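
For reference, describe() returns a small DataFrame in which every statistic is typed as a string, so the collected value usually needs an explicit parse. A minimal sketch of that variant, reusing rawData from the question:

// describe() output: a "summary" column plus one *string* column per
// described input column, hence the explicit toDouble.
val stddevTobacco: Array[Double] = rawData.describe().rdd
    .filter(_.getAs[String]("summary") == "stddev")
    .map(_.getString(1).toDouble) // column 1 = first data column
    .collect()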

You can select the statistics you need directly from the DataFrame:

from pyspark.sql.functions import mean, min, max
df.select([mean('uniform'), min('uniform'), max('uniform')]).show()
+------------------+-------------------+------------------+
|      AVG(uniform)|       MIN(uniform)|      MAX(uniform)|
+------------------+-------------------+------------------+
|0.5215336029384192|0.19657711634539565|0.9970412477032209|
+------------------+-------------------+------------------+
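
If you only need a handful of statistics, you can compute them as plain aggregates and skip describe() altogether. A Scala sketch of the same idea (df and the uniform column come from the example above); the stddev function has been available since Spark 1.6:

import org.apache.spark.sql.functions.{mean, min, max, stddev}

// Compute the aggregates directly instead of parsing describe()'s strings.
df.select(mean("uniform"), min("uniform"), max("uniform"), stddev("uniform")).show()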

You can also register the describe() output as a temporary table and query it:

val t = x.describe()
t.registerTempTable("dt")

%sql 
select * from dt
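
The %sql cell above is a notebook magic (Zeppelin/Databricks). Outside a notebook you can issue the same query programmatically; a minimal sketch against the Spark 1.6 API, assuming sqlContext is in scope:

// Spark 1.6: query the registered temp table through the SQLContext.
// (On Spark 2.x, prefer createOrReplaceTempView and spark.sql.)
sqlContext.sql("SELECT * FROM dt WHERE summary = 'stddev'").show()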

Another option is to use selectExpr(), which also runs through the optimizer, e.g. to obtain the minimum:

myDataFrame.selectExpr('MIN(count)').head()[0]
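
selectExpr() also accepts several expressions at once, so related statistics come back in a single job. A Scala sketch of that (myDataFrame and its count column are the answerer's; the backticks merely guard the reserved-looking column name):

import org.apache.spark.sql.Row

// Fetch several statistics in one pass; the results arrive as a single Row.
val Row(minCount, maxCount, stddevCount) =
    myDataFrame.selectExpr("min(`count`)", "max(`count`)", "stddev(`count`)").head()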
You can also filter the describe() output directly with the DataFrame API:
myDataFrame.describe().filter($"summary"==="stddev").show()

This worked quite nicely on Spark 2.3.0.
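
To pull the number itself out of that filtered row, note that describe() returns its statistics as strings. A hedged sketch ("count" here stands for whichever column you described):

import spark.implicits._ // for the $ column syntax (Spark 2.x)

// Extract the stddev value for one column and parse the string result.
val sd: Double = myDataFrame.describe()
    .filter($"summary" === "stddev")
    .select("count")
    .head()
    .getString(0)
    .toDouble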
