How to get a value from the Row object in Spark Dataframe?


Question


For:

averageCount = (wordCountsDF
                .groupBy().mean()).head()

I get

Row(avg(count)=1.6666666666666667)

but when I try:

averageCount = (wordCountsDF
                .groupBy().mean()).head().getFloat(0)

I get the following error:

AttributeError: getFloat
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>()
      1 # TODO: Replace with appropriate code
----> 2 averageCount = (wordCountsDF
      3                 .groupBy().mean()).head().getFloat(0)
      4
      5 print averageCount

/databricks/spark/python/pyspark/sql/types.py in __getattr__(self, item)
   1270                 raise AttributeError(item)
   1271             except ValueError:
-> 1272                 raise AttributeError(item)
   1273
   1274     def __setattr__(self, key, value):

AttributeError: getFloat

What am I doing wrong?


Answer 1:


I figured it out. This will return me the value:

averageCount = (wordCountsDF
                .groupBy().mean()).head()[0]



Answer 2:


This also works:

averageCount = (wordCountsDF
                .groupBy().mean('count').collect())[0][0]
print(averageCount)



Answer 3:


DataFrame rows inherit from namedtuple (from the collections library), so while you can index them like a traditional tuple the way you did above, you probably want to access them by field name. That is, after all, the point of named tuples, and it is also more robust to future changes. Like this:

averageCount = wordCountsDF.groupBy().mean().head()['avg(count)']
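As a plain-Python analogy, `Row`'s named-field access behaves much like `collections.namedtuple`. This stdlib illustration (not PySpark itself, and the `WordRow` name is made up here) shows why name-based access is the more robust choice:

```python
from collections import namedtuple

# A namedtuple behaves like a Row: positional and named access both work.
WordRow = namedtuple("WordRow", ["word", "count"])
r = WordRow(word="spark", count=3)

print(r[1])     # 3  (positional, like head()[0] above)
print(r.count)  # 3  (named access, robust to column reordering)
```

If columns are later added or reordered, positional indexes silently point at the wrong field, while named access either keeps working or fails loudly.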


Source: https://stackoverflow.com/questions/37999657/how-to-get-a-value-from-the-row-object-in-spark-dataframe
