In Pyspark HiveContext what is the equivalent of SQL OFFSET?

Submitted by 亡梦爱人 on 2019-12-20 06:15:23

Question


Or, more specifically: how can I process a large amount of data that does not fit into memory at once? With OFFSET I was trying to run hiveContext.sql("select ... limit 10 offset 10") while incrementing the offset to page through all of the data, but OFFSET does not seem to be valid in hiveContext. What is the usual alternative for achieving this?

For some context, the PySpark code starts with:

from pyspark.sql import HiveContext
hiveContext = HiveContext(sc)
hiveContext.sql("select ... limit 10 offset 10").show()

Answer 1:


Your code will look like this:

from pyspark.sql import HiveContext
hiveContext = HiveContext(sc)
hiveContext.sql("""
    WITH result AS (
        SELECT column1, column2, column3,
               ROW_NUMBER() OVER (ORDER BY columnname) AS RowNum
        FROM tablename)
    SELECT column1, column2, column3 FROM result
    WHERE RowNum >= offset_value AND RowNum < (offset_value + limit_value)
""").show()

Note: update the placeholders according to your requirements: column1/column2/column3, columnname, tablename, offset_value, limit_value.
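To retrieve the whole table in manageable chunks, you can wrap the query above in a loop that advances the offset by one page per iteration. The following is a minimal sketch of that pattern; my_table, id, and page_size are hypothetical names for illustration, not from the original answer.

from pyspark.sql import HiveContext

hiveContext = HiveContext(sc)

# Hypothetical names used only for illustration.
table = "my_table"
order_col = "id"        # any column with a stable, total ordering
page_size = 1000

offset = 1  # ROW_NUMBER() numbering starts at 1
while True:
    page = hiveContext.sql("""
        WITH result AS (
            SELECT t.*, ROW_NUMBER() OVER (ORDER BY {0}) AS RowNum
            FROM {1} t)
        SELECT * FROM result
        WHERE RowNum >= {2} AND RowNum < {3}
    """.format(order_col, table, offset, offset + page_size))
    rows = page.collect()  # only one page is pulled to the driver at a time
    if not rows:
        break              # past the last row: stop paging
    for row in rows:
        pass               # process each Row here
    offset += page_size

Note that each iteration re-runs the window function over the full table, so this is correct but not cheap; it trades repeated scans for bounded driver memory.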



Source: https://stackoverflow.com/questions/42560815/in-pyspark-hivecontext-what-is-the-equivalent-of-sql-offset
