Is this use case suitable for Spark (iterations)?

北恋 2020-12-18 01:55

I have a dataset (df1) of approximately 130 million rows. The dataset has roughly 90 columns, but for this case we only need four of them, so since the
