Does Spark respect Kudu's hash partitioning, similar to bucketed joins on Parquet tables?

Backend · Unresolved · 0 replies · 1029 views
萌比男神i · 2021-01-06 07:35

I'm trying out Kudu with Spark. I want to join two tables with the following schema:

# This table has around 1 million records
TABLE dimensions (
    id INT32          


        
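The question text is cut off at this point in the scrape. For intuition about what the question is asking, here is a minimal, self-contained sketch in plain Python (not Spark and not Kudu; the 4-bucket count and the modulo hash are illustrative assumptions, standing in for Kudu's `HASH (id) PARTITIONS n`): when both sides of a join are hash-partitioned on the join key with the same function and bucket count, each bucket pair can be joined independently, with no shuffle.

```python
# Plain-Python sketch of a bucketed (co-partitioned) join.
# Assumptions (not from the question): 4 buckets and `key % n` as the
# hash function, standing in for Kudu's HASH (id) PARTITIONS n.
N_BUCKETS = 4

def bucket_of(key, n_buckets=N_BUCKETS):
    return key % n_buckets

def partition(rows, key_fn, n_buckets=N_BUCKETS):
    """Hash-partition rows into n_buckets lists, as Kudu does on write."""
    buckets = [[] for _ in range(n_buckets)]
    for row in rows:
        buckets[bucket_of(key_fn(row), n_buckets)].append(row)
    return buckets

def bucketwise_join(left_buckets, right_buckets, left_key, right_key):
    """Join bucket i of the left side only against bucket i of the right.

    Because both sides used the same hash function and bucket count,
    equal keys can only meet inside the same bucket, so no cross-bucket
    data movement (i.e. no shuffle) is needed.
    """
    joined = []
    for lb, rb in zip(left_buckets, right_buckets):
        index = {}
        for r in rb:
            index.setdefault(right_key(r), []).append(r)
        for l in lb:
            for r in index.get(left_key(l), []):
                joined.append((l, r))
    return joined

# Usage: a tiny "dimensions" table joined to some fact rows on id.
dimensions = [(1, "a"), (2, "b"), (5, "e")]
facts = [(1, 10), (2, 20), (1, 11), (3, 30)]
result = bucketwise_join(
    partition(dimensions, lambda r: r[0]),
    partition(facts, lambda r: r[0]),
    lambda r: r[0],
    lambda r: r[0],
)
```

Whether Spark can actually exploit Kudu's layout this way depends on whether the kudu-spark connector reports its partitioning to Spark's planner; the sketch only shows why such a join would be shuffle-free if it did.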