Spark does not push the filter down to the PostgreSQL data source when reading data in parallel with lower-bound and upper-bound values

Frontend · Unresolved · 0 answers · 558 views
青春惊慌失措 2021-02-01 13:36

I am trying to read data from a PostgreSQL table in parallel. I am using a timestamp column as the partition column and providing values for the lower bound and upper bound.
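For context on why the bounds do not act as a filter: Spark uses `partitionColumn`, `lowerBound`, `upperBound`, and `numPartitions` only to split the range into per-partition `WHERE` clauses; the first and last partitions are unbounded, so rows outside the range are still read. The sketch below mimics that stride logic (simplified to numeric bounds; a timestamp column is handled internally as its numeric epoch value). The function name and details are illustrative, not Spark's actual API:

```python
def jdbc_partition_predicates(column, lower, upper, num_partitions):
    """Sketch of how Spark's JDBC source derives per-partition WHERE
    clauses from lowerBound/upperBound/numPartitions (simplified).
    Note the first and last predicates are open-ended: the bounds
    partition the data, they do NOT filter it."""
    stride = upper // num_partitions - lower // num_partitions
    predicates = []
    current = lower
    for i in range(num_partitions):
        # all partitions except the first have a lower bound
        lower_clause = f"{column} >= {current}" if i != 0 else None
        current += stride
        # all partitions except the last have an upper bound
        upper_clause = f"{column} < {current}" if i != num_partitions - 1 else None
        if lower_clause and upper_clause:
            predicates.append(f"{lower_clause} AND {upper_clause}")
        elif lower_clause:
            predicates.append(lower_clause)  # last partition: unbounded above
        else:
            # first partition: unbounded below, and it also picks up NULLs
            predicates.append(f"{upper_clause} OR {column} IS NULL")
    return predicates
```

For example, `jdbc_partition_predicates("ts", 0, 100, 4)` yields `ts < 25 OR ts IS NULL`, `ts >= 25 AND ts < 50`, `ts >= 50 AND ts < 75`, and `ts >= 75`. To actually restrict the rows read, an explicit filter (e.g. a `WHERE` clause in a subquery passed as `dbtable`, or a pushed-down `.filter(...)`) is needed in addition to the bounds.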
