Question: I am new to Spark and am creating a DataFrame from a Postgres database table via JDBC, using spark.read.jdbc. I am a bit confused about the partitioning options, in particular partitionColumn, lowerBound, upperBound, and numPartitions.

The documentation seems to indicate that these fields are optional. What happens if I don't provide them? How does Spark know how to partition the queries, and how efficient will that be?

If I DO specify these options, how do I ensure that the resulting partitions are reasonably balanced?
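As background for an answer: when all four options are given, Spark issues one query per partition, each with its own WHERE clause on partitionColumn. Below is a minimal pure-Python sketch of how those clauses are derived from lowerBound, upperBound, and numPartitions. It mirrors, in simplified form, the stride logic Spark uses internally; the function name and exact clause formatting here are illustrative, not Spark's actual API.

```python
def jdbc_partition_predicates(column, lower_bound, upper_bound, num_partitions):
    """Simplified sketch of how Spark turns the four JDBC partitioning
    options into one WHERE clause per partition. Illustrative only:
    the real logic lives inside Spark's JDBC data source."""
    # Width of each partition's value range (integer arithmetic, as in Spark).
    stride = upper_bound // num_partitions - lower_bound // num_partitions
    predicates = []
    current = lower_bound
    for i in range(num_partitions):
        # First partition has no lower bound; it also picks up NULLs.
        lower = f"{column} >= {current}" if i != 0 else None
        current += stride
        # Last partition has no upper bound, so rows above upperBound
        # are still read (bounds only shape the split, they don't filter).
        upper = f"{column} < {current}" if i != num_partitions - 1 else None
        if lower and upper:
            predicates.append(f"{lower} AND {upper}")
        elif lower:
            predicates.append(lower)
        else:
            predicates.append(f"{upper} OR {column} IS NULL")
    return predicates

print(jdbc_partition_predicates("id", 0, 1000, 4))
```

Note that the ranges are split evenly by value, not by row count: if the values of partitionColumn are skewed, the resulting partitions will be skewed too.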