Is there a way to limit the number of records fetched from a JDBC source using Spark SQL 2.2.0?
I am dealing with a task of moving (and transforming) a large number of records from a JDBC source.
This approach does not work well with relational databases. Spark's load function requests the full table, stores it in memory/disk, and only then runs the RDD transformations and actions.
If you want to do exploratory work, I suggest you persist the data on your first load. There are a few ways to do that. Take your code and do something like this:
// Load the source table through JDBC; Spark will pull the full table.
val sourceData = spark
  .read
  .format("jdbc")
  .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
  .option("url", jdbcSqlConnStr)
  .option("dbtable", sourceTableName)
  .load()

// Persist the result locally so later runs do not hit the database again.
sourceData.write
  .option("header", "true")
  .option("delimiter", ",")
  .format("csv")
  .save("your_path")
This saves your data to your local machine as CSV, a common format you can explore with practically any language. Every time you want to work with the data, read it from this file instead of hitting the database again. If you need real-time analysis or something similar, I suggest building a pipeline that applies your transformations and updates another storage layer; reloading everything from the database on every run is not a good approach.
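For example, a minimal sketch of reading the saved file back for exploration (it assumes the same spark session and the same placeholder path "your_path" used above; the cachedData name is just illustrative):

// Read the previously saved CSV instead of querying the database again.
val cachedData = spark
  .read
  .option("header", "true")
  .option("delimiter", ",")
  .format("csv")
  .load("your_path")

// Quick look at a few rows to explore the data.
cachedData.show(10)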