Spring Batch - How to read 5 million records faster?

Question


I'm developing a Spring Boot v2.2.5.RELEASE and Spring Batch example. In this example, I'm reading 5 million records with a JdbcPagingItemReader from a Postgres system in one data-center and writing them into MongoDB in another data-center.

This migration is too slow, and I need to improve the performance of this batch job. I'm not sure how to use partitioning, because the primary key of that table holds UUID values, so I can't think of using ColumnRangePartitioner. What is the best approach to implement this?

Approach-1:

@Bean
public JdbcPagingItemReader<Customer> customerPagingItemReader(){
    // reading database records using JDBC in a paging fashion
    JdbcPagingItemReader<Customer> reader = new JdbcPagingItemReader<>();
    reader.setDataSource(this.dataSource);
    reader.setFetchSize(1000);
    reader.setRowMapper(new CustomerRowMapper());

    // Sort Keys
    Map<String, Order> sortKeys = new HashMap<>();
    sortKeys.put("cust_id", Order.ASCENDING);

    // POSTGRES implementation of a PagingQueryProvider using database specific features.
    PostgresPagingQueryProvider queryProvider = new PostgresPagingQueryProvider();
    queryProvider.setSelectClause("*");
    queryProvider.setFromClause("from customer");
    queryProvider.setSortKeys(sortKeys);

    reader.setQueryProvider(queryProvider);

    return reader;
}

For the Mongo writer, I've used Spring Data Mongo in a custom writer.
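
Something along these lines (simplified sketch, not the exact code; it assumes a MongoTemplate bean pointing at the target data-center and that Customer maps to a Mongo collection):

@Bean
public ItemWriter<Customer> customerMongoWriter(MongoTemplate mongoTemplate) {
    // assumes org.springframework.batch.item.ItemWriter and
    // org.springframework.data.mongodb.core.MongoTemplate are imported
    return items -> {
        // insert the whole chunk as one batch into the target collection
        mongoTemplate.insert(new ArrayList<>(items), Customer.class);
    };
}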

Job details

@Bean
public Job multithreadedJob() {
    return this.jobBuilderFactory.get("multithreadedJob")
            .start(step1())
            .build();
}

@Bean
public Step step1() {
    // executes chunks of this step on 4 concurrent threads
    ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
    taskExecutor.setCorePoolSize(4);
    taskExecutor.setMaxPoolSize(4);
    taskExecutor.afterPropertiesSet();

    return this.stepBuilderFactory.get("step1")
            .<Customer, Customer>chunk(100)
            .reader(customerPagingItemReader())
            .writer(writer(null))
            .taskExecutor(taskExecutor)
            .build();
}

Approach-2: Would AsyncItemProcessor and AsyncItemWriter be a better option, given that I'd still have to read with the same JdbcPagingItemReader?
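
Something like this is what I have in mind (rough sketch; it assumes spring-batch-integration is on the classpath and the bean names are just placeholders):

// assumes org.springframework.batch.integration.async.AsyncItemProcessor
// and AsyncItemWriter from spring-batch-integration
@Bean
public AsyncItemProcessor<Customer, Customer> asyncItemProcessor(TaskExecutor taskExecutor) {
    AsyncItemProcessor<Customer, Customer> asyncProcessor = new AsyncItemProcessor<>();
    asyncProcessor.setDelegate(item -> item);     // pass-through; no transformation needed
    asyncProcessor.setTaskExecutor(taskExecutor); // each item is processed on the pool and returned as a Future
    return asyncProcessor;
}

@Bean
public AsyncItemWriter<Customer> asyncItemWriter(ItemWriter<Customer> customerMongoWriter) {
    AsyncItemWriter<Customer> asyncWriter = new AsyncItemWriter<>();
    asyncWriter.setDelegate(customerMongoWriter); // unwraps each Future and delegates the actual write
    return asyncWriter;
}

The step would then be declared as .<Customer, Future<Customer>>chunk(100) with the async processor and writer plugged in, but the reader is still the same single-threaded JdbcPagingItemReader, so this mainly helps when processing/writing is the bottleneck rather than reading.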

Approach-3: How can I use partitioning when the PK is a UUID?


Answer 1:


Partitioning (approach 3) is the best option IMO. If your primary key is a String, you can try to create a compound key (i.e. a combination of columns that makes up a unique key).
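
If a compound key isn't practical, another option is to hash the UUID into a fixed number of buckets and give each worker step its own bucket (a hash-bucket scheme rather than a range-based partitioner; the class name and hash expression below are only illustrative, nothing Spring Batch ships):

// minimal sketch of a hash-bucket Partitioner for a String/UUID key
// assumes org.springframework.batch.core.partition.support.Partitioner
// and org.springframework.batch.item.ExecutionContext are imported
public class UuidHashPartitioner implements Partitioner {

    @Override
    public Map<String, ExecutionContext> partition(int gridSize) {
        Map<String, ExecutionContext> partitions = new HashMap<>(gridSize);
        for (int i = 0; i < gridSize; i++) {
            ExecutionContext context = new ExecutionContext();
            context.putInt("partitionIndex", i); // which bucket this worker reads
            context.putInt("gridSize", gridSize);
            partitions.put("partition" + i, context);
        }
        return partitions;
    }
}

Each worker's step-scoped reader then restricts its query to its own bucket, for example with queryProvider.setWhereClause("where mod(abs(hashtext(cust_id::text)), " + gridSize + ") = " + partitionIndex) (hashtext is Postgres-specific; any deterministic hash expression works), and the manager step is built with .partitioner("workerStep", partitioner).step(workerStep()).gridSize(...).taskExecutor(...).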



Source: https://stackoverflow.com/questions/61827296/spring-batch-how-to-reads-5-million-records-in-faster-ways
