spring-batch

How to java-configure separate datasources for spring batch data and business data? Should I even do it?

Submitted by 北城以北 on 2019-11-27 03:50:47
My main job does only read operations, and the other one does some writing, but on the MyISAM engine, which ignores transactions, so I wouldn't necessarily require transaction support. How can I configure Spring Batch to have its own datasource for the JobRepository, separate from the one holding the business data? The initial single-datasource configuration is done like the following:

@Configuration
public class StandaloneInfrastructureConfiguration {

    @Autowired
    Environment env;

    @Bean
    public LocalContainerEntityManagerFactoryBean entityManagerFactory() {
        LocalContainerEntityManagerFactoryBean em =
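One common approach (a sketch assuming Spring Batch 4 with Java config, not the poster's actual setup) is to extend DefaultBatchConfigurer so the JobRepository gets a dedicated metadata datasource while business beans such as the entity manager keep the primary one. The metaDataSource qualifier below is a hypothetical bean name:

import javax.sql.DataSource;
import org.springframework.batch.core.configuration.annotation.DefaultBatchConfigurer;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableBatchProcessing
public class BatchMetadataConfiguration extends DefaultBatchConfigurer {

    // Hands the JobRepository its own datasource; the business
    // datasource stays wired into entityManagerFactory() as before.
    @Autowired
    public BatchMetadataConfiguration(@Qualifier("metaDataSource") DataSource metaDataSource) {
        super(metaDataSource);
    }
}

As to whether you should: separating the BATCH_* tables keeps job bookkeeping out of the business schema, and the repository keeps its transactional guarantees even when the business store (MyISAM here) offers none.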

Spring-Batch without persisting metadata to database?

Submitted by 最后都变了- on 2019-11-27 03:24:00
I want to create a spring-batch job, but I want to run it without any database persistence. Unfortunately, spring-batch requires writing metadata about the job cycles to a database somehow, thus forcing me to provide at least some kind of DB with a transaction manager and entity manager. Is it possible to prevent the metadata and run independently of transaction managers and databases? Update:

ERROR org.springframework.batch.core.job.AbstractJob: Encountered fatal error executing job
java.lang.NullPointerException
    at org.springframework.batch.core.repository.dao.MapJobExecutionDao.synchronizeStatus
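For Spring Batch 3.x/4.x (the version range a 2019 question would target), the usual answer is the map-based in-memory job repository backed by a ResourcelessTransactionManager, so no database is touched. A minimal sketch:

import org.springframework.batch.core.repository.JobRepository;
import org.springframework.batch.core.repository.support.MapJobRepositoryFactoryBean;
import org.springframework.batch.support.transaction.ResourcelessTransactionManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class InMemoryBatchConfiguration {

    @Bean
    public ResourcelessTransactionManager transactionManager() {
        // A no-op transaction manager; nothing is ever committed anywhere
        return new ResourcelessTransactionManager();
    }

    @Bean
    public JobRepository jobRepository(ResourcelessTransactionManager txManager) throws Exception {
        // Keeps all job metadata in in-memory maps instead of BATCH_* tables
        MapJobRepositoryFactoryBean factory = new MapJobRepositoryFactoryBean(txManager);
        factory.afterPropertiesSet();
        return (JobRepository) factory.getObject();
    }
}

The map-based DAOs are intended for testing and lose all restart data when the JVM exits; the NullPointerException in the update may indicate the map repository was only partially wired in.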

Spring-Batch Multi-line record Item Writer with variable number of lines per record

Submitted by 早过忘川 on 2019-11-27 02:57:23
I have the below requirement but am not able to decide on the approach to take: I need to write data to a fixed-format output file where each record spans multiple lines, as seen below:

000120992599999990000000000000009291100000000000000000000000010000
000000000000000000000006050052570009700000050000990494920000111100
ABCDE:WXYZ 0200
descriptiongoesheredescriptiongoesheredescriptiongoesher0200
descriptiongoesheredescriptiongoesheredescriptiongoesher0200
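One way to tackle this (a sketch, assuming a hypothetical Record class exposing the two fixed header lines, a type code, and a variable-length list of description lines) is a custom LineAggregator plugged into a FlatFileItemWriter, so each item aggregates to a multi-line string:

import java.util.stream.Collectors;
import org.springframework.batch.item.file.transform.LineAggregator;

public class MultiLineRecordAggregator implements LineAggregator<Record> {

    @Override
    public String aggregate(Record record) {
        // Emit the fixed header lines first, then one line per description
        // entry; FlatFileItemWriter appends the final record separator itself.
        String descriptions = record.getDescriptionLines().stream()
                .map(line -> line + record.getTypeCode())   // e.g. trailing "0200"
                .collect(Collectors.joining(System.lineSeparator()));
        return String.join(System.lineSeparator(),
                record.getHeaderLine1(),
                record.getHeaderLine2(),
                descriptions);
    }
}

Because the aggregator returns the whole block as a single string, a record can span any number of lines.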

Spring Batch - Reading a large flat file - Choices to scale horizontally?

Submitted by 浪尽此生 on 2019-11-27 02:29:57
I have started researching Spring Batch in the last hour or two and require your input. The problem: read one or more CSV files with 20 million records, perform minor processing, store the data in a DB, and also write the output to another flat file in the least time. Most important: I need to make choices that will scale horizontally in the future. Questions: Should I use remote chunking or partitioning to scale horizontally? Since the data is in a flat file, are both remote chunking and partitioning bad choices?
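For a single large flat file, a common pattern (sketched below, assuming the big file has already been split into smaller pieces such as split-*.csv) is partitioning driven by a MultiResourcePartitioner, which creates one partition per resource:

import java.io.IOException;
import org.springframework.batch.core.partition.support.MultiResourcePartitioner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.Resource;
import org.springframework.core.io.support.PathMatchingResourcePatternResolver;

@Configuration
public class PartitionConfiguration {

    @Bean
    public MultiResourcePartitioner partitioner() throws IOException {
        MultiResourcePartitioner partitioner = new MultiResourcePartitioner();
        // One partition (and hence one worker step execution) per split file
        Resource[] inputs = new PathMatchingResourcePatternResolver()
                .getResources("file:input/split-*.csv");   // assumed location of the pre-split files
        partitioner.setResources(inputs);
        return partitioner;
    }
}

Partitioning suits files because each worker gets its own reader over its own resource; remote chunking keeps a single reader, which tends to make that one reader the bottleneck.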

Spring batch - running multiple jobs in parallel

Submitted by 馋奶兔 on 2019-11-27 02:27:39
I am new to Spring Batch and couldn't figure out how to do this. Basically, I have a Spring file poller which runs every N minutes to look for files with certain names (e.g., A.txt and B.txt) in a certain directory. At any moment in time, there can be at most 2 files in this directory (A and B). Through a Spring Batch job, these two files will be processed and persisted to 2 different DB tables. The files are somewhat similar, so the same processor/writer is used. Right now, the way I set it up, every polling
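One way to let both files run concurrently (a sketch, not the poster's configuration) is to give the poller an asynchronous JobLauncher, so each launch returns immediately and runs on its own thread:

import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.batch.core.launch.support.SimpleJobLauncher;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.task.SimpleAsyncTaskExecutor;

@Configuration
public class LauncherConfiguration {

    @Bean
    public JobLauncher asyncJobLauncher(JobRepository jobRepository) throws Exception {
        SimpleJobLauncher launcher = new SimpleJobLauncher();
        launcher.setJobRepository(jobRepository);
        // Each run(job, params) call is handed off to a new thread, so the
        // A.txt job and the B.txt job can execute in parallel
        launcher.setTaskExecutor(new SimpleAsyncTaskExecutor());
        launcher.afterPropertiesSet();
        return launcher;
    }
}

Passing the file name as a JobParameter then gives each file its own JobInstance.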

Spring batch Job read from multiple sources

Submitted by 那年仲夏 on 2019-11-27 01:37:35
How can I read items from multiple databases? I already know that it is possible from files. The following example works for reading from multiple files:

...
<job id="readMultiFileJob" xmlns="http://www.springframework.org/schema/batch">
    <step id="step1">
        <tasklet>
            <chunk reader="multiResourceReader" writer="flatFileItemWriter" commit-interval="1" />
        </tasklet>
    </step>
</job>
...
<bean id="multiResourceReader" class="org.springframework.batch.item.file.MultiResourceItemReader">
    <property name="resources" value="file:csv/inputs/domain-*.csv" />
    <property name="delegate" ref="flatFileItemReader" />
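There is no database counterpart to MultiResourceItemReader, so the usual workaround is one reader per database, each in its own step, both feeding the same writer. A sketch in Java config (the dbOne qualifier, the Domain class, and the SQL are assumptions for illustration):

import javax.sql.DataSource;
import org.springframework.batch.item.database.JdbcCursorItemReader;
import org.springframework.batch.item.database.builder.JdbcCursorItemReaderBuilder;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.jdbc.core.BeanPropertyRowMapper;

public class MultiDbReaders {

    @Bean
    public JdbcCursorItemReader<Domain> readerDbOne(@Qualifier("dbOne") DataSource ds) {
        return new JdbcCursorItemReaderBuilder<Domain>()
                .name("readerDbOne")
                .dataSource(ds)
                .sql("SELECT id, name FROM domain")            // assumed query
                .rowMapper(new BeanPropertyRowMapper<>(Domain.class))
                .build();
    }

    // A second bean, readerDbTwo, identical except for a @Qualifier("dbTwo")
    // datasource, would back a second step in the same job.
}

JdbcCursorItemReaderBuilder is available from Spring Batch 4; on older versions the same reader is configured through setters.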

Need understanding of spring.handlers and spring.schemas

Submitted by 对着背影说爱祢 on 2019-11-27 01:21:43
I have some questions derived from a problem that I have already solved through this other question. However, I am still wondering about the root cause. My questions are as follows: What is the purpose of spring.handlers and spring.schemas? As I understand it, they are a way of telling the Spring Framework where to locate the XSDs so that everything is wired and loaded correctly. But... under what circumstances should I have those two files under the META-INF folder? In my other question linked above
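For reference, both files are plain properties files under META-INF. A sketch of what Spring Batch's own entries look like (exact version numbers vary by release):

# spring.handlers — maps an XML namespace URI to the NamespaceHandler that parses its elements
http\://www.springframework.org/schema/batch=org.springframework.batch.core.configuration.xml.CoreNamespaceHandler

# spring.schemas — maps the XSD URL in xsi:schemaLocation to a copy bundled inside the jar,
# so the schema is resolved from the classpath instead of over the network
http\://www.springframework.org/schema/batch/spring-batch-3.0.xsd=org/springframework/batch/core/configuration/xml/spring-batch-3.0.xsd

You only need your own copies of these files when defining a custom XML namespace; a common failure mode is fat-jar repackaging that overwrites these files from different jars instead of merging them.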

ApplicationContextException: Unable to start ServletWebServerApplicationContext due to missing ServletWebServerFactory bean

Submitted by *爱你&永不变心* on 2019-11-27 01:17:54
I have written a Spring Batch application using Spring Boot. When I try to run that application using the command line and classpath on my local system, it runs fine. However, when I try to run it on a Linux server, it gives me the following exception:

Unable to start web server; nested exception is org.springframework.context.ApplicationContextException: Unable to start ServletWebServerApplicationContext due to missing ServletWebServerFactory bean.

Below is the way I am running it: java
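The usual cause is web auto-configuration kicking in without an embedded server on the classpath. For a pure batch application, Boot can be told explicitly that no web server is needed; a sketch, assuming Boot 2.x and a hypothetical BatchApplication main class:

import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;

@SpringBootApplication
public class BatchApplication {

    public static void main(String[] args) {
        // Declares this a non-web application, so Boot never looks
        // for a ServletWebServerFactory bean
        new SpringApplicationBuilder(BatchApplication.class)
                .web(WebApplicationType.NONE)
                .run(args);
    }
}

The same effect is available without code via the property spring.main.web-application-type=none; a classpath that differs between the two machines would explain why only the server launch fails.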

How does Spring Boot run batch jobs?

Submitted by 佐手、 on 2019-11-27 00:51:50
I followed this sample for Spring Batch with Boot. When you run the main method, the job is executed. This way I can't figure out how one can control the job execution, for example how to schedule a job, get access to the job execution, or set job parameters. I tried to register my own JobLauncher:

@Bean
public JobLauncher jobLauncher(JobRepository jobRepo) {
    SimpleJobLauncher simpleJobLauncher = new SimpleJobLauncher();
    simpleJobLauncher.setJobRepository(jobRepo);
    return simpleJobLauncher;
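The common pattern (a sketch, assuming Boot 2.x and a job bean named myJob) is to disable the automatic run with spring.batch.job.enabled=false and launch explicitly, which gives you a place to pass JobParameters or hook in a scheduler:

import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class JobRunner {

    private final JobLauncher jobLauncher;
    private final Job myJob;

    public JobRunner(JobLauncher jobLauncher, Job myJob) {
        this.jobLauncher = jobLauncher;
        this.myJob = myJob;
    }

    @Scheduled(cron = "0 0 * * * *")   // assumed schedule: top of every hour
    public void runJob() throws Exception {
        JobParameters params = new JobParametersBuilder()
                .addLong("run.id", System.currentTimeMillis())  // unique value so each run is a new JobInstance
                .toJobParameters();
        // run(...) returns a JobExecution you can inspect for status
        jobLauncher.run(myJob, params);
    }
}

@EnableScheduling must be present on a configuration class for the cron trigger to fire.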

Why is Spring's jdbcTemplate.batchUpdate() so slow?

Submitted by 丶灬走出姿态 on 2019-11-27 00:37:42
I'm trying to find the fastest way to do batch inserts. I tried to insert several batches with jdbcTemplate.update(String sql), where the SQL was built by a StringBuilder and looked like:

INSERT INTO TABLE(x, y, i) VALUES(1,2,3), (1,2,3), ... , (1,2,3)

Batch size was exactly 1000, and I inserted nearly 100 batches. I checked the time using a StopWatch and found the insert time: min[38ms], avg[50ms], max[190ms] per batch. I was glad, but I wanted to make my code better. After that, I tried to use
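Two things usually dominate here. First, the MySQL driver (an assumption that MySQL is in play) only turns a JDBC batch into a single multi-row INSERT when rewriteBatchedStatements=true is set on the connection URL. Second, jdbcTemplate.batchUpdate with a prepared statement avoids re-parsing the SQL for every row. A sketch, with rows and Row as hypothetical placeholders:

// JDBC URL assumption: jdbc:mysql://localhost:3306/test?rewriteBatchedStatements=true

import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;
import org.springframework.jdbc.core.BatchPreparedStatementSetter;
import org.springframework.jdbc.core.JdbcTemplate;

public class BatchInserter {

    public void insert(JdbcTemplate jdbcTemplate, List<Row> rows) {
        jdbcTemplate.batchUpdate(
                "INSERT INTO TABLE(x, y, i) VALUES (?, ?, ?)",
                new BatchPreparedStatementSetter() {
                    @Override
                    public void setValues(PreparedStatement ps, int index) throws SQLException {
                        // Bind one row per addBatch() call
                        Row row = rows.get(index);
                        ps.setInt(1, row.x);
                        ps.setInt(2, row.y);
                        ps.setInt(3, row.i);
                    }

                    @Override
                    public int getBatchSize() {
                        return rows.size();
                    }
                });
    }

    // Hypothetical row holder matching the (x, y, i) columns
    public static class Row {
        public int x, y, i;
    }
}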