Question
Steps:
- Read "Select x, y, z from TABLE_1" from Database1 into a ResultSet.
- Pass the ResultSet to a Writer.
- Write all records returned by the ResultSet to TABLE_2 in Database2.
Requirements:
- Do not create any unused Objects to hold the data after reading from the ResultSet. (i.e. no Table1.class)
- Use as much pre-built functionality as possible from the Spring Batch framework.
- No DB Link.
NOTE: Class names for me to reference are enough to get me on the right path.
Answer 1:
Assuming you use JdbcPagingItemReader and JdbcBatchItemWriter, you can combine them with:
- the ColumnMapRowMapper from spring-jdbc (maps each row to a Map, so no domain class is needed)
- a self-implemented ItemSqlParameterSourceProvider
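A minimal wiring of those pieces might look like the following sketch (untested; the builder style requires Spring Batch 4+, and the datasource parameters, page size, and sort key are assumptions):

```java
import java.util.Map;

import javax.sql.DataSource;

import org.springframework.batch.item.database.JdbcBatchItemWriter;
import org.springframework.batch.item.database.JdbcPagingItemReader;
import org.springframework.batch.item.database.Order;
import org.springframework.batch.item.database.builder.JdbcBatchItemWriterBuilder;
import org.springframework.batch.item.database.builder.JdbcPagingItemReaderBuilder;
import org.springframework.jdbc.core.ColumnMapRowMapper;
import org.springframework.jdbc.core.namedparam.MapSqlParameterSource;

public class CopyTableJobConfig {

    // Reader: pages through TABLE_1 in Database1 and maps each row to a
    // Map<String, Object>, so no Table1 class is needed.
    public JdbcPagingItemReader<Map<String, Object>> reader(DataSource database1) {
        return new JdbcPagingItemReaderBuilder<Map<String, Object>>()
                .name("table1Reader")
                .dataSource(database1)
                .selectClause("SELECT x, y, z")
                .fromClause("FROM TABLE_1")
                .sortKeys(Map.of("x", Order.ASCENDING)) // paging needs a unique sort key
                .rowMapper(new ColumnMapRowMapper())
                .pageSize(100)
                .build();
    }

    // Writer: inserts into TABLE_2 in Database2; the ItemSqlParameterSourceProvider
    // turns each row map into named SQL parameters.
    public JdbcBatchItemWriter<Map<String, Object>> writer(DataSource database2) {
        return new JdbcBatchItemWriterBuilder<Map<String, Object>>()
                .dataSource(database2)
                .sql("INSERT INTO TABLE_2 (x, y, z) VALUES (:x, :y, :z)")
                .itemSqlParameterSourceProvider(MapSqlParameterSource::new)
                .build();
    }
}
```

Because the reader emits plain maps and the writer consumes named parameters built from those maps, no intermediate Table1 class ever exists.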
Answer 2:
Your wish to save on memory allocations is understandable, but think twice about whether this maximum optimization is worth its side effects and problems.
First of all, if you just want to read rows from table A and write them to table B without any transformation of the data, then Spring Batch is not the best choice. You might still want Spring Batch in this scenario if, for example, you want to retry writes (using RetryTemplate) when an exception occurs, or you want to skip certain exceptions (e.g. DataIntegrityViolationException, to ignore duplicate entries).
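As an illustration of that skip scenario, a fault-tolerant step could be configured roughly like this (a sketch in the Spring Batch 5 builder style; the step name, chunk size, and skip limit are assumptions):

```java
import java.util.Map;

import org.springframework.batch.core.Step;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.batch.core.step.builder.StepBuilder;
import org.springframework.batch.item.ItemReader;
import org.springframework.batch.item.ItemWriter;
import org.springframework.dao.DataIntegrityViolationException;
import org.springframework.transaction.PlatformTransactionManager;

public class FaultTolerantStepConfig {

    // A chunk-oriented step that skips duplicate-entry violations
    // instead of failing the whole job.
    public Step copyStep(JobRepository jobRepository,
                         PlatformTransactionManager txManager,
                         ItemReader<Map<String, Object>> reader,
                         ItemWriter<Map<String, Object>> writer) {
        return new StepBuilder("copyStep", jobRepository)
                .<Map<String, Object>, Map<String, Object>>chunk(100, txManager)
                .reader(reader)
                .writer(writer)
                .faultTolerant()
                .skip(DataIntegrityViolationException.class) // ignore duplicates
                .skipLimit(1000)                             // assumed upper bound
                .build();
    }
}
```

This kind of retry/skip handling is the main thing Spring Batch buys you over a plain JDBC copy loop.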
So what you can do (though it is not a very good approach) is use flyweight objects: the object you return to the framework is always the same instance, but it is refilled with new contents on each read (the code is untested, AS IS):
package org.epo.lifesciences.chepo.service;

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Statement;

import javax.sql.DataSource;

import org.springframework.batch.item.ExecutionContext;
import org.springframework.batch.item.ItemStreamException;
import org.springframework.batch.item.support.AbstractItemStreamItemReader;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.datasource.DataSourceUtils;
import org.springframework.jdbc.support.JdbcUtils;

public class FlyweightItemReader extends AbstractItemStreamItemReader<Object[]> {

    @Autowired
    private DataSource dataSource;

    /*
     * State objects
     */
    private Connection con;
    private Statement stmt;
    private ResultSet rs;
    private Object[] row;

    /**
     * @see org.springframework.batch.item.ItemStreamSupport#open(org.springframework.batch.item.ExecutionContext)
     */
    @Override
    public void open(ExecutionContext executionContext) throws ItemStreamException {
        row = null;
        con = DataSourceUtils.getConnection(dataSource);
        try {
            stmt = con.createStatement();
            rs = stmt.executeQuery("some sql");
        }
        catch (SQLException e) {
            JdbcUtils.closeStatement(stmt);
            DataSourceUtils.releaseConnection(con, dataSource);
            throw new ItemStreamException(e);
        }
    }

    /**
     * @see org.springframework.batch.item.ItemStreamSupport#close()
     */
    @Override
    public void close() {
        // Close in reverse order of acquisition; release (rather than close)
        // the connection so transaction-bound connections are handled correctly.
        JdbcUtils.closeResultSet(rs);
        JdbcUtils.closeStatement(stmt);
        DataSourceUtils.releaseConnection(con, dataSource);
    }

    /**
     * @see org.springframework.batch.item.ItemReader#read()
     */
    @Override
    public Object[] read() throws SQLException {
        if (!rs.next()) {
            // End of result set is reached:
            return null;
        }
        ResultSetMetaData rsmd = rs.getMetaData();
        int columnCount = rsmd.getColumnCount();
        if (row == null && columnCount > 0) {
            // Create the flyweight once:
            row = new Object[columnCount];
        }
        // Copy all column values into the flyweight:
        for (int i = 1; i <= columnCount; i++) {
            row[i - 1] = JdbcUtils.getResultSetValue(rs, i);
        }
        return row;
    }
}
Be aware of the limitations of this approach: it works only if your chunk (commit) size is 1 (otherwise you end up with N references to the same object in one batch) and only if your reader has prototype scope (because it is stateful).
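To complete the picture, the writing side for those Object[] items could set positional JDBC parameters directly, again with no intermediate class (an untested sketch; the target table and column names are assumptions):

```java
import javax.sql.DataSource;

import org.springframework.batch.item.database.JdbcBatchItemWriter;
import org.springframework.batch.item.database.builder.JdbcBatchItemWriterBuilder;

public class FlyweightItemWriterConfig {

    // Writes each Object[] row into TABLE_2 using positional ? placeholders;
    // the ItemPreparedStatementSetter copies array slots into the statement.
    public JdbcBatchItemWriter<Object[]> writer(DataSource database2) {
        return new JdbcBatchItemWriterBuilder<Object[]>()
                .dataSource(database2)
                .sql("INSERT INTO TABLE_2 (x, y, z) VALUES (?, ?, ?)")
                .itemPreparedStatementSetter((item, ps) -> {
                    // JDBC parameters are 1-based, array indices 0-based.
                    for (int i = 0; i < item.length; i++) {
                        ps.setObject(i + 1, item[i]);
                    }
                })
                .build();
    }
}
```

Note that with a flyweight reader this writer must still run with a chunk size of 1, for the reason given above.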
Source: https://stackoverflow.com/questions/8144052/how-to-write-from-one-database-to-another-using-spring-batch-w-out-translating-t