I have created a MappingsBean class where all the columns of the CSV file are specified. Next, I parse XML files and create a list of mapping beans. Then I write that data into a CSV file.
Try something like below:
private static class CustomMappingStrategy<T> extends ColumnPositionMappingStrategy<T> {
    String[] header;

    public CustomMappingStrategy(String[] cols) {
        header = cols;
    }

    @Override
    public String[] generateHeader(T bean) throws CsvRequiredFieldEmptyException {
        return header;
    }
}
Then use it as follows:
String[] columns = new String[]{"Name", "Age", "Company", "Salary"};
CustomMappingStrategy<Employee> mappingStrategy = new CustomMappingStrategy<Employee>(columns);
Here, columns are the column headers you want, and Employee is your bean class.
In the latest version, the solution from @Sebast26 no longer works. However, the basic idea is still very good. Here is a working solution with v5.0:
import com.opencsv.bean.BeanField;
import com.opencsv.bean.ColumnPositionMappingStrategy;
import com.opencsv.bean.CsvBindByName;
import com.opencsv.exceptions.CsvRequiredFieldEmptyException;
import org.apache.commons.lang3.StringUtils;
class CustomMappingStrategy<T> extends ColumnPositionMappingStrategy<T> {

    @Override
    public String[] generateHeader(T bean) throws CsvRequiredFieldEmptyException {
        final int numColumns = getFieldMap().values().size();
        super.generateHeader(bean);
        String[] header = new String[numColumns];
        BeanField beanField;
        for (int i = 0; i < numColumns; i++) {
            beanField = findField(i);
            String columnHeaderName = extractHeaderName(beanField);
            header[i] = columnHeaderName;
        }
        return header;
    }

    private String extractHeaderName(final BeanField beanField) {
        if (beanField == null || beanField.getField() == null
                || beanField.getField().getDeclaredAnnotationsByType(CsvBindByName.class).length == 0) {
            return StringUtils.EMPTY;
        }
        final CsvBindByName bindByNameAnnotation = beanField.getField().getDeclaredAnnotationsByType(CsvBindByName.class)[0];
        return bindByNameAnnotation.column();
    }
}
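The extractHeaderName lookup above is plain reflection and can be sketched independently of opencsv. Here is a minimal, self-contained version of the same pattern; the @BindByName annotation and the Employee class below are hypothetical stand-ins for opencsv's @CsvBindByName and your bean:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

public class HeaderNameDemo {
    // Stand-in for opencsv's @CsvBindByName, used only for this sketch
    @Target(ElementType.FIELD)
    @Retention(RetentionPolicy.RUNTIME)
    @interface BindByName {
        String column();
    }

    static class Employee {
        @BindByName(column = "id")
        Long id;
        String notes; // no annotation -> empty header name
    }

    // Same logic as extractHeaderName(): read column(), fall back to ""
    static String headerName(String fieldName) {
        try {
            Field field = Employee.class.getDeclaredField(fieldName);
            BindByName[] ann = field.getDeclaredAnnotationsByType(BindByName.class);
            return ann.length == 0 ? "" : ann[0].column();
        } catch (NoSuchFieldException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(headerName("id"));    // prints "id"
        System.out.println(headerName("notes")); // prints an empty line
    }
}
```

This is why unannotated fields come out as empty header cells in the strategy above: the annotation lookup returns an empty array and the method falls back to an empty string.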
And the model looks like this:
@CsvBindByName(column = "id")
@CsvBindByPosition(position = 0)
private Long id;
@CsvBindByName(column = "name")
@CsvBindByPosition(position = 1)
private String name;
And my generation helper looks something like this:
public static <T extends AbstractCsv> String createCsv(List<T> data, Class<T> beanClazz) {
    CustomMappingStrategy<T> mappingStrategy = new CustomMappingStrategy<>();
    mappingStrategy.setType(beanClazz);

    StringWriter writer = new StringWriter();
    String csv = "";
    try {
        StatefulBeanToCsv<T> sbc = new StatefulBeanToCsvBuilder<T>(writer)
                .withSeparator(';')
                .withMappingStrategy(mappingStrategy)
                .build();
        sbc.write(data);
        csv = writer.toString();
    } catch (CsvRequiredFieldEmptyException e) {
        // TODO add some logging...
    } catch (CsvDataTypeMismatchException e) {
        // TODO add some logging...
    } finally {
        try {
            writer.close();
        } catch (IOException e) {
            // closing a StringWriter cannot actually fail; safe to ignore
        }
    }
    return csv;
}
Here is another version for 5.2, because I had a problem with the @CsvCustomBindByName annotation when I tried the answers above.
I defined a custom annotation:
@Target(ElementType.FIELD)
@Inherited
@Retention(RetentionPolicy.RUNTIME)
public @interface CsvPosition {
    int position();
}
and a custom mapping strategy:
public class CustomMappingStrategy<T> extends HeaderColumnNameMappingStrategy<T> {
    private final Field[] fields;

    public CustomMappingStrategy(Class<T> clazz) {
        fields = clazz.getDeclaredFields();
        Arrays.sort(fields, (f1, f2) -> {
            CsvPosition position1 = f1.getAnnotation(CsvPosition.class);
            CsvPosition position2 = f2.getAnnotation(CsvPosition.class);
            return Integer.compare(position1.position(), position2.position());
        });
    }

    @Override
    public String[] generateHeader(T bean) throws CsvRequiredFieldEmptyException {
        String[] header = new String[fields.length];
        for (Field f : fields) {
            CsvPosition position = f.getAnnotation(CsvPosition.class);
            header[position.position() - 1] = getName(f);
        }
        headerIndex.initializeHeaderIndex(header);
        return header;
    }

    private String getName(Field f) {
        CsvBindByName csvBindByName = f.getAnnotation(CsvBindByName.class);
        CsvCustomBindByName csvCustomBindByName = f.getAnnotation(CsvCustomBindByName.class);
        return csvCustomBindByName != null
                ? csvCustomBindByName.column() == null || csvCustomBindByName.column().isEmpty() ? f.getName() : csvCustomBindByName.column()
                : csvBindByName.column() == null || csvBindByName.column().isEmpty() ? f.getName() : csvBindByName.column();
    }
}
My POJO beans are annotated like this:
public class Record {
    @CsvBindByName(required = true)
    @CsvPosition(position = 1)
    Long id;

    @CsvCustomBindByName(required = true, converter = BoolanCSVField.class)
    @CsvPosition(position = 2)
    Boolean deleted;
    ...
}
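The position-to-slot logic in generateHeader above is independent of opencsv and can be tried out self-contained. The sketch below uses a local copy of the custom @CsvPosition annotation and falls back to plain field names for the header cells (standing in for the getName lookup); the Record class is illustrative:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.Arrays;

public class PositionHeaderDemo {
    // Local copy of the custom @CsvPosition annotation defined above
    @Target(ElementType.FIELD)
    @Retention(RetentionPolicy.RUNTIME)
    @interface CsvPosition {
        int position();
    }

    // Declaration order deliberately differs from the CSV position order
    static class Record {
        @CsvPosition(position = 2)
        Boolean deleted;
        @CsvPosition(position = 1)
        Long id;
    }

    // Mirrors generateHeader(): each field name lands in slot position - 1
    static String[] header(Class<?> clazz) {
        Field[] fields = clazz.getDeclaredFields();
        String[] header = new String[fields.length];
        for (Field f : fields) {
            CsvPosition p = f.getAnnotation(CsvPosition.class);
            header[p.position() - 1] = f.getName();
        }
        return header;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(header(Record.class))); // [id, deleted]
    }
}
```

Note that a field without @CsvPosition would make getAnnotation return null and throw a NullPointerException, so every exported field must carry the annotation; the same holds for the strategy above.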
and the final code for the writer:
CustomMappingStrategy<Record> mappingStrategy = new CustomMappingStrategy<>(Record.class);
mappingStrategy.setType(Record.class);
StatefulBeanToCsv<Record> beanToCsv = new StatefulBeanToCsvBuilder<Record>(writer)
        .withApplyQuotesToAll(false)
        .withOrderedResults(true)
        .withMappingStrategy(mappingStrategy)
        .build();
I hope it will be helpful to someone.
Here is the code to add @CsvBindByPosition-based ordering on top of the default HeaderColumnNameMappingStrategy. Tested with the latest version, 5.2.
The approach is to store two maps: headerPositionMap stores the position of each column so it can be used in setColumnOrderOnWrite, and columnMap lets us look up the actual column name rather than the capitalized one.
public class HeaderColumnNameWithPositionMappingStrategy<T> extends HeaderColumnNameMappingStrategy<T> {
    protected Map<String, String> columnMap;

    @Override
    public void setType(Class<? extends T> type) throws CsvBadConverterException {
        super.setType(type);
        columnMap = new HashMap<>(this.getFieldMap().values().size());
        Map<String, Integer> headerPositionMap = new HashMap<>(this.getFieldMap().values().size());
        for (Field field : type.getDeclaredFields()) {
            if (field.isAnnotationPresent(CsvBindByPosition.class) && field.isAnnotationPresent(CsvBindByName.class)) {
                int position = field.getAnnotation(CsvBindByPosition.class).position();
                String colName = "".equals(field.getAnnotation(CsvBindByName.class).column())
                        ? field.getName()
                        : field.getAnnotation(CsvBindByName.class).column();
                headerPositionMap.put(colName.toUpperCase().trim(), position);
                columnMap.put(colName.toUpperCase().trim(), colName);
            }
        }
        super.setColumnOrderOnWrite((String o1, String o2) -> {
            if (!headerPositionMap.containsKey(o1) || !headerPositionMap.containsKey(o2)) {
                return 0;
            }
            return headerPositionMap.get(o1) - headerPositionMap.get(o2);
        });
    }

    @Override
    public String[] generateHeader(T bean) throws CsvRequiredFieldEmptyException {
        String[] headersRaw = super.generateHeader(bean);
        return Arrays.stream(headersRaw).map(h -> columnMap.get(h)).toArray(String[]::new);
    }
}
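The two-map idea can be shown detached from opencsv: sort the capitalized header keys by their stored position, then translate them back to the original-case names. A minimal sketch with illustrative column names:

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.HashMap;
import java.util.Map;

public class TwoMapOrderingDemo {
    // Returns headers in write order, restored to original case
    static String[] orderedHeader() {
        // headerPositionMap: UPPERCASE key -> write position
        Map<String, Integer> headerPositionMap = new HashMap<>();
        headerPositionMap.put("ID", 0);
        headerPositionMap.put("NAME", 1);
        // columnMap: UPPERCASE key -> actual column name
        Map<String, String> columnMap = new HashMap<>();
        columnMap.put("ID", "id");
        columnMap.put("NAME", "name");

        // The header-by-name strategy hands back capitalized names...
        String[] raw = {"NAME", "ID"};
        // ...so sort them by position, then map back to the real names
        Arrays.sort(raw, Comparator.comparingInt(headerPositionMap::get));
        return Arrays.stream(raw).map(columnMap::get).toArray(String[]::new);
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(orderedHeader())); // [id, name]
    }
}
```

In the real strategy the sorting happens inside opencsv via setColumnOrderOnWrite and the case restoration in generateHeader; this just makes the data flow visible.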
Great thread. I don't have any annotations in my POJO; this is how I did it, based on all the previous answers. Hope it helps others.
OpenCSV version: 5.0
List<FinVendor> readVendors = getFromMethod();
String[] fields = {"id", "recordNumber", "finVendorIdTb", "finVenTechIdTb", "finShortNameTb", "finVenName1Tb", "finVenName2Tb"};
String[] csvHeader = {"Id#", "Shiv Record Number", "Shiv Vendor Id", "Shiva Tech Id#", "finShortNameTb", "finVenName1Tb", "finVenName2Tb"};

CustomMappingStrategy<FinVendor> mappingStrategy = new CustomMappingStrategy<>(csvHeader); // custom header, irrespective of POJO field names
mappingStrategy.setType(FinVendor.class);
mappingStrategy.setColumnMapping(fields); // POJO mapping fields
StatefulBeanToCsv<FinVendor> beanToCsv = new StatefulBeanToCsvBuilder<FinVendor>(writer)
        .withQuotechar(CSVWriter.NO_QUOTE_CHARACTER)
        .withMappingStrategy(mappingStrategy)
        .build();
beanToCsv.write(readVendors);
// custom mapping class, as mentioned in the thread by many users
private static class CustomMappingStrategy<T> extends ColumnPositionMappingStrategy<T> {
    String[] header;

    public CustomMappingStrategy(String[] cols) {
        header = cols;
    }

    @Override
    public String[] generateHeader(T bean) throws CsvRequiredFieldEmptyException {
        super.generateHeader(bean);
        return header;
    }
}
Output:
Id# Shiv Record Number Shiv Vendor Id Fin Tech Id# finShortNameTb finVenName1Tb finVenName2Tb finVenDefaultLocTb
1 VEN00053 678 33316025986 THE ssOHIO S_2 THE UNIVERSITY CHK Test
2 VEN02277 1217 3044374205 Fe3 MECHA_1 FR3INC EFT-1
3 VEN03118 1310 30234484121 PE333PECTUS_1 PER332CTUS AR EFT-1 Test
I wanted to achieve bi-directional import/export: to be able to import the generated CSV back into a POJO and vice versa.
I was not able to use @CsvBindByPosition for this, because in that case ColumnPositionMappingStrategy is selected automatically, and per the documentation this strategy requires that the file NOT have a header. So I used HeaderColumnNameMappingStrategy together with mappingStrategy.setColumnOrderOnWrite(Comparator<String> writeOrder).
CsvUtils to read/write CSV:
import com.opencsv.CSVWriter;
import com.opencsv.bean.*;
import org.springframework.web.multipart.MultipartFile;
import java.io.*;
import java.util.List;
public class CsvUtils {

    private CsvUtils() {
    }

    public static <T> String convertToCsv(List<T> entitiesList, MappingStrategy<T> mappingStrategy) throws Exception {
        try (Writer writer = new StringWriter()) {
            StatefulBeanToCsv<T> beanToCsv = new StatefulBeanToCsvBuilder<T>(writer)
                    .withMappingStrategy(mappingStrategy)
                    .withQuotechar(CSVWriter.NO_QUOTE_CHARACTER)
                    .build();
            beanToCsv.write(entitiesList);
            return writer.toString();
        }
    }

    public static <T> List<T> convertFromCsv(MultipartFile file, Class<T> clazz) throws IOException {
        try (Reader reader = new BufferedReader(new InputStreamReader(file.getInputStream()))) {
            CsvToBean<T> csvToBean = new CsvToBeanBuilder<T>(reader).withType(clazz).build();
            return csvToBean.parse();
        }
    }
}
POJO for import/export
public class LocalBusinessTrainingPairDTO {
    //this is used for CSV columns ordering on exporting LocalBusinessTrainingPairs
    public static final String[] FIELDS_ORDER = {"leftId", "leftName", "rightId", "rightName"};

    @CsvBindByName(column = "leftId")
    private int leftId;

    @CsvBindByName(column = "leftName")
    private String leftName;

    @CsvBindByName(column = "rightId")
    private int rightId;

    @CsvBindByName(column = "rightName")
    private String rightName;

    // getters/setters omitted, do not forget to add them
}
Custom comparator for predefined String ordering:
public class OrderedComparatorIgnoringCase implements Comparator<String> {
    private List<String> predefinedOrder;

    public OrderedComparatorIgnoringCase(String[] predefinedOrder) {
        this.predefinedOrder = new ArrayList<>();
        for (String item : predefinedOrder) {
            this.predefinedOrder.add(item.toLowerCase());
        }
    }

    @Override
    public int compare(String o1, String o2) {
        return predefinedOrder.indexOf(o1.toLowerCase()) - predefinedOrder.indexOf(o2.toLowerCase());
    }
}
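To see what the comparator does on write: opencsv passes the capitalized header names to it, and they come back in the predefined order. A small self-contained check, replicating the comparator's index-based logic inline:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class PredefinedOrderDemo {
    // Same logic as OrderedComparatorIgnoringCase: compare headers by their
    // index in a lower-cased predefined list
    static String[] sortHeaders(String[] headers, String[] predefinedOrder) {
        List<String> order = new ArrayList<>();
        for (String item : predefinedOrder) {
            order.add(item.toLowerCase());
        }
        String[] sorted = headers.clone();
        Arrays.sort(sorted, (o1, o2) ->
                order.indexOf(o1.toLowerCase()) - order.indexOf(o2.toLowerCase()));
        return sorted;
    }

    public static void main(String[] args) {
        String[] sorted = sortHeaders(
                new String[]{"RIGHTNAME", "LEFTID", "RIGHTID", "LEFTNAME"},
                new String[]{"leftId", "leftName", "rightId", "rightName"});
        System.out.println(Arrays.toString(sorted)); // [LEFTID, LEFTNAME, RIGHTID, RIGHTNAME]
    }
}
```

One caveat: a header missing from the predefined order gets indexOf() == -1 and therefore sorts before all known headers.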
Ordered writing for POJO (answer to initial question)
public static void main(String[] args) throws Exception {
    List<LocalBusinessTrainingPairDTO> localBusinessTrainingPairsDTO = new ArrayList<>();

    LocalBusinessTrainingPairDTO localBusinessTrainingPairDTO = new LocalBusinessTrainingPairDTO();
    localBusinessTrainingPairDTO.setLeftId(1);
    localBusinessTrainingPairDTO.setLeftName("leftName");
    localBusinessTrainingPairDTO.setRightId(2);
    localBusinessTrainingPairDTO.setRightName("rightName");
    localBusinessTrainingPairsDTO.add(localBusinessTrainingPairDTO);

    //Creating HeaderColumnNameMappingStrategy
    HeaderColumnNameMappingStrategy<LocalBusinessTrainingPairDTO> mappingStrategy = new HeaderColumnNameMappingStrategy<>();
    mappingStrategy.setType(LocalBusinessTrainingPairDTO.class);
    //Setting predefined order using String comparator
    mappingStrategy.setColumnOrderOnWrite(new OrderedComparatorIgnoringCase(LocalBusinessTrainingPairDTO.FIELDS_ORDER));

    String csv = convertToCsv(localBusinessTrainingPairsDTO, mappingStrategy);
    System.out.println(csv);
}
Read exported CSV back to POJO (addition to original answer)
Important: CSV can be unordered, as we are still using binding by name:
public static void main(String[] args) throws Exception {
    //omitted code from writing
    String csv = convertToCsv(localBusinessTrainingPairsDTO, mappingStrategy);

    //Exported CSV should be compatible for further import
    File temp = File.createTempFile("tempTrainingPairs", ".csv");
    temp.deleteOnExit();
    BufferedWriter bw = new BufferedWriter(new FileWriter(temp));
    bw.write(csv);
    bw.close();

    MultipartFile multipartFile = new MockMultipartFile("tempTrainingPairs.csv", new FileInputStream(temp));
    List<LocalBusinessTrainingPairDTO> localBusinessTrainingPairDTOList = convertFromCsv(multipartFile, LocalBusinessTrainingPairDTO.class);
}