There is more than one way to do it.
- DBD::CSV - Use SQL to read and write CSV files.
- Text::CSV - Build and parse CSV files a line at a time. This is pretty much the gold standard for CSV manipulation.
- POE::Filter::CSV - Provides a CSV filter for your POE component IO.
- Data::Tabular::Dumper::CSV - Dump a table directly into a CSV file (objects with the same interface can dump to XML or MS Excel files).
There are many others on CPAN, too.
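To give a flavor of the SQL route, here is a minimal DBD::CSV sketch. It treats a directory of CSV files as database tables; the table name `table1` and the column names are arbitrary choices for illustration:

```perl
use strict;
use warnings;
use DBI;

# Treat the current directory as a "database" of CSV files.
my $dbh = DBI->connect( "dbi:CSV:", undef, undef,
    { f_dir => ".", RaiseError => 1 } );

# Creates a CSV file named "table1" in f_dir, with a header row.
$dbh->do("CREATE TABLE table1 (a CHAR(1), b CHAR(1), c CHAR(1))");

my $sth = $dbh->prepare("INSERT INTO table1 VALUES (?, ?, ?)");
$sth->execute(@$_) for ( [ 1, 2, 3 ], [ 2, 4, 6 ] );

$dbh->disconnect;
```

The same handle works for SELECTs, which is what makes this approach easy to swap out later for a real database connection.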
Of course, you could ignore all this and just use a loop, a join and a print:
    my @table = (
        [ 'a', 'b', 'c' ],
        [ 1, 2, 3 ],
        [ 2, 4, 6 ],
    );

    for my $row ( @table ) {
        print join( ',', @$row ), "\n";
    }
Just run your script and redirect the output to a file, and you are done. Or you could open a file handle and print directly to a file.
But then you won't handle multi-line fields, or embedded commas, or any of the many other oddities that can pop up in this sort of data processing task. So be careful with this approach.
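Text::CSV handles that quoting for you. A minimal sketch, with made-up field values chosen to include an embedded comma and newline:

```perl
use strict;
use warnings;
use Text::CSV;

my $csv = Text::CSV->new( { binary => 1, eol => "\n" } )
    or die Text::CSV->error_diag;

my @table = (
    [ 'name',        'note'            ],
    [ 'Smith, John', "spans\ntwo lines" ],  # comma and newline inside fields
);

for my $row (@table) {
    # combine() quotes and escapes fields as needed.
    $csv->combine(@$row) or die $csv->error_diag;
    print $csv->string;
}
```

Fields that contain separators, quotes, or (with `binary => 1`) newlines come out correctly quoted; plain fields are left bare.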
My recommendations:
- If you are building a POE application, use POE::Filter::CSV.
- If you are a SQL/DBI monster and dream in SQL, or may need to replace CSV output with a real database connection, use DBD::CSV.
- If you have simple data and are cobbling together a cruddy little throw-away script to format some data for your spreadsheet to digest, use print and join. Do this only for code with an expected life span of under two hours.
- If you want to dump a blob of data into a CSV or XML file, use Data::Tabular::Dumper::CSV.
- If you are writing something that needs to be stable, maintainable, and fast, and you need maximum control over input and output, use Text::CSV. (Note that POE::Filter::CSV, Data::Tabular::Dumper::CSV, and DBD::CSV all use Text::CSV or Text::CSV_XS for back-end processing.)