I am currently using this type of SQL on MySQL to insert multiple rows of values in one single query:
INSERT INTO `tbl` (`key1`,`key2`) VALUES ('r1v1','r1v2'), ('r2v1','r2v2'), ('r3v1','r3v2');
For what it is worth, I have seen a lot of users recommend iterating through INSERT statements instead of building them out as a single string query, as the selected answer does. I decided to run a simple test with just two fields and a very basic insert statement:
<?php
// $db is assumed to be an existing PDO connection to the MySQL database
$fname = 'J';
$lname = 'M';
$time_start = microtime(true);
$stmt = $db->prepare('INSERT INTO `table` (FirstName, LastName) VALUES (:fname, :lname)');
for ($i = 1; $i <= 10; $i++) {
    $stmt->bindParam(':fname', $fname);
    $stmt->bindParam(':lname', $lname);
    $stmt->execute();
    $fname .= 'O';
    $lname .= 'A';
}
$time_end = microtime(true);
$time = $time_end - $time_start;
echo "Completed in ". $time ." seconds
";
$fname2 = 'J';
$lname2 = 'M';
$time_start2 = microtime(true);
$qry = 'INSERT INTO `table` (FirstName, LastName) VALUES ';
$qry .= "(?,?), ";
$qry .= "(?,?), ";
$qry .= "(?,?), ";
$qry .= "(?,?), ";
$qry .= "(?,?), ";
$qry .= "(?,?), ";
$qry .= "(?,?), ";
$qry .= "(?,?), ";
$qry .= "(?,?), ";
$qry .= "(?,?)";
$stmt2 = $db->prepare($qry);
$values = array();
for ($j = 1; $j <= 10; $j++) {
    $values2 = array($fname2, $lname2);
    $values = array_merge($values, $values2);
    $fname2 .= 'O';
    $lname2 .= 'A';
}
$stmt2->execute($values);
$time_end2 = microtime(true);
$time2 = $time_end2 - $time_start2;
echo "Completed in ". $time2 ." seconds
";
?>
While both runs completed in a matter of milliseconds, the latter (single string) query was consistently 8 times faster or more. Built out to reflect an import of thousands of rows across many more columns, the difference could be enormous.
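To use that at import scale, the multi-row statement has to be built for however many rows are in the batch rather than hard-coded to ten placeholder groups. Here is a minimal sketch of that idea; it assumes the same $db PDO connection and FirstName/LastName columns as above, and a hypothetical $rows array of [first, last] pairs (the function name and chunk size are mine, not from the answer above):

<?php
// Sketch only: $db is assumed to be an existing PDO connection, and $rows a
// hypothetical list of [FirstName, LastName] pairs collected for the import.
function insertRows(PDO $db, array $rows, int $chunkSize = 500): void
{
    foreach (array_chunk($rows, $chunkSize) as $chunk) {
        // One "(?,?)" group per row in this chunk, joined into a single VALUES list
        $placeholders = implode(', ', array_fill(0, count($chunk), '(?,?)'));
        $stmt = $db->prepare('INSERT INTO `table` (FirstName, LastName) VALUES ' . $placeholders);
        // Flatten the chunk into one flat parameter array matching the placeholders
        $stmt->execute(array_merge(...$chunk));
    }
}
?>

The chunk size of 500 is only an assumption; in practice it would be tuned against MySQL's max_allowed_packet and, when native (non-emulated) prepares are used, the 65,535-placeholder cap per statement.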