I have an application which potentially does thousands of inserts into a SQL Server 2005 database. If an insert fails for any reason (foreign key constraint, field length, etc.) …
Difficult to help without seeing code. I assume from your description that you are using a transaction and committing after every N inserts, which improves performance over committing each insert individually, provided N is not too big.
But the downside is: if an insert fails, all the other inserts in the current batch of N will be lost when you roll back the transaction.
In general you should dispose of a transaction before closing the connection (disposing rolls the transaction back if it hasn't been committed). The usual pattern looks something like the following:
using (SqlConnection connection = ...)
{
    connection.Open();
    using (SqlTransaction transaction = connection.BeginTransaction())
    {
        ... do stuff ...
        transaction.Commit(); // commit if all is successful
    } // transaction.Dispose is called here and rolls back if not committed
} // connection.Dispose is called here
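If you want a single failed insert to lose only its own batch rather than abort the whole run, you can commit every N rows and catch failures per batch. This is only a sketch of that idea; the table and column names (`Items`, `Name`) and the helper method are hypothetical, not from your code:

```csharp
using System;
using System.Collections.Generic;
using System.Data.SqlClient;

static class BatchInserter
{
    // Commits every batchSize inserts; on a SQL error, rolls back and logs
    // only the current batch, then continues with the next one.
    public static void InsertInBatches(string connectionString,
                                       IList<string> names, int batchSize)
    {
        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            connection.Open();
            for (int start = 0; start < names.Count; start += batchSize)
            {
                using (SqlTransaction transaction = connection.BeginTransaction())
                {
                    try
                    {
                        int end = Math.Min(start + batchSize, names.Count);
                        for (int i = start; i < end; i++)
                        {
                            // The command must be enlisted in the transaction,
                            // here via the SqlCommand constructor overload.
                            using (SqlCommand command = new SqlCommand(
                                "INSERT INTO Items (Name) VALUES (@name)",
                                connection, transaction))
                            {
                                command.Parameters.AddWithValue("@name", names[i]);
                                command.ExecuteNonQuery();
                            }
                        }
                        transaction.Commit(); // only this batch is made permanent
                    }
                    catch (SqlException ex)
                    {
                        transaction.Rollback(); // only this batch is lost
                        Console.WriteLine(
                            "Batch starting at row {0} failed: {1}",
                            start, ex.Message);
                    }
                }
            }
        }
    }
}
```

Smaller batches limit how much work one bad row can undo, at the cost of more commits; you can tune batchSize to trade the two off.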
Please post code if you need more help.
Well, first, IMO you shouldn't expect your app to deal with "hard" errors like these. Your application should understand what the business rules are and account for them. DON'T force your database to be the business-rule or constraint cop. It should only be a data-rule cop, and even then it should report problems gracefully to the caller via RETURN values and similar mechanisms. Schema-level errors shouldn't be forced up that far.
On to your question: I suspect, without seeing any of your code, that you are trying to commit after an error has occurred that you don't know about yet. This is the reasoning behind my first statement: if you try to trap the error the database raises without having your application understand and participate in these rules, you're giving yourself a headache.
Try this. First, insert rows that won't violate any database constraint and commit; see what happens. Then insert some records that will fail, attempt the commit, and see if you get your lovely error. Third, run the failing case again, issue an explicit rollback, and see whether that succeeds.
Just some ideas. In summary, I think the problem comes from not trapping "hard" database errors gracefully and expecting the front end to deal with them. Again, in my opinion: don't do it, it's a mess. Your app should overlap in knowledge with the rules enforced on the back end. Those constraints exist as a safety net, to catch problems during testing and the occasional bug that surfaces later, such as a forgotten lookup against a foreign key table, which you then handle properly in the front end.
Hope it helps.
My problem was similar, and it turned out I was doing it to myself. I was running a bunch of scripts in a VS2008 project to create stored procedures, and in one of the procs I used a transaction. The script executed the creation of the proc inside a transaction, rather than including the transaction code as part of the procedure body. So when the script reached the end without an error, it committed its own transaction. The .NET code was also opening and then committing a transaction, and the zombie effect appeared when it tried to close the transaction that had already been closed inside the SQL script. I removed the transaction from the SQL script, relying on the transaction opened and committed in the .NET code, and this solved the problem.