I have triggers that manipulate and insert a lot of data into a change-tracking table for audit purposes on every insert, update, and delete.
These articles show how to use Service Broker for asynchronous auditing and should be useful:
Centralized Asynchronous Auditing with Service Broker
Service Broker goodies: Cross Server Many to One (One to Many) scenario and How to troubleshoot it
You can't make the trigger run asynchronously, but you could have the trigger synchronously send a message to a SQL Service Broker queue. The queue can then be processed asynchronously by a stored procedure.
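As a rough sketch of that pattern (all object names here are hypothetical, and Service Broker must already be enabled on the database): the trigger serializes the affected rows to XML and sends them to a queue, and an activation procedure drains the queue and writes the audit rows.

-- Broker plumbing: message type, contract, queues, and services.
CREATE MESSAGE TYPE AuditMessage VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT AuditContract (AuditMessage SENT BY INITIATOR);
CREATE QUEUE AuditInitiatorQueue;
CREATE QUEUE AuditTargetQueue;
CREATE SERVICE AuditInitiatorService ON QUEUE AuditInitiatorQueue (AuditContract);
CREATE SERVICE AuditTargetService ON QUEUE AuditTargetQueue (AuditContract);
GO

-- The trigger only serializes the before/after images and sends a message;
-- the expensive audit work is deferred to the activation procedure.
CREATE TRIGGER trg_MyTable_Audit ON dbo.MyTable
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- Triggers also fire for statements that touch zero rows; skip those.
    IF NOT EXISTS (SELECT 1 FROM inserted) AND NOT EXISTS (SELECT 1 FROM deleted)
        RETURN;

    DECLARE @payload XML =
        (SELECT
            (SELECT * FROM inserted FOR XML PATH('row'), ROOT('inserted'), TYPE),
            (SELECT * FROM deleted  FOR XML PATH('row'), ROOT('deleted'),  TYPE)
         FOR XML PATH('audit'), TYPE);

    DECLARE @handle UNIQUEIDENTIFIER;
    BEGIN DIALOG CONVERSATION @handle
        FROM SERVICE AuditInitiatorService
        TO SERVICE 'AuditTargetService'
        ON CONTRACT AuditContract
        WITH ENCRYPTION = OFF;

    SEND ON CONVERSATION @handle MESSAGE TYPE AuditMessage (@payload);
END;
GO

-- Activation procedure: drains the queue asynchronously.
CREATE PROCEDURE dbo.ProcessAuditQueue
AS
BEGIN
    DECLARE @handle UNIQUEIDENTIFIER, @msgType SYSNAME, @body XML;

    WHILE 1 = 1
    BEGIN
        WAITFOR (
            RECEIVE TOP (1)
                @handle  = conversation_handle,
                @msgType = message_type_name,
                @body    = CAST(message_body AS XML)
            FROM AuditTargetQueue
        ), TIMEOUT 1000;

        IF @@ROWCOUNT = 0 BREAK;

        IF @msgType = 'AuditMessage'
            INSERT INTO dbo.ChangeTrackingTable (AuditXml, AuditedAt)  -- assumed columns
            VALUES (@body, SYSUTCDATETIME());

        END CONVERSATION @handle;
    END
END;
GO

ALTER QUEUE AuditTargetQueue
WITH ACTIVATION (
    STATUS = ON,
    PROCEDURE_NAME = dbo.ProcessAuditQueue,
    MAX_QUEUE_READERS = 1,
    EXECUTE AS OWNER
);

This is essentially the fire-and-forget pattern; a production setup also has to drain the EndDialog/Error messages that come back to the initiator queue, which the articles above go into.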
Not that I know of, but are you inserting values into the audit table that also exist in the base table? If so, you could consider tracking just the changes. An insert would then record the change time, user, etc., and a bunch of NULLs (in effect, no before value). An update would record the change time, user, etc., plus the before value of the changed column only. A delete would record the change time, etc., and all of the before values.
Also, do you have an audit table per base table or one audit table for the whole database? The latter can more easily result in waits, as each transaction tries to write to the one table.
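A minimal sketch of such a "changes only" audit table (all names hypothetical); one narrow row per changed column replaces a full copy of the base row:

CREATE TABLE dbo.AuditLog (
    AuditId    BIGINT IDENTITY PRIMARY KEY,
    TableName  SYSNAME       NOT NULL,
    KeyValue   NVARCHAR(100) NOT NULL,  -- primary key of the changed row
    ColumnName SYSNAME       NULL,      -- NULL when the whole row is inserted/deleted
    OldValue   NVARCHAR(MAX) NULL,      -- NULL on insert: there is no before value
    Operation  CHAR(1)       NOT NULL,  -- 'I', 'U' or 'D'
    ChangedAt  DATETIME2     NOT NULL DEFAULT SYSUTCDATETIME(),
    ChangedBy  SYSNAME       NOT NULL DEFAULT SUSER_SNAME()
);

On update, the trigger would insert one row per changed column (comparing inserted to deleted); on delete, one row per column with its final value.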
I wonder if you could tag a record for change tracking by inserting into a "to process" table, including who made the change, etc.
Then another process could come along and copy the rest of the data on a regular basis.
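A minimal sketch of that tagging approach, with hypothetical names: the trigger writes only the key and the change context, and a scheduled job copies the full row data later. One caveat: for updates and deletes the before image would still have to be captured at trigger time, since the live row will have changed by the time the job runs.

-- Narrow "to process" queue table.
CREATE TABLE dbo.AuditToProcess (
    QueueId   BIGINT IDENTITY PRIMARY KEY,
    TableName SYSNAME   NOT NULL,
    RowId     INT       NOT NULL,
    Operation CHAR(1)   NOT NULL,               -- 'I', 'U' or 'D'
    ChangedBy SYSNAME   NOT NULL DEFAULT SUSER_SNAME(),
    ChangedAt DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
);

-- Inside the trigger: a single narrow insert instead of copying every column.
INSERT INTO dbo.AuditToProcess (TableName, RowId, Operation)
SELECT 'dbo.MyTable', i.Id, 'U'
FROM inserted AS i;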
SQL Server 2014 introduced a very interesting feature called Delayed Durability. If you can tolerate losing a few rows in case of a catastrophic event, like a server crash, you could really boost your performance in scenarios like yours.
Delayed transaction durability is accomplished using asynchronous log writes to disk. Transaction log records are kept in a buffer and written to disk when the buffer fills or a buffer flushing event takes place. Delayed transaction durability reduces both latency and contention within the system.
The database containing the table must first be altered to allow delayed durability.
ALTER DATABASE dbname SET DELAYED_DURABILITY = ALLOWED
Then you could control the durability on a per-transaction basis.
BEGIN TRAN
INSERT INTO ChangeTrackingTable SELECT * FROM inserted
COMMIT TRAN WITH (DELAYED_DURABILITY = ON)
The transaction will be committed as fully durable if it is cross-database, so this will only work if your audit table is located in the same database as the trigger.
It is also possible to alter the database with FORCED instead of ALLOWED. This causes all transactions in the database to become delayed durable.
ALTER DATABASE dbname SET DELAYED_DURABILITY = FORCED
For delayed durability, there is no difference between an unexpected shutdown and an expected shutdown/restart of SQL Server. As with catastrophic events, you should plan for data loss. In a planned shutdown/restart some transactions that have not been written to disk may first be saved to disk, but you should not count on it. Plan as though a shutdown/restart, whether planned or unplanned, loses data the same way a catastrophic event does.
This strange behavior will hopefully be addressed in a future release, but until then it may be wise to automatically execute the sys.sp_flush_log procedure whenever SQL Server is restarting or shutting down.
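For example, a shutdown job or script could run the system procedure that flushes all pending delayed-durable log records to disk:

-- Hardens every in-memory log buffer of committed delayed-durable transactions.
EXEC sys.sp_flush_log;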
Create history table(s). While updating (or deleting or inserting into) the main table, insert the old values of the record (the deleted pseudo-table in the trigger) into the history table; some additional info is needed too (timestamp, operation type, maybe user context). The new values are kept in the live table anyway.
This way the triggers run fast(er), and you can shift the slow operations to the log-viewer procedure.
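A minimal sketch of such a trigger, assuming a base table dbo.MyTable (Id, Col1, Col2) and a history table with matching columns plus the audit info; inserts need no history row, since the new values already live in the main table:

CREATE TRIGGER trg_MyTable_History ON dbo.MyTable
AFTER UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- Old values come from the deleted pseudo-table; new values stay in the live table.
    INSERT INTO dbo.MyTableHistory (Id, Col1, Col2, Operation, ChangedAt, ChangedBy)
    SELECT d.Id, d.Col1, d.Col2,
           CASE WHEN EXISTS (SELECT 1 FROM inserted) THEN 'U' ELSE 'D' END,
           SYSUTCDATETIME(), SUSER_SNAME()
    FROM deleted AS d;
END;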