MS Access Batch Update via ADO.Net and COM Interoperability


The reason is that the DAO driver sits much closer to the MS Access database engine than the ODBC driver does.

The DAO methods AddNew and Update delegate directly to their MS Access equivalents; at no point is any SQL generated, so there is no SQL for MS Access to parse.
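
For illustration, a minimal sketch of that direct path via COM interop, assuming a reference to the Microsoft DAO / Access database engine object library (the file path, table and column names here are placeholders):

    using Dao = Microsoft.Office.Interop.Access.Dao;

    class DaoDirectWrite
    {
        static void Main()
        {
            var engine = new Dao.DBEngine();
            Dao.Database db = engine.OpenDatabase(@"C:\data\app.mdb");

            // dbOpenTable opens the table directly -- no SQL is generated or parsed
            Dao.Recordset rs = db.OpenRecordset("Orders",
                Dao.RecordsetTypeEnum.dbOpenTable);

            for (int i = 0; i < 1000; i++)
            {
                rs.AddNew();                      // start a new row buffer
                rs.Fields["Quantity"].Value = i;  // write straight to the field
                rs.Update();                      // hand the row to the engine
            }

            rs.Close();
            db.Close();
        }
    }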

On the other hand, the DataAdapter code generates an UPDATE statement for each row. Each statement is passed to ODBC, which hands it to the MS Access driver, which either

  1. independently parses the SQL and issues AddNew and Update commands to the Access database, or
  2. passes the SQL to MS Access, which isn't optimised for parsing SQL and which, once the statement is parsed, ends up translating it into AddNew and Update commands anyway.

Either way, your time is spent generating SQL and then having something interpret that SQL, whereas the DAO approach bypasses SQL generation and interpretation entirely and goes straight to the metal.
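
For contrast, here is roughly what the ADO.NET side looks like. The OdbcCommandBuilder generates the UPDATE command that the adapter then runs once per changed row; the connection string and table are again placeholders, and ID is assumed to be the primary key (the builder needs one):

    using System.Data;
    using System.Data.Odbc;

    class OdbcPerRowUpdate
    {
        static void Main()
        {
            using (var conn = new OdbcConnection(
                @"Driver={Microsoft Access Driver (*.mdb)};Dbq=C:\data\app.mdb;"))
            {
                var adapter = new OdbcDataAdapter("SELECT ID, Quantity FROM Orders", conn);
                var builder = new OdbcCommandBuilder(adapter); // supplies the generated UPDATE SQL

                var table = new DataTable();
                adapter.Fill(table);

                foreach (DataRow row in table.Rows)
                    row["Quantity"] = (int)row["Quantity"] + 1;

                // Update() issues the generated UPDATE once per changed row,
                // each of which the driver has to parse and execute.
                adapter.Update(table);
            }
        }
    }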

One way around this is to create your own "database service" running on the machine that hosts the Access database. It marshals your selects and updates and could communicate with the client over .NET Remoting or WCF (HTTP, or whatever). This is a lot of work, though, and changes your application logic considerably.
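
As a rough sketch of that idea, the service contract might look something like this (all names hypothetical; the implementation would open the database locally and only ship requests and results across the wire):

    using System.Data;
    using System.ServiceModel;

    [ServiceContract]
    public interface IAccessDbService
    {
        [OperationContract]
        DataTable Select(string sql);   // runs the SELECT next to the database file

        [OperationContract]
        int Execute(string sql);        // runs an INSERT/UPDATE/DELETE locally
    }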

Figuring out the correct name for the database driver (e.g. Jet, or whatever) is left as an exercise for the reader.

I know this question is old, but the answer may help someone still struggling with this.

There is another method to consider. Since both the source and target connection strings are known, the source tables can be linked into the target Access database via DAO or ADOX (I know, ADOX is off-topic here), possibly with some connection-string parsing along the way.
The data in the linked tables can then be transferred fairly quickly by issuing statements like the following on a DAO or OleDb connection to the target Access database:

SELECT * INTO Table1 FROM _LINKED_Table1
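
A minimal sketch of the whole round trip via DAO interop, with hypothetical paths and table names (the Connect string of a linked table is simply ";DATABASE=" followed by the source file path):

    using Dao = Microsoft.Office.Interop.Access.Dao;

    class LinkAndCopy
    {
        static void Main()
        {
            var engine = new Dao.DBEngine();
            Dao.Database target = engine.OpenDatabase(@"C:\data\target.mdb");

            // Attach the source table as a linked table in the target database
            Dao.TableDef link = target.CreateTableDef("_LINKED_Table1");
            link.Connect = @";DATABASE=C:\data\source.mdb";
            link.SourceTableName = "Table1";
            target.TableDefs.Append(link);

            // The Access engine copies all rows in a single statement
            target.Execute("SELECT * INTO Table1 FROM _LINKED_Table1");

            target.TableDefs.Delete("_LINKED_Table1"); // drop the link afterwards
            target.Close();
        }
    }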

Some drawbacks (please point out anything I missed):

  • the source table must contain a primary key
  • primary keys and indexes have to be re-created by examining the source Indexes schema (see the sketch after this list)
  • transfer progress cannot easily be reported while the query is running
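
The index re-creation in the second point is mostly schema plumbing. A hypothetical sketch over OleDb follows; the restriction array and column names are the standard Indexes schema rowset layout, and multi-column indexes (which span several rows per INDEX_NAME) are glossed over here:

    using System.Data;
    using System.Data.OleDb;

    static class IndexCopier
    {
        public static void CopyIndexes(OleDbConnection source, OleDbConnection target, string table)
        {
            // Restriction 5 of the Indexes rowset is TABLE_NAME
            DataTable idx = source.GetOleDbSchemaTable(
                OleDbSchemaGuid.Indexes, new object[] { null, null, null, null, table });

            foreach (DataRow row in idx.Rows)
            {
                string name = (string)row["INDEX_NAME"];
                string column = (string)row["COLUMN_NAME"];
                bool primary = (bool)row["PRIMARY_KEY"];

                // Replay the index on the target as DDL
                string ddl = primary
                    ? $"ALTER TABLE [{table}] ADD CONSTRAINT [{name}] PRIMARY KEY ([{column}])"
                    : $"CREATE INDEX [{name}] ON [{table}] ([{column}])";

                using (var cmd = new OleDbCommand(ddl, target))
                    cmd.ExecuteNonQuery();
            }
        }
    }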

Some advantages (please point out anything I missed):

  • only having to examine the source Tables schema, if all user tables are to be copied
  • not having to examine the source Columns schema to generate column definitions for CREATE TABLE statements
    (for instance, try getting the AUTONUMBER / IDENTITY info reliably out of an OleDb schema, i.e. without assumptions about combinations of column values and flag bits based on examining other schemas)
  • not having to generate vast numbers of INSERT INTO ... VALUES ... statements while accounting for AUTONUMBER / IDENTITY columns, or otherwise running a database operation for each row in your code
  • being able to specify criteria to filter the transferred records (a one-line example follows this list)
  • not having to worry about text, date or time columns, or how to delimit, escape or format their values in queries, except when they are used in query criteria
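
For the filtering advantage, criteria go straight into the same statement; a hypothetical example, reusing the target connection from the earlier sketch and an assumed Archived column:

    target.Execute("SELECT * INTO Table1 FROM _LINKED_Table1 WHERE Archived = False");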

This method was employed in a production project and turned out to be the quickest, for me at least. :o)
