In our production database, the following pseudo-code SQL batch query runs every hour:
INSERT INTO TemporaryTable
  (SELECT * FROM HighlyContentiousTableInInnoDb);
The reason for the lock (a read lock) is to protect your reading transaction from reading "dirty" data that a parallel transaction might currently be writing. Most DBMSs offer a setting that lets users set and revoke read and write locks manually. This might be interesting for you if reading dirty data is not a problem in your case.
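For example, assuming MySQL/InnoDB as in the question, a sketch of what relaxing the isolation level for just the batch session might look like (the tables are the ones from the question; whether dirty reads are acceptable is for you to decide):

```sql
-- Run the batch read at READ UNCOMMITTED so the SELECT takes no
-- shared locks (but may see uncommitted "dirty" rows).
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

INSERT INTO TemporaryTable
  (SELECT * FROM HighlyContentiousTableInInnoDb);

-- Restore the default isolation level for this session.
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
```

Since the setting is scoped to the session, the continuous read/write traffic in other connections keeps its normal isolation guarantees.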
I don't think there is a fully safe way to read from a table without any locks in a DBMS that runs multiple concurrent transactions.
But the following is some brainstorming:
If space is not an issue, you could run two instances of the same table: HighlyContentiousTableInInnoDb2 for your constant read/write transactions and HighlyContentiousTableInInnoDb2_shadow for your batched access.
Maybe you can fill the shadow table automatically via a trigger or stored routine inside your DBMS, which is faster and cleaner than adding an extra write statement everywhere in your application.
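A minimal sketch of such a trigger, assuming MySQL syntax and two illustrative columns (id, payload) that are not from the original question; matching AFTER UPDATE and AFTER DELETE triggers would be needed to keep the copy fully in sync:

```sql
-- Hypothetical trigger: mirror every insert into the shadow table.
-- Column names (id, payload) are placeholders for your real schema.
CREATE TRIGGER sync_shadow_insert
AFTER INSERT ON HighlyContentiousTableInInnoDb2
FOR EACH ROW
  INSERT INTO HighlyContentiousTableInInnoDb2_shadow (id, payload)
  VALUES (NEW.id, NEW.payload);
```

The batch job would then read only from the shadow table, so its long-running SELECT never contends with the hot table's writers.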
Another idea starts from the question: do all transactions need to access the whole table? If not, you could use views to restrict each workload to the columns it needs. If the continuous access and your batched access are disjoint with respect to columns, they might not lock each other!
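As a sketch of the column-disjoint idea, again with hypothetical column names (status for the hot read/write path, payload for the batch path) that stand in for your real schema:

```sql
-- Hypothetical view for the constant read/write traffic.
CREATE VIEW ContentiousHotColumns AS
  SELECT id, status FROM HighlyContentiousTableInInnoDb2;

-- Hypothetical view for the hourly batch read.
CREATE VIEW ContentiousBatchColumns AS
  SELECT id, payload FROM HighlyContentiousTableInInnoDb2;
```

Note that this only helps if your DBMS actually locks at a granularity finer than the row for these accesses, so check your engine's documentation before relying on it.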