Is there a way to optimize this further, or should I just be satisfied that it takes 9 seconds to count 11M rows?
Since >'2009-10-11 15:33:22' matches most of the records, I would suggest doing a reverse match with <'2009-10-11 15:33:22' instead (MySQL does less work because far fewer rows are involved), then subtracting that count from the total row count. Note that TABLE_ROWS in information_schema is only an estimate for InnoDB tables, so this gives an exact answer only for storage engines like MyISAM that maintain an exact row count:
SELECT
    t.TABLE_ROWS -
    (SELECT COUNT(*) FROM record_updates WHERE add_date < '2009-10-11 15:33:22')
FROM information_schema.tables t
WHERE t.table_schema = 'marctoxctransformation'
  AND t.table_name = 'record_updates';
You can combine this with a scripting language (such as a bash shell script) to make the calculation a bit smarter, e.g. run the execution plan (EXPLAIN) first to work out which comparison will touch fewer rows, and then issue the cheaper query.
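A minimal sketch of that idea in shell, assuming the table and column names from the question (record_updates, add_date), MySQL 5.7's EXPLAIN column layout, and hypothetical helper names of my own:

```shell
#!/bin/sh
# Sketch: consult EXPLAIN's row estimate for both directions of the
# comparison, then run whichever query is expected to scan fewer rows.
# Table/column names come from the question; everything else is assumed.
CUTOFF='2009-10-11 15:33:22'
DB=marctoxctransformation

# Estimated rows MySQL would examine for a given operator ('<' or '>=').
# EXPLAIN's "rows" column position is an assumption based on MySQL 5.7's
# tab-separated output; adjust the awk field for other versions.
estimated_rows() {
  mysql -N -B -e "EXPLAIN SELECT COUNT(*) FROM record_updates
                  WHERE add_date $1 '$CUTOFF'" "$DB" \
    | awk -F'\t' '{print $10}'
}

# Pure decision helper: given the two estimates, pick the cheaper direction.
# Usage: pick_direction <rows_for_ge> <rows_for_lt>  ->  "reverse" | "normal"
pick_direction() {
  if [ "$2" -lt "$1" ]; then echo reverse; else echo normal; fi
}

count_since_cutoff() {
  ge=$(estimated_rows '>=')
  lt=$(estimated_rows '<')
  if [ "$(pick_direction "$ge" "$lt")" = reverse ]; then
    # Count the smaller side and subtract it from the (approximate) total.
    mysql -N -B -e "SELECT t.TABLE_ROWS -
        (SELECT COUNT(*) FROM record_updates
         WHERE add_date < '$CUTOFF')
      FROM information_schema.tables t
      WHERE t.table_schema = '$DB'
        AND t.table_name = 'record_updates'"
  else
    mysql -N -B -e "SELECT COUNT(*) FROM record_updates
      WHERE add_date >= '$CUTOFF'" "$DB"
  fi
}
```

One subtlety: the subtraction counts rows where add_date >= the cutoff, which differs from the original strict > comparison only for rows falling exactly on that timestamp.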
In my testing (around 10M records), the normal comparison takes around 3 s, while this approach cuts it down to around 0.25 s.