A few things you could consider:
- denormalisation. Reduce the number of joins your queries need by denormalising your data structure (first sketch below the list)
- partitioning. Can you partition the data in your large tables? A big table can perform better when split into a number of smaller partitions. Enterprise Edition from SQL 2005 onwards has good support for partitioning, see here. I would consider this once you get into the tens or hundreds of millions of rows (second sketch below)
- index management/statistics. Are all your indexes defragmented? Are statistics up to date? (maintenance sketch below)
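
On denormalisation, a minimal sketch: the table and column names (dbo.Orders, dbo.Customers, CustomerName) are hypothetical, assuming an orders/customers schema where the common read path only needs the customer's name.

```sql
-- Hypothetical schema: copy CustomerName into Orders so the common
-- read path no longer has to join to Customers.
ALTER TABLE dbo.Orders ADD CustomerName nvarchar(100) NULL;

UPDATE o
SET    o.CustomerName = c.CustomerName
FROM   dbo.Orders o
JOIN   dbo.Customers c ON c.CustomerID = o.CustomerID;

-- Reads become single-table seeks; the trade-off is keeping the copy
-- in sync (trigger, application code, or a batch job).
```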
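On partitioning, a minimal sketch of a range-partitioned table (Enterprise Edition, SQL 2005 onwards). The names (pfOrdersByYear, psOrdersByYear, dbo.Orders, OrderDate) and the yearly boundary values are assumptions for illustration only.

```sql
-- Partition function: one partition per year (RANGE RIGHT puts each
-- boundary value into the partition on its right).
CREATE PARTITION FUNCTION pfOrdersByYear (datetime)
AS RANGE RIGHT FOR VALUES ('2019-01-01', '2020-01-01', '2021-01-01');

-- Partition scheme: map every partition to a filegroup (all to PRIMARY
-- here; in practice you might spread them across filegroups).
CREATE PARTITION SCHEME psOrdersByYear
AS PARTITION pfOrdersByYear ALL TO ([PRIMARY]);

-- Create the table on the scheme, partitioned by OrderDate. Note the
-- partitioning column has to be part of the clustered key.
CREATE TABLE dbo.Orders
(
    OrderID   int      NOT NULL,
    OrderDate datetime NOT NULL,
    Amount    money    NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY (OrderID, OrderDate)
) ON psOrdersByYear (OrderDate);
```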
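On index/statistics maintenance, a minimal sketch of checking fragmentation and then rebuilding/updating stats; dbo.Orders is again a placeholder table name, and the ~30% rebuild threshold is the commonly quoted rule of thumb, not a hard rule.

```sql
-- Check fragmentation for all indexes on the table.
SELECT i.name, ps.avg_fragmentation_in_percent, ps.page_count
FROM   sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Orders'),
                                      NULL, NULL, 'LIMITED') ps
JOIN   sys.indexes i ON i.object_id = ps.object_id
                    AND i.index_id  = ps.index_id;

-- Rule of thumb: REORGANIZE for moderate fragmentation, REBUILD above ~30%.
ALTER INDEX ALL ON dbo.Orders REORGANIZE;
-- ALTER INDEX ALL ON dbo.Orders REBUILD;

-- Refresh statistics so the optimiser has current row counts/distributions.
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;
```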