As many others have already stated, profiling is your first step.
I would add that focusing on data structures and algorithms as a first step is generally more profitable than diving straight into micro-optimizations.
For one thing, the compiler will normally perform a lot of the classic optimization tricks for you anyway (and often better than you can). This is especially true in more modern languages such as C# than in older, less constrained languages such as C, because the compiler has more knowledge of the program's structure. Worse still, obscuring your code by hand-"optimizing" it can actually make it harder for the compiler to apply its own, more effective, optimizations.
Mostly, though, there is just far more scope for improvement when you start reducing the big-O complexity of your operations. For example, searching a linked list is O(n), meaning the time taken to search it grows at the same rate as the amount of data stored in it. Searching a hashtable is only O(1), so you can add more data without increasing the time to search it (there are other factors once we leave the theoretical world, so this isn't strictly true, but it is true enough most of the time).
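To make that concrete, here is a small sketch in Java (the same idea applies in C# with `LinkedList<T>` and `HashSet<T>`), timing a worst-case lookup in a linked list against the same lookup in a hash set. The class name and element counts are just illustrative:

```java
import java.util.HashSet;
import java.util.LinkedList;
import java.util.Set;

public class LookupDemo {
    public static void main(String[] args) {
        int n = 100_000;
        LinkedList<Integer> list = new LinkedList<>();
        Set<Integer> set = new HashSet<>();
        for (int i = 0; i < n; i++) {
            list.add(i);
            set.add(i);
        }

        // list.contains walks the nodes one by one: O(n) per lookup,
        // and n - 1 is the worst case (it lives at the tail).
        long t0 = System.nanoTime();
        boolean inList = true;
        for (int j = 0; j < 100; j++) {
            inList &= list.contains(n - 1);
        }
        long listNs = System.nanoTime() - t0;

        // set.contains hashes straight to a bucket: O(1) on average,
        // regardless of where the element "is".
        t0 = System.nanoTime();
        boolean inSet = true;
        for (int j = 0; j < 100; j++) {
            inSet &= set.contains(n - 1);
        }
        long setNs = System.nanoTime() - t0;

        System.out.println("linked list: " + listNs + " ns, hash set: " + setNs + " ns");
    }
}
```

On any reasonable machine the hash set lookups finish orders of magnitude faster, and the gap widens as `n` grows; no amount of clock-cycle shaving on the list traversal can close it.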
Messing with your loops so they run from high to low, so that the generated code can save a couple of clock cycles with a JumpIfZero rather than a JumpIfLessThan, is probably not going to have anywhere near the same degree of impact!
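For reference, this is the trick being dismissed, sketched in Java (method names are mine; whether the JIT actually emits a test-against-zero here is compiler- and platform-dependent):

```java
public class LoopDirection {
    // "Optimized" form: counting down lets some compilers end the loop
    // with a compare-against-zero instead of a compare-against-limit.
    static long sumDown(int[] a) {
        long s = 0;
        for (int i = a.length - 1; i >= 0; i--) {
            s += a[i];
        }
        return s;
    }

    // Plain form: counts up and compares against the array length.
    static long sumUp(int[] a) {
        long s = 0;
        for (int i = 0; i < a.length; i++) {
            s += a[i];
        }
        return s;
    }

    public static void main(String[] args) {
        int[] a = new int[1_000_000];
        for (int i = 0; i < a.length; i++) {
            a[i] = i % 7;
        }
        // Both directions produce the same sum; any cycle saved per
        // iteration is dwarfed by an algorithmic improvement.
        System.out.println(sumUp(a) == sumDown(a));
    }
}
```

Even if the count-down version saves a cycle per iteration, the total win is linear in the loop count, while a better data structure changes the growth rate itself.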