I'm just thinking out loud here, but chances are performance is I/O bound rather than CPU bound. Still, I wonder whether reading the file as text is slowing things down, since every read has to convert from the file's encoding to the string type's native UTF-16. If you know the encoding is ASCII or ASCII-compatible (UTF-8, for example), you may be able to get away with simply counting how many times a byte with the value 10 appears (which is the character code for a line feed).
What if you had the following:
long lineCount = 0;
byte[] buffer = new byte[1024 * 1024];
int bytesRead;

// Read raw bytes with a large buffer and count line feeds directly,
// skipping the text decoding that StreamReader would do.
using (var fs = new FileStream("path.txt", FileMode.Open, FileAccess.Read, FileShare.None, 1024 * 1024))
{
    do
    {
        bytesRead = fs.Read(buffer, 0, buffer.Length);
        for (int i = 0; i < bytesRead; i++)
            if (buffer[i] == '\n')
                lineCount++;
    }
    while (bytesRead > 0);
}
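One caveat: this counts newline characters, so a file whose last line has no trailing newline will come out one lower than a StreamReader.ReadLine loop would report. If you need the two to match, you could track the last byte read, something like this rough sketch (CountLines is just a name I made up):

static long CountLines(string path)
{
    using (var fs = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read, 1024 * 1024))
    {
        long lineCount = 0;
        var buffer = new byte[1024 * 1024];
        int bytesRead;
        byte lastByte = (byte)'\n'; // so an empty file reports zero lines

        while ((bytesRead = fs.Read(buffer, 0, buffer.Length)) > 0)
        {
            for (int i = 0; i < bytesRead; i++)
                if (buffer[i] == (byte)'\n')
                    lineCount++;
            lastByte = buffer[bytesRead - 1];
        }

        if (lastByte != (byte)'\n')
            lineCount++; // count the final line even without a trailing newline

        return lineCount;
    }
}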
My benchmark results for a 1.5 GB text file, each approach timed 10 times and averaged:
StreamReader approach, 4.69 seconds
File.ReadLines().Count() approach, 4.54 seconds
FileStream approach, 1.46 seconds
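In case it helps to see what was being compared, the StreamReader and File.ReadLines approaches were roughly the standard idioms:

// StreamReader approach: decodes every byte into UTF-16 strings, line by line.
long lineCount = 0;
using (var reader = new StreamReader("path.txt"))
    while (reader.ReadLine() != null)
        lineCount++;

// File.ReadLines().Count() approach: same decoding work, LINQ does the counting
// (needs using System.Linq; Count() returns an int).
int lineCount2 = File.ReadLines("path.txt").Count();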