> Using a list of 10 million random `int`s (same seed each time, average of 10 repetitions):
>
> `listCopy.Sort(Comparer<int>.Default)` …
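The timing harness behind numbers like these isn't shown, but a minimal sketch of the setup as described, assuming `Stopwatch` timing and an arbitrary fixed seed (12345 is a placeholder, not the question's value), looks like this:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

class SortBenchmark {
    static void Main() {
        const int N = 10_000_000, Reps = 10;

        // Same seed each time so every run sorts identical data
        var rng = new Random(12345);
        var source = new List<int>(N);
        for (int i = 0; i < N; i++) source.Add(rng.Next());

        long totalMs = 0;
        for (int rep = 0; rep < Reps; rep++) {
            var listCopy = new List<int>(source);  // sort a fresh copy each repetition
            var sw = Stopwatch.StartNew();
            listCopy.Sort(Comparer<int>.Default);
            sw.Stop();
            totalMs += sw.ElapsedMilliseconds;
        }
        Console.WriteLine($"Average: {totalMs / Reps} ms");
    }
}
```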
The reason is readily visible in the Reference Source, in the system/array.cs source file:
```csharp
[ReliabilityContract(Consistency.MayCorruptInstance, Cer.MayFail)]
public static void Sort<T>(T[] array, int index, int length, System.Collections.Generic.IComparer<T> comparer) {
    // Argument checking code omitted
    //...
    if (length > 1) {
        // <STRIP>
        // TrySZSort is still faster than the generic implementation.
        // The reason is Int32.CompareTo is still expensive than just using "<" or ">".
        // </STRIP>
        if ( comparer == null || comparer == Comparer<T>.Default ) {
            if(TrySZSort(array, null, index, index + length - 1)) {
                return;
            }
        }
        ArraySortHelper<T>.Default.Sort(array, index, length, comparer);
    }
}
```
The comment marked by `<STRIP>` explains it, in spite of its broken English :) The code path for the default comparer goes through `TrySZSort()`, a function that's implemented in the CLR and written in C++. You can get its source code from SSCLI20; it is implemented in clr/src/vm/comarrayhelpers.cpp. It uses a template class method named `ArrayHelpers<KIND>::QuickSort()`.
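To see which calls take that branch, here is a small illustration; the routing comments reflect the check in the snippet above (`Comparer<T>.Default` returns a cached instance, so the reference comparison succeeds), and `Comparer<int>.Create` is just one way to produce a non-default comparer:

```csharp
using System;
using System.Collections.Generic;

class FastPathDemo {
    static void Main() {
        int[] a = { 3, 1, 2 };

        // comparer == null -> TrySZSort, the native C++ fast path
        Array.Sort(a);

        // Comparer<int>.Default is a cached singleton, so the reference
        // check in Array.Sort succeeds -> TrySZSort again
        Array.Sort(a, Comparer<int>.Default);

        // Any other IComparer<int> fails the check and falls through
        // to the managed ArraySortHelper<int> implementation
        Array.Sort(a, Comparer<int>.Create((x, y) => x.CompareTo(y)));

        Console.WriteLine(string.Join(", ", a)); // 1, 2, 3
    }
}
```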
That native code gets its speed advantage from being able to use the `<` operator: a single CPU instruction instead of the 10 required by `Int32.CompareTo()`. In other words, `IComparable<>.CompareTo()` is over-specified for simple sorting.
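The gap is easy to observe in C# itself, without sorting at all, by comparing through the `IComparer<int>` interface versus using the raw operator in a tight loop. A rough, hypothetical micro-benchmark (the loop, names, and counts here are illustrative, not from the answer):

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

class CompareCost {
    static void Main() {
        var rng = new Random(1);
        var data = new int[10_000_000];
        for (int i = 0; i < data.Length; i++) data[i] = rng.Next();

        IComparer<int> cmp = Comparer<int>.Default;

        // Interface dispatch plus a three-way comparison result
        var sw = Stopwatch.StartNew();
        long less1 = 0;
        for (int i = 1; i < data.Length; i++)
            if (cmp.Compare(data[i - 1], data[i]) < 0) less1++;
        Console.WriteLine($"Compare(): {sw.ElapsedMilliseconds} ms ({less1})");

        // Raw relational operator: a single compare-and-branch
        sw.Restart();
        long less2 = 0;
        for (int i = 1; i < data.Length; i++)
            if (data[i - 1] < data[i]) less2++;
        Console.WriteLine($"<        : {sw.ElapsedMilliseconds} ms ({less2})");
    }
}
```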
The TrySZSort() path is a micro-optimization; the .NET Framework has lots and lots of them. That's the inevitable fate of code that sits at the very bottom of a dependency chain: Microsoft can never assume that its code isn't going to be speed-critical in a customer's app.