I am reading a lot of posts about object detection using feature extraction (SIFT, etc.).
After calculating descriptors on both images, to get good matches they are using
You can't generally assume that the Euclidean distance will be used by your matcher. For instance, the BFMatcher supports different norms: L1, L2, Hamming...
You can check the documentation here for more details: http://docs.opencv.org/modules/features2d/doc/common_interfaces_of_descriptor_matchers.html
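For reference, here is a minimal sketch of how you would pick the norm when constructing a BFMatcher through the Python bindings (the pairing of norms with descriptor types below is the usual convention, not something taken from your code):

```python
import cv2

# Float descriptors (SIFT, SURF) are usually compared with L1 or L2;
# binary descriptors (ORB, BRIEF, BRISK) with the Hamming norm.
bf_float = cv2.BFMatcher(cv2.NORM_L2)        # e.g. for SIFT descriptors
bf_binary = cv2.BFMatcher(cv2.NORM_HAMMING)  # e.g. for ORB descriptors
```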
Anyway, all these distance measures are symmetric, so it doesn't matter which one is used as far as your question is concerned.
And the answer is: calling knnMatch(A, B) is not the same as calling knnMatch(B, A).
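If you want to verify this in code rather than in pictures, here is a quick sketch (descA and descB are hypothetical descriptor matrices that you would have computed beforehand, e.g. with SIFT):

```python
import cv2

# descA, descB: hypothetical descriptor matrices computed beforehand
bf = cv2.BFMatcher(cv2.NORM_L2)

# Each element of matches_ab links descA[i] (query) to its nearest
# neighbour in descB (train), and vice versa for matches_ba.
matches_ab = bf.knnMatch(descA, descB, k=1)
matches_ba = bf.knnMatch(descB, descA, k=1)

# Normalise both to (index in A, index in B) pairs and compare.
pairs_ab = {(m[0].queryIdx, m[0].trainIdx) for m in matches_ab if m}
pairs_ba = {(m[0].trainIdx, m[0].queryIdx) for m in matches_ba if m}
print(pairs_ab == pairs_ba)  # generally False: nearest-neighbour is not mutual
```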
If you don't trust me, I'll try to give you a graphical and intuitive explanation. For the sake of simplicity, I assume that k == 1, so that for each queried descriptor the algorithm finds only one correspondence (much easier to plot :-)
I randomly picked a few 2D samples and created two data-sets (red & green). In the first plot, the greens are in the query data-set, meaning that for each green point, we try to find the closest red point (each arrow represents a correspondence).
In the second plot, the query & train data-sets have been swapped.
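If you want to reproduce this kind of plot yourself, here is a rough sketch of the experiment with random points (all names and values here are just illustrative):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
red = rng.random((15, 2))    # "train" data-set
green = rng.random((15, 2))  # "query" data-set

# For each green point, index of the closest red point (k == 1, L2 norm).
dists = np.linalg.norm(green[:, None, :] - red[None, :, :], axis=2)
nearest = dists.argmin(axis=1)

plt.scatter(*red.T, c="red")
plt.scatter(*green.T, c="green")
for g, r in zip(green, red[nearest]):
    plt.annotate("", xy=r, xytext=g, arrowprops=dict(arrowstyle="->"))
plt.show()  # swap the roles of red and green to get the second plot
```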
Finally, I also plotted the result of the crossCheckMatching() function, which keeps only the bi-directional matches.
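crossCheckMatching() comes from the OpenCV C++ samples, but the idea is easy to write yourself: keep a match A[i] -> B[j] only if matching in the other direction sends B[j] back to A[i]. A sketch, reusing the hypothetical descA/descB from above:

```python
import cv2

# descA, descB: hypothetical descriptor matrices computed beforehand
bf = cv2.BFMatcher(cv2.NORM_L2)
matches_ab = bf.knnMatch(descA, descB, k=1)
matches_ba = bf.knnMatch(descB, descA, k=1)

# Map each B index to the A index it prefers, then keep only mutual pairs.
back = {m[0].queryIdx: m[0].trainIdx for m in matches_ba if m}
mutual = [m[0] for m in matches_ab
          if m and back.get(m[0].trainIdx) == m[0].queryIdx]
```

Note that BFMatcher can also do this filtering for you: construct it with cv2.BFMatcher(cv2.NORM_L2, crossCheck=True) and match() will only return the consistent pairs.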

And as you can see, crossCheckMatching()'s output is much better than either single knnMatch(X,Y) / knnMatch(Y,X) result, since only the strongest correspondences have been kept.