I have two sets of images, {H} and {L}. {H} consists of 512x512 images. {L} consists of all of the images in {H}, but scaled down to between 32x32 and 128x128 and with compression artifacts.
Another, although maybe much slower, approach is to do Clustering by Compression (arxiv.org, PDF), and perhaps use the JPEG coefficients as the model data to be compared instead of compressing the uncompressed image data with some other compression method. Also see the articles related to that first paper on Google Scholar.
Clustering by compression basically means compressing a file X using the (statistical) model from file Y and comparing the resulting size to the size of compressing X with its own model.
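As a rough illustration, here is a minimal sketch of the usual distance behind that idea, the Normalized Compression Distance, using zlib as a stand-in compressor (the file names and the choice of zlib are just assumptions for the example, not part of the original papers):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance between two byte strings.

    Values near 0 suggest the inputs share a lot of structure;
    values near 1 suggest they are unrelated.
    """
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Hypothetical usage: compare the bytes of one high-res image against each
# candidate low-res image and keep the candidate with the smallest distance.
# with open("high.raw", "rb") as f:
#     high = f.read()
# with open("low.raw", "rb") as f:
#     low = f.read()
# print(ncd(high, low))
```

In practice you would decode both images to a common representation (or, as suggested above, feed the JPEG coefficient data) before computing the distance, since container-level differences can dominate otherwise.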
Here is some background about the idea of using different statistical models for compression: JPEG compression uses Huffman coding or arithmetic coding to entropy-code the quantized DCT coefficients, with separate tables for the DC and AC coefficients.
Yet another option, which may be much faster if the smaller images are not just downsampled and/or cropped versions, is to use the SIFT or SURF algorithms, as suggested by Wajih.
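In case it helps, here is a rough sketch of that feature-matching idea using OpenCV's SIFT implementation (the file paths and the 0.75 ratio-test threshold are placeholder assumptions, not values from the question):

```python
import cv2

def match_score(path_high: str, path_low: str) -> int:
    """Count 'good' SIFT matches between a high-res and a low-res image.

    A higher count suggests the low-res image was derived from the high-res one.
    """
    img_h = cv2.imread(path_high, cv2.IMREAD_GRAYSCALE)
    img_l = cv2.imread(path_low, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    _, des_h = sift.detectAndCompute(img_h, None)
    _, des_l = sift.detectAndCompute(img_l, None)

    # Brute-force matcher plus Lowe's ratio test to drop ambiguous matches.
    bf = cv2.BFMatcher()
    matches = bf.knnMatch(des_h, des_l, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    return len(good)

# Hypothetical usage: score one low-res candidate against each high-res image
# and pick the one with the most good matches.
# print(match_score("high_0001.png", "low_0001.jpg"))
```

SIFT is fairly robust to scaling and moderate compression artifacts, so it should still work on the 32x32-128x128 versions, although very small images will yield few keypoints.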