I have a list L of points (x, y) and the usual Euclidean distance measure.
Assuming the points are roughly uniformly distributed, you can do the following:
Find max_x and min_x, the maximum and minimum x coordinates, in O(n). These values help you pick a constant k (the strip width) that is "best" for the current set of points; different values of k only influence the complexity of the algorithm.
Consider a new, matrix-like data structure: a vector of vectors (or a vector of linked lists). Let's name it structure, where structure[i] is the corresponding vector/linked list. Populate it as follows: structure[i] contains the points whose x coordinate lies in the range [min_x + i*k, min_x + (i+1)*k]. This takes another O(n) time and O(n) extra space. Now sort every structure[i] by y coordinate. Having done this, it is enough to compute the distances (brute force) between the following set of points: all of structure[0], all of structure[structure.length()-1], and the extremes (the entries at the first and last index, i.e. the lowest and highest y) of every other structure[i].
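A minimal Python sketch of those steps, assuming a caller-supplied strip width k and using the first and last non-empty strips plus the y-extremes of the strips in between as the candidate set. The function name farthest_pair is my own; the original answer doesn't prescribe an interface.

```python
import math

def farthest_pair(points, k):
    """Bucketing heuristic sketched above: strip by x, sort strips by y,
    then brute-force distances over the candidate points."""
    min_x = min(p[0] for p in points)
    max_x = max(p[0] for p in points)

    # One bucket per strip of width k along the x axis: O(n) time and space.
    n_buckets = int((max_x - min_x) // k) + 1
    structure = [[] for _ in range(n_buckets)]
    for p in points:
        i = min(int((p[0] - min_x) // k), n_buckets - 1)
        structure[i].append(p)

    # Sort every strip by y coordinate.
    for strip in structure:
        strip.sort(key=lambda p: p[1])

    # Candidates: all points of the first and last non-empty strips,
    # plus the y-extremes (lowest and highest y) of every strip in between.
    non_empty = [s for s in structure if s]
    candidates = list(non_empty[0]) + list(non_empty[-1])
    for strip in non_empty[1:-1]:
        candidates.append(strip[0])
        candidates.append(strip[-1])

    # Brute-force the distances among the candidates.
    best, best_pair = 0.0, None
    for i in range(len(candidates)):
        for j in range(i + 1, len(candidates)):
            d = math.dist(candidates[i], candidates[j])
            if d > best:
                best, best_pair = d, (candidates[i], candidates[j])
    return best_pair, best
```

As one possible (assumed, not prescribed) choice of k, something like k = (max_x - min_x) / math.sqrt(len(L)) gives roughly sqrt(n) strips, which balances strip sizes against the number of candidate points.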
Basically this is almost the same as computing the convex hull and then computing the distances between the points on the hull; the difference is that picking the right k can make it either faster or slower. The worst-case complexity is O(n^2) and the best case is O(n log n), where k controls the trade-off between sorting bigger groups of points and having more candidate points to compute distances between.