Local learning of sparse image models has proved very effective for solving inverse problems in many computer vision applications. To learn such models, the data samples are often clustered using the K-means algorithm with the Euclidean distance as a dissimilarity metric. However, the Euclidean distance may not always be a good dissimilarity measure for comparing data samples lying on a manifold. In this paper, we propose two algorithms that take the underlying geometry of the data into account when determining a local subset of training samples from which a good local model can be computed for reconstructing a given input test sample. The first algorithm, called adaptive geometry-driven nearest neighbor search (AGNN), is an adaptive scheme that can be seen as an out-of-sample extension of the replicator graph clustering method for local model learning. The second, called geometry-driven overlapping clusters (GOC), is a less complex, nonadaptive alternative for training-subset selection. The proposed AGNN and GOC methods are evaluated on image super-resolution and shown to outperform spectral clustering, soft clustering, and geodesic-distance-based subset selection in most settings.