The availability of different imaging modalities requires techniques for processing and combining information from multiple images of the same phenomenon. We present a symmetry-based approach to combining information from multiple images. Fusion is performed at the data level: actual object boundaries and shape descriptors are recovered directly from raw sensor output(s). The method is applicable to an arbitrary number of images in arbitrary dimension.