In this work, the MELP (Mixed Excitation Linear Prediction) speech coding algorithm has been used for speech conversion. Speech conversion aims to modify the speech of one speaker so that the modified speech sounds as if spoken by another speaker. The speech model of MELP has been used to derive a mapping between the speech models of the two speakers, and the resulting mapping provides a context-free speech conversion. We have mainly considered the spectral properties of the speakers. Using 230 sentences from the two speakers, a mapping between the 4-stage vector quantization indexes for the line spectral frequencies (LSFs) of the two speakers has been obtained. Two different methods have been proposed to obtain a codebook for the second speaker from this mapping, and both have been applied in addition to pitch modification during synthesis. The first method replaces the LSF index of the first speaker with the index of the second speaker that appears most often during training. The second method forms a new LSF codebook for the second speaker by taking, for each index of the first speaker, the weighted average over the corresponding histogram of the second speaker's indexes.
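The two codebook-mapping methods described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the aligned LSF indexes of the two speakers are already available as parallel integer sequences, and the function and variable names (`build_index_histograms`, `tgt_codebook`, etc.) are hypothetical.

```python
import numpy as np

def build_index_histograms(src_indices, tgt_indices, codebook_size):
    # hist[i, j] counts how often source index i co-occurred with
    # target index j in the aligned training sentences.
    hist = np.zeros((codebook_size, codebook_size), dtype=np.int64)
    for i, j in zip(src_indices, tgt_indices):
        hist[i, j] += 1
    return hist

def method1_most_frequent(hist):
    # Method 1: map each source index to the target index that
    # appeared most often during training.
    return hist.argmax(axis=1)

def method2_weighted_codebook(hist, tgt_codebook):
    # Method 2: build a new codevector for each source index as the
    # histogram-weighted average of the target speaker's codevectors.
    counts = hist.sum(axis=1, keepdims=True)
    weights = hist / np.maximum(counts, 1)  # avoid division by zero
    return weights @ tgt_codebook
```

During synthesis, Method 1 simply substitutes indexes before decoding, while Method 2 decodes with the newly averaged codebook; both would be combined with pitch modification as stated above.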