Sign Language Recognition (SLR) is an important research topic because of its potential to improve communication for people who are hearing-impaired or speech-impaired. We propose a simple but robust system composed of three main steps. First, the face and hand regions are segmented using Fuzzy C-Means (FCM) clustering and thresholding; FCM is a clustering technique that employs fuzzy partitioning in an iterative algorithm. Second, feature vectors are extracted from the segmented face and hands. The features are chosen among low-level descriptors such as the bounding ellipse, the bounding box, and the center-of-mass coordinates, since these are known to be more robust to segmentation errors in low-resolution images; in total there are 23 features per hand. Third, the feature vectors are used for recognition with discrete Hidden Markov Models (HMMs). The recognition stage consists of two parts, training and classification: the Baum-Welch algorithm is used for HMM training, and during classification the likelihood of the observation sequence is computed under each HMM and the model with the highest likelihood is selected. The system is evaluated on the eNTERFACE dataset, in which 8 different American Sign Language signs are classified; in the user-independent case, the system achieves 94.19% accuracy.
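
The FCM segmentation step mentioned above can be sketched as follows. This is a minimal, generic Fuzzy C-Means implementation in numpy, not the authors' code: the fuzzifier `m = 2`, the random initialization, and the iteration count are assumed defaults, and in the actual system the input would be pixel color values rather than arbitrary points.

```python
import numpy as np

def fcm(X, n_clusters=2, m=2.0, n_iter=100, seed=0):
    """Fuzzy C-Means: alternate between membership and centroid updates.

    X is an (n_samples, n_features) array; returns the membership
    matrix U (rows sum to 1) and the cluster centroids C.
    Hypothetical defaults (m, n_iter) for illustration only.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Random initial memberships, normalized so each row sums to 1.
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        # Centroids: membership-weighted mean of the data points.
        C = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Distance from every point to every centroid.
        D = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)
        D = np.fmax(D, 1e-10)  # guard against division by zero
        # Membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1)).
        inv = D ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, C
```

For segmentation, each pixel would be assigned to the cluster (e.g. skin vs. background) with the highest membership, which is the fuzzy analogue of the thresholding step.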
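
The classification step, picking the HMM with the highest likelihood, can be illustrated with a standard scaled forward algorithm for discrete observations. This is a generic sketch, not the paper's implementation; the toy model parameters below are invented for demonstration.

```python
import numpy as np

def log_likelihood(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm and per-step scaling to avoid
    numerical underflow on long sequences.

    pi: (n_states,) initial distribution
    A:  (n_states, n_states) transition matrix
    B:  (n_states, n_symbols) emission matrix
    """
    alpha = pi * B[:, obs[0]]
    logp = 0.0
    for t in range(1, len(obs)):
        scale = alpha.sum()
        logp += np.log(scale)
        # Propagate scaled forward variables one step.
        alpha = (alpha / scale) @ A * B[:, obs[t]]
    return logp + np.log(alpha.sum())

def classify(obs, models):
    """Return the index of the HMM assigning the highest likelihood."""
    scores = [log_likelihood(obs, *m) for m in models]
    return int(np.argmax(scores))
```

In the full system, one such model would be trained per sign class with Baum-Welch, and an incoming feature sequence would be labeled with the class of the winning model.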