© 2021 IEEE. Word embeddings have become the de facto tool for representing text in natural language processing (NLP) tasks, as they capture semantic and syntactic relations, unlike predecessors such as Bag-of-Words. Although word embeddings have been employed in many studies in recent years and have proven effective across NLP tasks, they remain immature for sentiment analysis, as they encode insufficient sentiment information. General-purpose word embeddings pre-trained on large corpora with methods such as Word2Vec or GloVe achieve limited success in domain-specific NLP tasks; on the other hand, training domain-specific word embeddings from scratch requires large amounts of data and computational power. In this work, we address both shortcomings of pre-trained word embeddings to boost the performance of domain-specific sentiment analysis tasks. We propose a model that refines pre-trained word embeddings with context information and leverages sentence-level sentiment scores obtained from a lexicon-based method to further improve performance. Experimental results on two benchmark datasets show that the proposed method significantly increases sentiment classification accuracy.
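The abstract does not specify the refinement model itself, but the core idea of combining pre-trained word vectors with a lexicon-derived sentiment score can be sketched minimally. The toy embeddings and lexicon below are hypothetical stand-ins for real GloVe vectors and a sentiment lexicon (e.g. VADER or SentiWordNet); this is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

# Hypothetical stand-ins: tiny "pre-trained" embeddings and a sentiment
# lexicon mapping words to polarity scores in [-1, 1].
EMBEDDINGS = {
    "great":  np.array([0.8, 0.1, 0.3]),
    "movie":  np.array([0.2, 0.7, 0.5]),
    "boring": np.array([0.1, 0.9, 0.2]),
}
LEXICON = {"great": 0.9, "boring": -0.7}

def sentence_features(tokens):
    """Average the word vectors of a sentence, then append the
    lexicon-based sentence sentiment score as an extra feature."""
    vecs = [EMBEDDINGS[t] for t in tokens if t in EMBEDDINGS]
    avg = np.mean(vecs, axis=0) if vecs else np.zeros(3)
    score = sum(LEXICON.get(t, 0.0) for t in tokens)
    return np.append(avg, score)

feats = sentence_features(["great", "movie"])
```

The resulting feature vector (averaged embedding plus a sentiment dimension) could then be fed to any downstream classifier; the paper's actual method refines the embeddings themselves rather than merely concatenating features.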