Dokuz Eylül Üniversitesi Mühendislik Fakültesi Fen ve Mühendislik Dergisi, vol.28, no.82, pp.113-120, 2026 (TRDizin)
Natural Language Processing (NLP) has become a cornerstone of many fields, transforming how machines interpret and process human language. Among its diverse applications, next-word prediction stands out as a highly practical and impactful example of generative AI. This research focuses on the use of Long Short-Term Memory (LSTM) models, a gated class of Recurrent Neural Networks (RNNs), for predictive text generation. LSTMs excel at capturing sequential and contextual information, making them well suited to language tasks. While transformer models dominate accuracy benchmarks, this work addresses the need for efficient alternatives in resource-constrained deployment scenarios. This study presents a novel LSTM-based framework enhanced with a hybrid architecture and regularization techniques, trained on a curated dataset of 15,000 English sentences. The proposed model achieves 84.2% training accuracy, 79.6% test accuracy, and a perplexity of 2.41, outperforming conventional baselines. The methodology mitigates overfitting through dropout regularization, batch normalization, and an adaptive learning rate schedule while effectively capturing long-term contextual dependencies. This research contributes to the advancement of neural language modeling by providing a robust framework that balances computational efficiency and prediction accuracy in real-world NLP applications.
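For concreteness, the sketch below shows one way a model of the kind the abstract describes could be assembled in Keras. Everything in it is an assumption rather than the paper's actual implementation: the framework, layer sizes, vocabulary size, sequence length, and the reading of "hybrid architecture" as a stacked LSTM core with a dense head are all illustrative; only the use of dropout, batch normalization, and an adaptive learning rate mirrors the abstract.

# Minimal sketch of a regularized LSTM next-word predictor.
# All hyperparameters below are illustrative assumptions, not values
# taken from the paper.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models, callbacks

VOCAB_SIZE = 10_000   # assumed vocabulary size
SEQ_LEN = 20          # assumed left-context window, in tokens
EMBED_DIM = 128       # assumed embedding width

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN,)),
    layers.Embedding(VOCAB_SIZE, EMBED_DIM),
    # Stacked recurrent core with dropout, one plausible reading of the
    # "hybrid architecture" plus dropout regularization in the abstract.
    layers.LSTM(256, return_sequences=True, dropout=0.2),
    layers.LSTM(256, dropout=0.2),
    layers.BatchNormalization(),   # batch normalization, as in the abstract
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(VOCAB_SIZE, activation="softmax"),  # next-word distribution
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# One common realization of an adaptive learning rate strategy:
# halve the LR whenever validation loss plateaus.
lr_schedule = callbacks.ReduceLROnPlateau(
    monitor="val_loss", factor=0.5, patience=2, min_lr=1e-5
)

# model.fit(X_train, y_train, validation_data=(X_val, y_val),
#           epochs=30, batch_size=128, callbacks=[lr_schedule])

def perplexity(mean_cross_entropy_nats: float) -> float:
    # Perplexity is exp(mean cross-entropy); a mean test loss of about
    # 0.88 nats corresponds to the 2.41 reported in the abstract.
    return float(np.exp(mean_cross_entropy_nats))

Note that the perplexity figure follows directly from the cross-entropy loss: exp(0.88) ≈ 2.41, so the reported perplexity of 2.41 implies a mean test loss of roughly 0.88 nats per token.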