IEEE Internet Computing, 2024 (SCI-Expanded)
The rise of harmful online content underscores the urgent need for AI systems that can effectively detect and filter such content and foster safer, healthier communication. This article introduces a novel approach to mitigating the toxic content generation propensities of Large Language Models (LLMs) by fine-tuning them with a programmable, stance-directed focus on core human values and the common good. We propose a streamlined keyword coding and processing pipeline that generates weakly labeled data for training AI models to avoid toxicity and champion civil discourse. We also develop a toxicity classifier and an Aspect-Based Sentiment Analysis (ABSA) model to assess and control the effectiveness of the humanized AI model. We evaluate the proposed pipeline on a contentious real-world Twitter dataset concerning U.S. race relations. Our approach curbs the toxic content generation propensity of an unrestricted LLM by a significant 85%.
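To make the keyword coding and weak-labeling step concrete, the snippet below gives a minimal Python sketch of how such a pipeline could assign provisional labels to tweets before fine-tuning. The lexicons, label names, and tie-breaking rule here are illustrative assumptions for exposition only and are not the authors' actual keyword codes or pipeline.

```python
import re
from typing import Iterable

# Illustrative lexicons (assumed): the real pipeline's keyword codes are not given in the abstract.
TOXIC_TERMS = {"hate", "slur_placeholder", "attack"}
CIVIC_TERMS = {"respect", "dialogue", "community", "fairness"}

def weak_label(text: str,
               toxic_terms: Iterable[str] = TOXIC_TERMS,
               civic_terms: Iterable[str] = CIVIC_TERMS) -> str:
    """Assign a weak label ('toxic', 'civil', or 'neutral') by simple keyword matching."""
    tokens = set(re.findall(r"[a-z']+", text.lower()))
    toxic_hits = len(tokens & set(toxic_terms))
    civic_hits = len(tokens & set(civic_terms))
    if toxic_hits > civic_hits:
        return "toxic"
    if civic_hits > toxic_hits:
        return "civil"
    return "neutral"

# Example usage: weakly label a batch of tweets to build a fine-tuning corpus.
tweets = ["We need more dialogue and respect in this debate.",
          "They deserve nothing but hate."]
labels = [weak_label(t) for t in tweets]  # ['civil', 'toxic']
```

In practice, such weak labels would then be filtered or reweighted by the toxicity classifier and ABSA model mentioned above before being used to steer the fine-tuned LLM.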