33rd IEEE Conference on Signal Processing and Communications Applications, SIU 2025, İstanbul, Türkiye, 25-28 June 2025 (Full Text Paper)
With the development of large language models, natural language tasks have seen significant improvements, including personalized recommendation. Traditional recommendation approaches are often based on collaborative filtering, which relies on the historical interactions of similar users; these methods struggle with cold-start and data-sparsity issues. Cross-domain recommendation systems try to tackle these problems by leveraging knowledge from a richer domain to improve recommendation performance. However, this is a challenging task, as it requires correlating knowledge across two different domains. Pre-trained Large Language Models (LLMs), on the other hand, can address these problems thanks to their parametric knowledge and their ability to generate rich representations of user preferences and contextual information. This article analyzes the use of pre-trained LLMs, relying on parametric knowledge and in-context learning, for Click-Through Rate (CTR) prediction.
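As a rough illustration (not taken from the paper) of how in-context learning can be applied to CTR prediction, the Python sketch below serializes a user's click history and a candidate item into a textual prompt and reads a click probability from a pre-trained causal language model. The model name ("gpt2"), the prompt template, and the helper names are illustrative assumptions; the paper's actual setup may differ.

    # Minimal sketch of zero-shot CTR prediction via in-context learning.
    # Assumptions (not from the paper): model choice, prompt wording, and
    # the use of "yes"/"no" next-token probabilities as a click score.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_NAME = "gpt2"  # placeholder; any pre-trained causal LM works in principle
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
    model.eval()

    def ctr_prompt(history: list[str], candidate: str) -> str:
        """Serialize the interaction history and a candidate item into a prompt."""
        lines = "\n".join(f"- {title}" for title in history)
        return (
            "The user has previously clicked on:\n"
            f"{lines}\n"
            f'Will the user click on "{candidate}"? Answer yes or no:'
        )

    @torch.no_grad()
    def click_probability(history: list[str], candidate: str) -> float:
        """Estimate P(click) as P('yes') / (P('yes') + P('no')) at the answer position."""
        inputs = tokenizer(ctr_prompt(history, candidate), return_tensors="pt")
        logits = model(**inputs).logits[0, -1]  # next-token distribution
        # " yes" and " no" are single tokens under the GPT-2 BPE vocabulary.
        yes_id = tokenizer.encode(" yes")[0]
        no_id = tokenizer.encode(" no")[0]
        probs = torch.softmax(logits[[yes_id, no_id]], dim=0)
        return probs[0].item()

    print(click_probability(
        ["wireless earbuds", "phone case", "USB-C charger"],
        "bluetooth speaker",
    ))

Scoring the next-token probabilities of "yes" versus "no", rather than sampling free-form text, yields a scalar in [0, 1] that can be ranked or thresholded like the output of a conventional CTR model.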