Workshop Details

By bringing together researchers and practitioners, the workshop aims to foster collaboration, knowledge exchange, and the development of novel solutions. Through this collective effort, we can advance the understanding and implementation of transparent and interpretable approaches in adaptive and personalized systems, including the integration of Large Language Models (LLMs) to enhance transparency and enable users to comprehend the internal mechanisms guiding these systems.


Adaptive and personalized systems increasingly mediate everyday digital experiences, and recent advances in LLMs, NLP, and Generative AI have amplified their reach, from intelligent user interfaces and conversational agents to AR/immersive interactions and autonomous assistants. These methods now support applications in health and well-being, behavior change and persuasion, e-learning and educational games, and group modeling for collaboration and team formation, all of which require increasingly rich and dynamic user models.

At the same time, modern pipelines based on data mining, knowledge graphs/linked data, semantic representations, and affective computing raise urgent questions about transparency, privacy, fairness, accountability, and user understanding, reinforced by regulatory expectations such as the GDPR's "right to explanation". Yet research often optimizes personalization performance without comparable attention to interpretability and human comprehension.

This workshop provides a forum for theoretical, methodological, and empirical work that bridges effectiveness and explainability, with emphasis on robust human-centered evaluation and reproducible practices, including benchmarks, datasets, and shared challenges that advance trustworthy personalization in an era of increasingly autonomous AI.



Topics of interest include, but are not limited to:


  • Transparent and Explainable Personalization Strategies:
    • Scrutable User Models
    • Transparent User Profiling and Personal Data Extraction
    • Explainable Personalization and Adaptation Methodologies
    • Novel strategies (e.g., conversational recommender systems) for building transparent algorithms
    • Transparent Personalization and Adaptation to Groups of users

  • Designing Explanation Algorithms:
    • Explanation algorithms based on item description and item properties
    • Explanation algorithms based on user-generated content (e.g., reviews)
    • Explanation algorithms based on collaborative information
    • Building explanation algorithms for opaque personalization techniques (e.g., LLMs, neural networks, matrix factorization, generative AI)
    • Explanation algorithms based on methods to build group models

  • Designing Transparent and Explainable User Interfaces:
    • Transparent User Interfaces
    • Designing Transparent Interaction Methodologies
    • Novel paradigms (e.g., chatbots) for building transparent models

  • Evaluating Transparency and Explainability:
    • Evaluating Transparency in interaction or personalization
    • Evaluating Explainability of the algorithms
    • Designing User Studies for evaluating transparency and explainability
    • Novel metrics and experimental protocols

  • Open Issues in Transparent and Explainable User Models and Personalized Systems:
    • Ethical issues (fairness and bias) in User/Group Models and Personalized Systems
    • Privacy management of Personal and Social data