Motivation

Trustworthy personalization in the era of agentic AI

Personalization is everywhere. Understanding it isn’t.

Adaptive systems increasingly shape how people learn, decide, and interact—now accelerated by LLM-powered copilots, conversational recommenders, and multimodal interfaces. ExUM 2026 focuses on making these systems more transparent, explainable, and accountable—without losing what makes personalization effective.

Why explainability now

Modern personalization stacks combine behavioral traces, third-party signals, deep learning, and, increasingly, LLM-based components. When users cannot see what data was used, what assumptions were inferred, or why a recommendation or action occurred, trust erodes; transparency becomes a prerequisite for trust.

Better decisions, not just clicks

Explanations should help people understand trade-offs, uncertainty, and alternatives—especially in high-impact domains.

Accountable pipelines

As architectures grow more complex (hybrid models, LLM modules), “black box” behavior becomes harder to audit and contest.

Human-centered evaluation

We need reproducible, user-facing evidence: understanding, perceived fairness, and meaningful control—not only offline metrics.

What ExUM 2026 focuses on

ExUM is a forum for theoretical, methodological, and empirical work that bridges personalization effectiveness with explainability. A key goal is comparable, robust evaluation and reusable resources (e.g., benchmarks, datasets, and shared challenges).

Core research questions

  • What should explanations contain (data provenance, inferred assumptions, uncertainty, counterfactuals)?
  • When should systems explain (on demand, proactive, at critical decision points)?
  • How can interfaces support contestability and user agency without overwhelming users?
  • How do we evaluate explanations so results are reproducible and comparable?

Societal risks & responsibilities

  • Privacy and data stewardship in user profiling and long-term memory.
  • Fairness and bias in personalization and group models.
  • Manipulation risks (over-persuasion, dark patterns, echo chambers).
  • Accountability expectations and “right to explanation” pressures.

Topics snapshot

The workshop welcomes contributions spanning transparent personalization strategies, explanation methods, explainable interfaces, and rigorous evaluation—plus open issues such as privacy and fairness.

Transparent personalization

  • Scrutable user models and transparent profiling.
  • Explainable adaptation methodologies and user-steerable preference elicitation.
  • Transparent strategies for groups (collaboration, team formation, group decisions).
  • Conversational recommenders and LLM-based assistants with transparent behavior.

Explanation & evaluation

  • Explanations based on item properties, user-generated content, and collaborative signals.
  • Explanations for opaque techniques (neural models, matrix factorization, generative AI, LLMs).
  • Human-centered studies, metrics, and protocols for transparency and explainability.
  • Benchmarks and reproducibility practices to enable meaningful comparison.

Get involved

If you work on explainable user modeling, trustworthy personalization, or evaluation of explanations, ExUM 2026 is a place to share methods, artifacts, and evidence—and to shape community practice as systems become more autonomous.