ReactDiff: Fundamental Multiple Appropriate Facial Reaction Diffusion Model

ACM MM 2025
ReactDiff generates appropriate facial reactions online, with natural expressions and a human-like pace of expression change, in response to speaker behaviour.

Abstract

The automatic generation of diverse and human-like facial reactions in dyadic dialogue remains a critical challenge for human-computer interaction systems. Existing methods fail to model the stochasticity and dynamics inherent in real human reactions. To address this, we propose ReactDiff, a novel temporal diffusion framework for generating diverse facial reactions that are appropriate for a given dialogue context. Our key insight is that plausible human reactions exhibit smoothness and coherence over time and conform to constraints imposed by human facial anatomy. To achieve this, ReactDiff incorporates two vital constraints into the diffusion process: i) facial motion velocity priors and ii) facial action unit dependencies. These constraints guide the model toward realistic human reaction manifolds, avoiding visually unrealistic jitter, unstable transitions, unnatural expressions, and other artifacts. Extensive experiments on the REACT2024 dataset demonstrate that our approach not only achieves state-of-the-art reaction quality but also excels in diversity and reaction appropriateness. Our code will be made publicly available.
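
To illustrate how constraints of this kind can be attached to a diffusion objective, the following is a minimal PyTorch sketch. It is not the ReactDiff implementation: the function names, the simple statistic-matching surrogates for the velocity prior and the action-unit (AU) dependencies, the linear noise schedule, and the assumed model signature `model(x_t, t, cond)` are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def velocity_prior_loss(pred_seq, real_seq):
    """Penalize frame-to-frame motion statistics that deviate from real reactions.
    pred_seq, real_seq: (batch, time, feat) facial coefficient sequences."""
    pred_vel = pred_seq[:, 1:] - pred_seq[:, :-1]
    real_vel = real_seq[:, 1:] - real_seq[:, :-1]
    # Match first- and second-order velocity statistics as a simple surrogate
    # for a learned velocity prior.
    return (F.l1_loss(pred_vel.mean(dim=1), real_vel.mean(dim=1))
            + F.l1_loss(pred_vel.std(dim=1), real_vel.std(dim=1)))


def au_dependency_loss(pred_aus, au_corr):
    """Encourage predicted AU activations to respect known co-occurrence structure.
    pred_aus: (batch, time, num_aus) AU activations in [0, 1].
    au_corr:  (num_aus, num_aus) target AU correlation matrix."""
    flat = pred_aus.reshape(-1, pred_aus.shape[-1])
    flat = flat - flat.mean(dim=0, keepdim=True)
    cov = flat.T @ flat / (flat.shape[0] - 1)
    std = cov.diagonal().clamp_min(1e-6).sqrt()
    corr = cov / (std[:, None] * std[None, :])
    return F.mse_loss(corr, au_corr)


def diffusion_training_step(model, x0, cond, au_corr, num_steps=1000,
                            lambda_vel=0.1, lambda_au=0.1):
    """One denoising-diffusion training step with the two auxiliary constraints
    added to the standard noise-prediction objective."""
    b = x0.shape[0]
    t = torch.randint(0, num_steps, (b,), device=x0.device)
    noise = torch.randn_like(x0)
    # Simple linear alpha-bar schedule, used here only for illustration.
    alpha_bar = (1.0 - t.float() / num_steps).view(b, 1, 1).clamp_min(1e-4)
    x_t = alpha_bar.sqrt() * x0 + (1 - alpha_bar).sqrt() * noise

    pred_noise = model(x_t, t, cond)  # assumed signature: noise prediction
    # Recover an estimate of the clean sequence to apply the constraints on.
    x0_hat = (x_t - (1 - alpha_bar).sqrt() * pred_noise) / alpha_bar.sqrt()

    loss = F.mse_loss(pred_noise, noise)
    loss = loss + lambda_vel * velocity_prior_loss(x0_hat, x0)
    loss = loss + lambda_au * au_dependency_loss(torch.sigmoid(x0_hat), au_corr)
    return loss
```

In practice, the auxiliary weights `lambda_vel` and `lambda_au` would be tuned on a validation split, and the AU correlation matrix `au_corr` estimated from the training data rather than fixed by hand.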

Results

Generated Facial Reactions to Different Speaker Behaviours

Multiple Facial Reaction Samples to the Given Speaker Behaviour

ReactDiff generates different reactions (reactions 1, 2, and 3) online in response to the given speaker behaviour.

Comparison with the State of the Art