Computational Communication Science | Automated Coding of Televised Leader Displays: Detecting Nonverbal Political Behavior With Computer Vision and Deep Learning

Jungseock Joo, Erik P. Bucy, Claudia Seidel


For decades, nonverbal communication scholars have employed manual coding as the primary research methodology for systematic content analysis of nonverbal behaviors such as facial expressions and gestures. Manual coding of visual data, however, is expensive and time consuming and therefore not suitable for studies relying on large-scale data. This article introduces a novel computational methodology that can automatically analyze the visual content of human communication. Based on computer vision techniques, the method allows automatic detection and classification of diverse facial expressions and communicative gestures that have been manually coded in traditional work. To demonstrate the new method, we develop a computational pipeline to classify fine-grained facial expressions and physical gestures and apply our technique to the first 2016 U.S. presidential debate between Donald Trump and Hillary Clinton. The results confirm that computational methods can replicate human coding with a high degree of accuracy for bodily movements and facial expressions, as well as nonverbal tics and signature displays unique to individual candidates. Automated coding should soon facilitate rapid progress in quantitative visual communication research by dramatically scaling up existing manual studies.
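As a minimal illustration of the validation step the abstract describes, automated per-frame codes can be compared against manual codes to measure agreement. This sketch is not the authors' deep-learning pipeline; the labels and the `frame_agreement` helper are hypothetical stand-ins for classifier output and human annotations.

```python
# Illustrative sketch (not the authors' actual pipeline): compare
# hypothetical automated per-frame expression codes against manual
# codes and report their agreement rate.

def frame_agreement(auto_codes, manual_codes):
    """Fraction of frames where automated and manual codes match."""
    assert len(auto_codes) == len(manual_codes)
    matches = sum(a == m for a, m in zip(auto_codes, manual_codes))
    return matches / len(auto_codes)

# Dummy data: one expression label per sampled video frame.
auto = ["smile", "neutral", "frown", "smile", "neutral"]
manual = ["smile", "neutral", "frown", "neutral", "neutral"]

agreement = frame_agreement(auto, manual)
print(round(agreement, 2))  # → 0.8
```

In a full pipeline, the automated labels would come from a trained classifier applied to detected faces, and agreement would be computed per display category rather than overall.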



computational communication science, computer vision, deep learning, nonverbal behavior, facial expressions, gestures, 2016 presidential debates
