Publication Date:
Author(s): Anthony B. Ciston, Carina Forster, Timothy R. Brick, Simone Kühn, Julius Verrel, Elisa Filevich
Publisher: Elsevier
Publication Type: Academic Journal Article
Journal Title: Cognition
Volume: 225
Abstract:

As humans, we communicate important information through fine nuances in our facial expressions, but because conscious motor representations are noisy, we might not be able to report these fine movements. Here we measured the precision of the explicit metacognitive information that young adults have about their own facial expressions. Participants imitated pictures of themselves making facial expressions and triggered a camera to take a picture of them while doing so. They then rated how well they thought they had imitated each expression. We defined metacognitive access to facial expressions as the relationship between objective performance (how well the two pictures matched) and subjective performance ratings. As a group, participants' metacognitive confidence ratings were only about four times less precise than their own similarity ratings. In turn, machine learning analyses revealed that participants' performance ratings were based on idiosyncratic subsets of features. We conclude that metacognitive access to one's own facial expressions is only partial.
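
The abstract defines metacognitive access operationally as the relationship between objective performance (how well the two pictures matched) and subjective performance ratings. A minimal sketch of one way such a per-participant relationship could be quantified follows; the choice of Spearman's rank correlation and the toy trial values are illustrative assumptions, not the paper's actual analysis pipeline.

# Hypothetical sketch: quantify metacognitive access for one participant as the
# rank correlation between objective performance (picture-match scores) and
# subjective performance ratings. Metric choice and data values are assumptions
# for illustration; they are not taken from the paper.
from scipy.stats import spearmanr

def metacognitive_access(objective_scores, subjective_ratings):
    """Spearman's rho between objective match scores and self-ratings."""
    rho, _ = spearmanr(objective_scores, subjective_ratings)
    return rho

# Toy per-trial data: objective match scores and subjective ratings (0-10 scale).
objective = [7.2, 4.1, 8.5, 5.0, 6.3, 3.8]
subjective = [8, 5, 7, 6, 6, 4]
print(f"Metacognitive access (rho) = {metacognitive_access(objective, subjective):.2f}")

Under this sketch, a rho near 1 would mean a participant's confidence closely tracks how well their imitation actually matched the target picture, whereas a rho near 0 would indicate little explicit metacognitive access.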