Apparent personality prediction from speech using expert features and wav2vec 2.0

Authors: R. Barchi, L. Pepino, L. Gauder, L. Estienne, M. Meza, P. Riera, L. Ferrer.

Abstract:
Studies have shown that virtual assistants that adapt to the personality of the speaker are more effective and improve the overall user experience. For this reason, automatic detection of a user’s personality has recently become a task of interest. In this work, we explore the task of detecting a person’s personality from their speech. To this end, we use the “First Impressions” dataset, which consists of videos annotated with apparent personality labels. We train various systems using different modeling techniques and features extracted from the speech recordings, including expert features commonly used for emotion recognition and self-supervised representations given by wav2vec 2.0. We analyze the importance of each of these feature sets, and of relevant subsets, for predicting the “Big-Five” personality traits.
Our results show that the wav2vec 2.0 features are the most useful, and that combining them with expert features can yield additional gains.
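The combination of feature sets described above can be illustrated with a minimal early-fusion sketch: concatenate expert features and wav2vec 2.0 embeddings per utterance, then fit a regressor that outputs one score per Big-Five trait. This is not the paper's implementation; the feature dimensions, the random data, and the choice of ridge regression are illustrative assumptions (88 dimensions is typical of eGeMAPS-style expert features, 768 of a wav2vec 2.0 base model after mean pooling).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-utterance features; shapes are illustrative, not the paper's.
n_utts = 200
expert = rng.normal(size=(n_utts, 88))    # e.g. eGeMAPS-style expert features
w2v = rng.normal(size=(n_utts, 768))      # e.g. mean-pooled wav2vec 2.0 embeddings
traits = rng.uniform(size=(n_utts, 5))    # apparent Big-Five scores in [0, 1]

# Early fusion: concatenate the two feature sets along the feature axis.
X = np.concatenate([expert, w2v], axis=1)

# Ridge regression in closed form: W = (X^T X + lam*I)^-1 X^T Y
lam = 1.0
XtX = X.T @ X + lam * np.eye(X.shape[1])
W = np.linalg.solve(XtX, X.T @ traits)

pred = X @ W
print(pred.shape)  # one predicted score per Big-Five trait
```

In practice the paper compares such fused systems against each feature set alone, which is what supports the conclusion that wav2vec 2.0 features dominate while fusion adds smaller gains.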

More information:
https://www.isca-archive.org/smm_2023/barchi23_smm.html

Published: 7 February 2024 | Papers