Direct Classification of Emotional Intensity

Automatic recognition of emotional intensity can be useful for diagnosing mental illnesses or detecting drowsy drivers. Current models rely on detecting specific facial muscle activations, known as action units, to infer an emotion and its intensity from a single image.

A recent study on arXiv.org proposes a method that evaluates the intensity of happiness directly from video input. Working with video lets the model exploit changes in facial color and movement together with muscle activations.


To make the model generalize to new faces more easily, it uses adaptive learning: each new subject provides a neutral face and a smile. The model is adapted on the neutral faces, and the smile samples are used for inference. Adaptive learning improves the model's accuracy by approximately 15 percent. In the future, the model could be extended to other emotions, such as sadness or anger, which are harder to label.
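
A minimal sketch of what such a per-subject adaptation step could look like, assuming a PyTorch regression model that maps a video clip to a single intensity score. The function name, the fine-tuning loop, the learning rate, and the choice to anchor the neutral clip at intensity 0 are illustrative assumptions, not the paper's exact procedure:

```python
import torch
import torch.nn as nn

def adapt_and_score(model: nn.Module,
                    neutral_clip: torch.Tensor,  # (1, 3, T, H, W) video tensor
                    smile_clip: torch.Tensor,    # (1, 3, T, H, W) video tensor
                    steps: int = 10,
                    lr: float = 1e-4) -> torch.Tensor:
    """Adapt a pre-trained intensity model to one new subject.

    Illustrative loop: the subject's neutral clip is anchored at
    intensity 0, then the adapted model scores the smile clip.
    """
    model.train()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    target = torch.zeros(1, 1)  # assumed anchor: neutral face = intensity 0
    for _ in range(steps):
        optimizer.zero_grad()
        loss = loss_fn(model(neutral_clip), target)
        loss.backward()
        optimizer.step()
    model.eval()
    with torch.no_grad():
        return model(smile_clip)  # intensity prediction for the smile
```

Given a model pre-trained on many subjects, this kind of light-touch fine-tuning is one plausible way to specialize it to an unseen face without retraining from scratch.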

In this paper, we present a model that can directly predict an emotion intensity score from video inputs, instead of deriving it from action units. Using a 3D DNN that incorporates dynamic emotion information, we train a model on videos of different people smiling that outputs an intensity score from 0 to 10. Each video is labeled frame-wise using a normalized action-unit-based intensity score. Our model then employs an adaptive learning technique to improve performance when dealing with new subjects. Compared to other models, our model generalizes better across different people and provides a new framework for directly classifying emotional intensity.
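
To make the abstract more concrete, here is a minimal sketch of the kind of 3D convolutional regressor it describes, assuming PyTorch. The class name IntensityNet3D, the layer sizes, and the 16-frame 64x64 clip shape are illustrative assumptions rather than the paper's architecture:

```python
import torch
import torch.nn as nn

class IntensityNet3D(nn.Module):
    """Illustrative 3D CNN that maps a short video clip to a 0-10 score."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1),  # convolve over time and space
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                     # global spatio-temporal pooling
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 3, frames, height, width)
        h = self.features(x).flatten(1)
        return 10.0 * torch.sigmoid(self.head(h))  # squash into the 0-10 range

# Example: one 16-frame RGB clip at 64x64 resolution
clip = torch.randn(1, 3, 16, 64, 64)
score = IntensityNet3D()(clip)  # tensor of shape (1, 1), value in the 0-10 range
```

Because the convolutions span the time dimension as well as space, a network of this shape can, in principle, capture the dynamic cues (motion and color change across frames) that a single-image action-unit pipeline misses.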

Link: https://arxiv.org/abs/2011.07460
