Automated Evaluation of Taste Preference using Dynamics of Facial Expressions

Authors

  • K. Venkateswara Rao, Assistant Professor, Department of CSE-AI & ML, Sri Vasavi Institute of Engineering & Technology, Nandamuru, Andhra Pradesh, India
  • Veeramallu Santhi Prasanna, UG Student, Department of CSE-AI & ML, Sri Vasavi Institute of Engineering & Technology, Nandamuru, Andhra Pradesh, India
  • Konakalla Dundi Ganapathi, UG Student, Department of CSE-AI & ML, Sri Vasavi Institute of Engineering & Technology, Nandamuru, Andhra Pradesh, India
  • N. Naga Kanaka Mahalakshmi, UG Student, Department of CSE-AI & ML, Sri Vasavi Institute of Engineering & Technology, Nandamuru, Andhra Pradesh, India
  • Peddiboyina Rama Kondala Rao, UG Student, Department of CSE-AI & ML, Sri Vasavi Institute of Engineering & Technology, Nandamuru, Andhra Pradesh, India

Keywords:

Taste Liking, Taste Appreciation, Facial Expression Dynamics, Spontaneous Expression, Taste-Induced Expression

Abstract

The level of taste liking is an important measure for a number of applications, such as the prediction of long-term consumer acceptance of food and beverage products. Because facial expressions are spontaneous, instantaneous, and heterogeneous sources of information, this paper aims to automatically estimate the level of taste liking from facial expression videos. Instead of using handcrafted features, the proposed approach deep-learns regional expression dynamics and encodes them into Fisher vectors for video representation. The regional Fisher vectors are then concatenated and classified by linear SVM classifiers. The aim is to reveal the hidden patterns of taste-elicited responses by exploiting expression dynamics such as the speed and acceleration of facial movements. To this end, we have collected the first large-scale beverage tasting database in the literature. The database contains 2970 videos of taste-induced facial expressions collected from 495 subjects. Our large-scale experiments on this database show that the proposed approach achieves an accuracy of 70.37% in distinguishing between three levels of taste liking. Furthermore, we assess human performance by recruiting 45 participants and show that humans are significantly less reliable than the proposed method at estimating taste appreciation from facial expressions.
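The abstract describes a pipeline of deep-learned regional expression dynamics, per-region Fisher-vector encoding, concatenation, and linear SVM classification. The sketch below illustrates only the encoding and classification stages: the regional descriptors are random placeholders standing in for the deep-learned dynamics, and the function names, region count, GMM size, and other parameters are illustrative assumptions rather than values from the paper.

```python
# Minimal sketch of the described pipeline: per-region dynamic descriptors ->
# Fisher-vector encoding -> concatenation -> linear SVM. All shapes and
# hyperparameters here are toy values, not those used in the paper.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import LinearSVC


def fisher_vector(descriptors, gmm):
    """Encode (T, D) frame descriptors as a Fisher vector using gradients
    with respect to the GMM means and (diagonal) variances."""
    T, _ = descriptors.shape
    gamma = gmm.predict_proba(descriptors)            # (T, K) soft assignments
    weights = gmm.weights_                            # (K,)
    means = gmm.means_                                # (K, D)
    sigmas = np.sqrt(gmm.covariances_)                # (K, D) diagonal std devs

    diff = (descriptors[:, None, :] - means[None]) / sigmas[None]          # (T, K, D)
    g_mu = (gamma[..., None] * diff).sum(0) / (T * np.sqrt(weights)[:, None])
    g_sigma = (gamma[..., None] * (diff ** 2 - 1)).sum(0) / (T * np.sqrt(2 * weights)[:, None])

    fv = np.hstack([g_mu.ravel(), g_sigma.ravel()])
    fv = np.sign(fv) * np.sqrt(np.abs(fv))            # power normalisation
    return fv / (np.linalg.norm(fv) + 1e-12)          # L2 normalisation


def video_representation(region_descriptors, region_gmms):
    """Concatenate one Fisher vector per facial region (hypothetical regions)."""
    return np.hstack([fisher_vector(d, g) for d, g in zip(region_descriptors, region_gmms)])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_videos, n_regions, frames, dim = 60, 4, 30, 16  # toy sizes

    # Placeholder for deep-learned regional dynamics (e.g. speed/acceleration features).
    videos = [[rng.normal(size=(frames, dim)) for _ in range(n_regions)]
              for _ in range(n_videos)]
    labels = rng.integers(0, 3, size=n_videos)        # three taste-liking levels

    # One diagonal-covariance GMM per region, fitted on pooled descriptors.
    gmms = [GaussianMixture(n_components=8, covariance_type="diag", random_state=0)
            .fit(np.vstack([v[r] for v in videos])) for r in range(n_regions)]

    X = np.stack([video_representation(v, gmms) for v in videos])
    clf = LinearSVC(C=1.0).fit(X, labels)             # linear SVM classifier
    print("training accuracy:", clf.score(X, labels))
```

In this sketch the GMMs would normally be fitted on training descriptors only, and the deep-learned dynamics (speed and acceleration of facial landmarks per region) would replace the random placeholders.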




Published

26-04-2024

Issue

Vol. 11 No. 2 (2024)

Section

Research Articles

How to Cite

Automated Evaluation of Taste Preference using Dynamics of Facial Expressions. (2024). International Journal of Scientific Research in Science and Technology, 11(2), 894-902. https://ijsrst.com/index.php/home/article/view/IJSRST24112151
