Investigating sentiments in Brazilian and German Blogs
DOI: https://doi.org/10.5753/jbcs.2022.2214

Keywords: Cross-media sentiment analysis, Corpus, Emotions in images and texts, Face detection

Abstract
Social interactions have changed in recent years. People increasingly post their thoughts, opinions, and sentiments on social media platforms through images and videos, providing a rich source of data about the populations of different countries and communities. Given the volume of data on the internet, manual analysis is no longer feasible, so the process must be automated. In this work, we use two blog corpora that contain images and texts: the Cross-Media German Blog (CGB) corpus, consisting of German blog posts, and the Cross-Media Brazilian Blog (CBB) corpus, consisting of Brazilian blog posts. Both corpora have ground truth (GT) sentiment labels for images and texts, assigned according to human perception. In previous work, machine learning and lexicon-based techniques were applied to both corpora to detect the sentiment (negative, neutral, or positive) of images and texts, and the results were compared with the ground truth. In this work, we investigate a new hypothesis: that detecting faces and their emotions can improve sentiment classification accuracy on both the CBB and CGB datasets. We use two methodologies to detect polarity in faces and evaluate the results against the image GT and the multimodal GT (the complete blog post, combining text and image). Our results indicate that facial emotion can be a relevant feature for classifying blog sentiment.
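As an illustration of the kind of pipeline step the abstract describes, the sketch below maps per-face emotion labels (as produced by a facial expression recognizer) to a sentiment polarity for an image. The emotion-to-polarity mapping and the majority-vote aggregation rule are assumptions for illustration; the paper's actual methodologies may differ.

```python
from collections import Counter

# Assumed mapping from basic facial emotions to polarity classes
# (hypothetical; not necessarily the mapping used in the paper).
EMOTION_POLARITY = {
    "happiness": "positive",
    "surprise": "neutral",
    "neutral": "neutral",
    "sadness": "negative",
    "anger": "negative",
    "fear": "negative",
    "disgust": "negative",
}

def image_polarity(face_emotions):
    """Aggregate per-face emotion labels into one image-level polarity.

    Majority vote over the mapped polarities; returns 'neutral' when no
    faces are detected or when the vote is tied.
    """
    if not face_emotions:
        return "neutral"
    counts = Counter(EMOTION_POLARITY.get(e, "neutral") for e in face_emotions)
    ranked = counts.most_common()
    if len(ranked) > 1 and ranked[0][1] == ranked[1][1]:
        return "neutral"
    return ranked[0][0]
```

The image-level polarity produced this way could then be compared against the image GT, or fused with the text polarity for comparison against the multimodal GT.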
License
Copyright (c) 2023 The authors
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.