Music emotion recognition through data and digital technologies

Juan David Luján Villar
Roberto Carlos Luján

Abstract

This study explores the research field known as music emotion recognition (MER), a transdisciplinary perspective that investigates moods and musical emotions through data retrieval and various kinds of computational and analog analysis. Several questions are posed in order to lay out the field's main assumptions and recent perspectives; at the methodological level, these questions guide two experiments that illustrate the central tenets of this kind of approach and point to a range of possible practical applications.
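The article does not publish the code behind its two experiments, so what follows is only a minimal, illustrative sketch of the content-based approach the abstract describes, in the spirit of the works cited below (SVM classification over extracted audio features, as in Panda & Paiva, 2011, with class labels drawn from the quadrants of Russell's 1980 valence-arousal circumplex). It assumes Python with the librosa and scikit-learn libraries; every file name and label is hypothetical.

import numpy as np
import librosa                      # audio loading and feature extraction
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def extract_features(path):
    """Summarize one audio clip as a fixed-length feature vector."""
    y, sr = librosa.load(path, duration=30.0)                  # 30 s excerpt
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)         # timbre
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)   # brightness
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)             # rhythm
    # Collapse the time-varying features to clip-level means.
    return np.concatenate([mfcc.mean(axis=1),
                           centroid.mean(axis=1),
                           np.atleast_1d(tempo)])

# Hypothetical training clips labeled with the four circumplex quadrants
# (e.g., "happy" = positive valence / high arousal, "calm" = positive / low).
paths = ["clip_happy.wav", "clip_angry.wav", "clip_sad.wav", "clip_calm.wav"]
labels = ["happy", "angry", "sad", "calm"]

X = np.array([extract_features(p) for p in paths])
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))  # scale, then SVM
model.fit(X, labels)

print(model.predict([extract_features("new_clip.wav")]))    # e.g., ["calm"]

Systems in the literature referenced below differ mainly in the feature set they engineer (Panda, Malheiro & Paiva, 2018b) and in whether emotion is treated as discrete classes, as in this sketch, or as continuous valence-arousal regression (Yang, Lin, Su & Chen, 2008a).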

Article Details

How to Cite
Luján Villar, J. D., & Luján, R. C. (2020). Reconocimiento de emociones musicales a través de datos y tecnologías digitales. Comunicación y Hombre, (16), 59–82. https://doi.org/10.32466/eufv-cyh.2020.16.563.59-82
Section
Estudios
Author Biographies

Juan David Luján Villar, Secretaría de Educación del Distrito, Bogotá, Colombia

Juan David Luján-Villar. ORCID: https://orcid.org/0000-0001-8622-4774. Degree in Arts Education. Master's in Interdisciplinary Social Research. Teacher with the Secretaría de Educación Distrital. Member of the Literatura, Educación y Comunicación (LEC) research group, Universidad Distrital Francisco José de Caldas, Bogotá, Colombia.

Roberto Carlos Luján, doctoral student in Health, Universidad del Valle, Cali, Colombia

Roberto Carlos Luján Villar. ORCID: https://orcid.org/0000-0001-6435-4412. Sociologist. Doctoral student in Health, Universidad del Valle. Associate researcher at the Fundación para el Desarrollo de la Salud Pública (FUNDESALUD). Cali, Colombia.

References

Adams, K. (2015). The Musical Analysis of Hip-Hop. In Williams, J.A. (ed.), The Cambridge Companion to Hip-Hop (pp. 118-134). Cambridge, UK: Cambridge University Press. https://doi.org/10.1017/CCO9781139775298.011

Aljanaki, A. (2016). Emotion in Music: Representation and computational modeling (doctoral dissertation). Utrecht University, Utrecht.

Bharadwaj, S., Hegde, S., Dutt, N. & Rajan, A. (2018). Application of Nonlinear Signal Processing Technique to Analyze the Brain Correlates of Happy and Sad Music Conditions During Listening to Raga Elaboration Phases of Indian Classical Music. In Parncutt, R. & Sattmann, S. (eds.), Proceedings of ICMPC15/ESCOM10 (pp. 83-84). Graz, Austria: Centre for Systematic Musicology, University of Graz.

Calvo, R.A., D’Mello, S., Gratch, J. & Kappas, A. (eds.). (2014). The Oxford Handbook of Affective Computing. Oxford: Oxford University Press. https://doi.org/10.1093/oxfordhb/9780199942237.001.0001

Cardoso, L., Panda, R. & Paiva, R.P. (2011). MOODetector: A Prototype Software Tool for Mood-based Playlist Generation. Simpósio de Informática – INForum 2011, 124.

Chase, W. (2006). How Music Really Works! Vancouver, Canada: Roedy Black.

Cooper, M.L. & Foote, J. (2002). Automatic music summarization via similarity analysis. In Fingerhut, M. (ed.), Proceedings of the International Conference on Music Information Retrieval (ISMIR) (pp. 81-85). Paris: Ircam - Centre Pompidou.

Deng, J.J., Leung, C.H., Milani, A. & Chen, L. (2015). Emotional states associated with music: Classification, prediction of changes, and consideration in recommendation. ACM Transactions on Interactive Intelligent Systems (TiiS), 5(1), 1-36. https://doi.org/10.1145/2723575

Gabrielsson, A. & Lindström, E. (2001). The influence of musical structure on emotional expression. In Juslin, P.N. & Sloboda, J.A. (eds.), Music and Emotion: Theory and Research (pp. 223-249). Oxford: Oxford University Press.

Gingras, B., Marin, M.M. & Fitch, W.T. (2014). Beyond intensity: Spectral features effectively predict music-induced subjective arousal. The Quarterly Journal of Experimental Psychology, 67(7), 1428-1446. https://doi.org/10.1080/17470218.2013.863954

Grekow, J. (2018). From Content-based Music Emotion Recognition to Emotion Maps of Musical Pieces. Cham, Switzerland: Springer. https://doi.org/10.1007/978-3-319-70609-2

Hevner, K. (1936). Experimental studies of the elements of expression in music. The American Journal of Psychology, 48(2), 246-268. https://doi.org/10.2307/1415746

Hu, X. & Downie, J.S. (2007). Exploring mood metadata: Relationships with genre, artist and usage metadata. In Proceedings of the 8th International Conference on Music Information Retrieval (ISMIR 2007) (pp. 67-72). Vienna: ISMIR.

Hu, X., Downie, J.S., Laurier, C., Bay, M. & Ehmann, A.F. (2008). The 2007 MIREX audio mood classification task: Lessons learned. In Bello, J.P., Chew, E. & Turnbull, D. (eds.), Proceedings of the International Symposium on Music Information Retrieval (ISMIR) 2008 (pp. 462-467). Philadelphia: Drexel University.

Juslin, P.N. (2019). Musical Emotions Explained: Unlocking the Secrets of Musical Affect. Oxford: Oxford University Press.

Juslin, P.N. & Laukka, P. (2004). Expression, Perception, and Induction of Musical Emotions: A Review and a Questionnaire Study of Everyday Listening. Journal of New Music Research, 33(3), 217-238. https://doi.org/10.1080/0929821042000317813

Kim, Y.E., Schmidt, E.M., Migneco, R., Morton, B.G., Richardson, P., Scott, J., Speck, J.A. & Turnbull, D. (2010). Music emotion recognition: A state of the art review. In Downie, J.S. & Veltkamp, R.C. (eds.), Proceedings of the 11th International Society for Music Information Retrieval Conference (ISMIR 2010) (pp. 255-266). Utrecht: ISMIR.

Laurier, C. (2011). Automatic Classification of Musical Mood by Content-Based Analysis (doctoral dissertation). Universitat Pompeu Fabra, Barcelona.

Laurier, C., Grivolla, J. & Herrera, P. (2008). Multimodal music mood classification using audio and lyrics. In Wani, M.A. (ed.), Proceedings of the International Conference on Machine Learning and Applications, San Diego, CA (pp. 688-693). Piscataway, NJ: IEEE. https://doi.org/10.1109/ICMLA.2008.96

MacDorman, K.F., Ough, S. & Ho, C.-C. (2007). Automatic emotion prediction of song excerpts: Index construction, algorithm design, and empirical comparison. Journal of New Music Research, 36(4), 281-299. https://doi.org/10.1080/09298210801927846

Mehrabian, A. (1996). Pleasure-arousal-dominance: A general framework for describing and measuring individual differences in temperament. Current Psychology, 14(4), 261-292. https://doi.org/10.1007/BF02686918

Panda, R., Malheiro, R. & Paiva, R.P. (2018a). Musical Texture and Expressivity Features for Music Emotion Recognition. In Gómez, E., Hu, X., Humphrey, E. & Benetos, E. (eds.), Proceedings of the 19th International Society for Music Information Retrieval Conference (ISMIR 2018), Paris, France (pp. 383-391). Paris: ISMIR.

Panda, R., Malheiro, R. & Paiva, R.P. (2018b). Novel audio features for music emotion recognition. IEEE Transactions on Affective Computing. https://doi.org/10.1109/TAFFC.2018.2820691

Panda, R. & Paiva, R.P. (2011). Using Support Vector Machines for Automatic Mood Tracking in Audio Music. In Proceedings of the 130th Audio Engineering Society Convention (AES 130), London, UK (pp. 579-586). New York: AES-Curran.

Picard, R.W. (2000). Affective Computing. Cambridge: MIT Press.

Russell, J.A. (1980). A circumplex model of affect. Journal of Personality and Social Psychology, 39, 1161-1178. https://doi.org/10.1037/h0077714

Schedl, M., Gómez, E. & Urbano, J. (2014). Music information retrieval: Recent developments and applications. Foundations and Trends in Information Retrieval, 8(2-3), 127-261.

Schmidt, E.M. & Kim, Y.E. (2010). Prediction of time-varying musical mood distributions from audio. In Downie, J.S. & Veltkamp, R.C. (eds.), Proceedings of the 11th International Society for Music Information Retrieval Conference (ISMIR 2010) (pp. 465-470). Utrecht: ISMIR.

Schubert, E. (1999). Measurement and time series analysis of emotion in music (PhD thesis). School of Music and Music Education, University of New South Wales, Sydney, Australia.

Schubert, E., Ferguson, S., Farrar, N., Taylor, D. & McPherson, G.E. (2013). The Six Emotion-Face Clock as a Tool for Continuously Rating Discrete Emotional Responses to Music. In Aramaki, M., Barthet, M., Kronland-Martinet, R. & Ystad, S. (eds.), From Sounds to Music and Emotions: CMMR 2012 (pp. 1-18). Berlin, Heidelberg: Springer.

Speck, J.A., Schmidt, E.M., Morton, B.G. & Kim, Y.E. (2011). A comparative study of collaborative vs. traditional musical mood annotation. In Klapuri, A. & Leider, C. (eds.), Proceedings of the 12th International Society for Music Information Retrieval Conference (ISMIR 2011), Miami, Florida (pp. 549-554). Miami: ISMIR.

Thayer, R.E. (1989). The Biopsychology of Mood and Arousal. Oxford: Oxford University Press.

Williams, J.A. (2015). Intertextuality, sampling, and copyright. In Williams, J.A. (ed.), The Cambridge Companion to Hip-Hop (pp. 206-220). Cambridge, UK: Cambridge University Press. https://doi.org/10.1017/CCO9781139775298.018

Xiao, Z., Dellandrea, E., Dou, W. & Chen, L. (2008). What is the best segment duration for music mood analysis? In 2008 International Workshop on Content-Based Multimedia Indexing (pp. 17-24). New York: IEEE.

Yang, Y.-H. & Chen, H.H. (2011). Music Emotion Recognition. Boca Raton, Florida: CRC Press. https://doi.org/10.1201/b10731

Yang, Y.-H., Su, Y.-F., Lin, Y.-C. & Chen, H.H. (2007). Music emotion recognition: The role of individuality. In Jaimes, A. & Sebe, N. (eds.), Proceedings of the ACM International Workshop on Human-Centered Multimedia (pp. 13-21). New York: ACM. https://doi.org/10.1145/1290128.1290132

Yang, Y.-H., Lin, Y.-C., Su, Y.-F. & Chen, H.H. (2008a). A regression approach to music emotion recognition. IEEE Transactions on Audio, Speech, and Language Processing, 16(2), 448-457. https://doi.org/10.1109/TASL.2007.911513

Yang, Y.-H. & Chen, H.H. (2012). Machine recognition of music emotion: A review. ACM Transactions on Intelligent Systems and Technology, 3(3), Article 40, 1-30. https://doi.org/10.1145/2168752.2168754

Yang, X., Dong, Y. & Li, J. (2018). Review of data features-based music emotion recognition methods. Multimedia Systems, 24(4), 365-389. https://doi.org/10.1007/s00530-017-0559-4

Zhang, J.L., Huang, X.L., Yang, L.F., Xu, Y. & Sun, S.T. (2015). Feature selection and feature learning in arousal dimension of music emotion by using shrinkage methods. Multimedia Systems, 23(2), 251-264. https://doi.org/10.1007/s00530-015-0489-y