International Journal of Autonomous Robotics and Intelligent Systems (IJARIS)

Implementation of music emotion classification using deep learning

Date Issued
2025-06
Author(s)
Qing Xiang Sow
Universiti Malaysia Perlis
Eng Swee Kheng
Universiti Malaysia Perlis
Handle (URI)
https://ejournal.unimap.edu.my/index.php/ijaris/article/view/2258/1342
https://ejournal.unimap.edu.my/index.php/ijaris
https://hdl.handle.net/20.500.14170/14639
Abstract
Music plays a crucial role in shaping emotions and experiences, making its classification an important area of research with applications in therapy, recommendation systems, and affective computing. This study develops a deep learning-based system to classify music into three emotional categories: "Angry," "Happy," and "Sad." The dataset, consisting of 22 audio files collected from YouTube, was manually labelled, segmented into 30-second clips, and augmented using pitch shifting and time stretching to enhance diversity. Features were extracted using Mel-Frequency Cepstral Coefficients (MFCC) and spectral contrast to analyse the harmonic and timbral characteristics of the audio. Three deep learning models, CNN, CNN-LSTM, and CNN-GRU, were evaluated. CNN-GRU achieved the highest weighted accuracy of 99.10%, demonstrating superior performance. Future work includes adding more emotion categories, diversifying the dataset, exploring advanced architectures like transformers, optimising hyperparameters, implementing real-time applications, and conducting user studies to assess effectiveness. This research successfully developed and evaluated a music emotion classification system, contributing to advancements in the field.
Subjects
  • CNN
  • CNN-LSTM
  • CNN-GRU
  • Deep learning
  • MFCC extraction
  • Music emotion classification
  • Spectral contrast
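A CNN-GRU classifier of the kind the study compares can be sketched in Keras. The layer sizes, input shape, and hyperparameters below are illustrative assumptions, not the authors' reported configuration:

```python
import tensorflow as tf
from tensorflow.keras import layers, models


def build_cnn_gru(n_frames=1292, n_features=20, n_classes=3):
    """Convolutional front end over time, GRU summary, softmax over 3 emotions."""
    model = models.Sequential([
        layers.Input(shape=(n_frames, n_features)),
        # 1D convolutions learn local spectro-temporal patterns
        layers.Conv1D(64, kernel_size=3, padding="same", activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.Conv1D(128, kernel_size=3, padding="same", activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        # GRU condenses the pooled frame sequence into one vector
        layers.GRU(64),
        layers.Dropout(0.3),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Swapping the GRU layer for `layers.LSTM(64)`, or removing the recurrent layer in favour of global pooling, gives the CNN-LSTM and plain-CNN variants the study evaluates alongside CNN-GRU.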

File(s)
Implementation of Music Emotion Classification using Deep Learning.pdf (797.78 KB)
Views: 1 · Downloads: 1 (as of Oct 14, 2025)