Background music recommendation model in SSRC

In this study, we propose a multi-modal music recommendation model that recommends background music for a video using the video itself and keywords. First, machine learning is used to build a joint embedding space in which meaningful distances between videos, music, and keywords can be measured. Then, when a video and keywords are entered, the system recommends the music tracks that are closest to them in this space.
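The retrieval step described above can be sketched as a nearest-neighbor search over a shared embedding space. The 2-D vectors, track names, and the `recommend` helper below are illustrative placeholders, not the study's actual encoders or data:

```python
import math

def euclidean(a, b):
    # Euclidean distance between two embedding vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recommend(query_emb, music_db, k=2):
    # Rank every track by its distance to the fused video+keyword
    # query embedding and return the k closest track IDs.
    ranked = sorted(music_db, key=lambda tid: euclidean(query_emb, music_db[tid]))
    return ranked[:k]

# Toy catalog: embeddings are assumed to already live in the joint space.
query = [0.2, 0.9]
db = {"track_a": [0.1, 1.0], "track_b": [0.9, 0.1], "track_c": [0.25, 0.85]}
print(recommend(query, db))  # nearest tracks first: ['track_c', 'track_a']
```

In a real system the query embedding would come from the trained video and keyword encoders, and the search would run over the full music DB.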


Multi-modal metric learning for background music recommendation

Video and music have very different data characteristics, so both must be mapped into a shared embedding space in which the distance between them can be measured. In this study, the models were divided according to whether an existing video-music association was used and whether keywords existed in the music DB. The embedding space is trained with metric learning, a machine-learning technique; when metric learning is not used, a continuous emotion space, the Valence-Arousal space, is used instead.
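One common way to train such a space with metric learning is a triplet loss, which pulls a video's embedding toward its matching music embedding and pushes it away from a non-matching one. The sketch below is a generic illustration of that idea, not the exact loss or embeddings used in this study:

```python
import math

def dist(a, b):
    # Euclidean distance between two embedding vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_loss(anchor, positive, negative, margin=0.5):
    # Loss is zero once the matching (positive) music embedding is at
    # least `margin` closer to the video anchor than the non-matching
    # (negative) one; otherwise the gap contributes to the loss.
    return max(0.0, dist(anchor, positive) - dist(anchor, negative) + margin)

video_emb = [0.0, 0.0]
matched_music = [0.1, 0.0]    # should end up close to the video
unmatched_music = [0.4, 0.0]  # should end up far from the video
print(triplet_loss(video_emb, matched_music, unmatched_music))
```

During training, this loss would be minimized over many (video, matching music, non-matching music) triplets so that distance in the space reflects suitability.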

Background music recommendation model using emotion space and keywords

Among the proposed models, the one that uses the emotion space and keywords together showed the best performance, and you can try background music recommendation directly at the link below.
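A minimal sketch of combining the two signals: score each track by its closeness to the query in Valence-Arousal space plus its keyword overlap with the query. The weighting `alpha` and the scoring function itself are illustrative assumptions; the study's actual fusion is not specified here:

```python
def va_keyword_score(query_va, query_keywords, track_va, track_keywords, alpha=0.5):
    # Closeness in the 2-D Valence-Arousal emotion space (1 = identical)
    va_dist = ((query_va[0] - track_va[0]) ** 2
               + (query_va[1] - track_va[1]) ** 2) ** 0.5
    # Fraction of the query keywords that the track's tags also contain
    overlap = len(set(query_keywords) & set(track_keywords)) / max(1, len(query_keywords))
    # alpha blends the emotion and keyword terms (hypothetical choice)
    return alpha * (1 - va_dist) + (1 - alpha) * overlap

# A calm piano query prefers the emotionally close, tag-matching track.
calm = va_keyword_score((0.6, 0.4), ["calm", "piano"], (0.5, 0.4), ["calm"])
rock = va_keyword_score((0.6, 0.4), ["calm", "piano"], (0.9, 0.9), ["rock"])
print(calm > rock)  # True
```

Ranking the music DB by this score and returning the top entries would yield the recommendation list.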