SCIENTIFIC NEWS AND
INNOVATION FROM ÉTS

The Science Behind Personalized Music Recommendations


Marie-Anne Valiquette
Marie-Anne Valiquette obtained a Bachelor's degree in Mechanical Engineering at the École de technologie supérieure (ÉTS) in Montreal. She lives in Silicon Valley, California where she studies artificial intelligence through online platforms like Udacity and deeplearning.ai.

How are music recommendations made?

The featured image is from Pixabay.com. Public domain.

Music is a personal experience, and describing what you love or dislike about a song or an artist is difficult. This makes finding a new favourite song, or discovering new artists, nearly impossible.

Online Music Curation

Multiple streaming music companies are trying to solve this problem with recommendation engines.

Songza logo representation

Figure 1 Songza logo

Back in the 2000s, Songza, now integrated into Google Play Music, started the online music curation scene by creating playlists for users through manual curation. It hired “music experts” who chose the songs for the playlists users listened to. Beats Music would later use the same strategy to create playlists. But there is a fundamental problem with this approach: it does not take individual tastes into account.

Pandora logo representation

Figure 2 Pandora logo

Pandora was also one of the original companies in the music curation industry. It used a slightly more advanced approach: human analysts manually selected song attributes, choosing a group of descriptive words for each track and tagging it accordingly. Pandora’s algorithm would then simply filter those tags to build playlists of similar-sounding music.
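The tag-filtering idea can be sketched in a few lines. This is a toy illustration, not Pandora’s actual system: the track names, tags, and overlap threshold below are all invented for the example.

```python
# Toy sketch of tag-based music filtering: each track carries a set of
# hand-picked descriptive tags, and a playlist is built from tracks whose
# tags overlap a seed track's tags. All names and tags are invented.

TRACKS = {
    "Track A": {"acoustic", "mellow", "folk"},
    "Track B": {"mellow", "folk", "vocal"},
    "Track C": {"electronic", "upbeat", "dance"},
    "Track D": {"acoustic", "vocal", "mellow"},
}

def similar_tracks(seed, tracks, min_shared=2):
    """Return tracks sharing at least `min_shared` tags with the seed track."""
    seed_tags = tracks[seed]
    return sorted(
        name for name, tags in tracks.items()
        if name != seed and len(tags & seed_tags) >= min_shared
    )

# Build a small "sounds like Track A" playlist.
playlist = similar_tracks("Track A", TRACKS)
```

With the tags above, “Track B” and “Track D” each share two tags with “Track A”, so both land in the playlist, while the electronic “Track C” is filtered out.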

Spotify logo representation

Figure 3 Spotify logo

A music intelligence agency from the MIT Media Lab called The Echo Nest developed a different approach to personalized music. The Echo Nest used algorithms to analyze the audio and textual content of music, allowing it to perform music identification, personalized recommendations, playlist creation and analysis. Spotify bought the company in 2014.

Last.fm logo representation

Figure 4 Last.fm logo

Finally, Last.fm used a different technique called collaborative filtering to recommend new songs or bands based on similarities between users.

Every major streaming service has developed its ability to learn user tastes and recommend the right song at the right time. However, one major company is ahead of the game with its personalized music recommendations: Spotify, with Discover Weekly. Discover Weekly is a playlist of songs that automatically appears every Monday in each Spotify user’s account. It analyzes the user’s listening history, focusing on recently played music, and compares that insight with other Spotify users. Scanning more than 2 billion playlists, the system finds tracks that commonly appear alongside the user’s music, then groups those tracks into a new, 30-song personalized playlist. How does Spotify build such a list for each person every week?

How does Spotify's Discover Weekly work?

Figure 5 Simplified flow chart: creation of Spotify’s Discover Weekly

Spotify combines three of the best strategies used by other services to create its own uniquely powerful discovery engine:

  1. Collaborative Filtering models, which analyze listener behaviour.
  2. Natural Language Processing (NLP) models, which analyze texts.
  3. Audio models, which analyze raw audio tracks.

 

Collaborative Filtering Models

Collaborative filtering is most commonly seen in Amazon’s “customers who bought this item also bought…” feature or Netflix’s thumbs-up/thumbs-down ratings. However, unlike Netflix or Amazon, Spotify has no feature with which users explicitly rate their music. Instead, Spotify relies on implicit feedback: stream counts of the tracks users listen to, whether they saved a track to their own playlist, or whether they visited the artist’s page.
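A minimal sketch of collaborative filtering on implicit feedback might look like the following. The user names, play counts, and nearest-neighbour approach are invented for illustration; production systems use matrix factorization over billions of interactions rather than a hand-made table.

```python
import math

# Minimal sketch of user-based collaborative filtering on implicit
# feedback (play counts). The tiny user x track table is invented.

PLAYS = {
    "alice": {"song1": 10, "song2": 5},
    "bob":   {"song1": 8,  "song2": 4, "song3": 6},
    "carol": {"song4": 9},
}

def cosine(u, v):
    """Cosine similarity between two sparse play-count vectors."""
    shared = set(u) & set(v)
    dot = sum(u[t] * v[t] for t in shared)
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def recommend(user, plays):
    """Suggest tracks played by the most similar other user."""
    others = [(cosine(plays[user], plays[o]), o) for o in plays if o != user]
    _, nearest = max(others)
    return sorted(t for t in plays[nearest] if t not in plays[user])

recommendations = recommend("alice", PLAYS)
```

Here “alice” and “bob” have nearly identical listening patterns, so alice is recommended “song3”, the one track bob plays that she has never streamed.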

Natural Language Processing (NLP) Models

Spotify constantly searches the web for blogs and articles about music to categorize what people are saying about specific artists and songs. Basically, it keeps track of which adjectives and phrases are used to describe a song, and which other artists and songs are discussed alongside it. The algorithm then assigns a weight to each term used to describe the music or the artist, and from those weights estimates the probability that someone would describe the piece or the artist in those terms.

As in collaborative filtering, the NLP model uses these terms and weights to create a vector representation of each song. Comparing the vectors of two musical pieces then determines whether they are similar.
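The term-vector comparison can be sketched as below. The descriptive terms and their weights are made up for the example; in practice the weights would come from analyzing web text at scale.

```python
import math

# Sketch of the NLP approach described above: each song becomes a vector
# of weighted descriptive terms (weights here are invented), and two songs
# are compared via the cosine of their term vectors.

SONG_TERMS = {
    "song_x": {"dreamy": 0.8, "ambient": 0.6, "slow": 0.4},
    "song_y": {"dreamy": 0.7, "ambient": 0.5, "vocal": 0.3},
    "song_z": {"aggressive": 0.9, "fast": 0.8},
}

def term_similarity(a, b):
    """Cosine similarity between two weighted term vectors."""
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm_a = math.sqrt(sum(w * w for w in a.values()))
    norm_b = math.sqrt(sum(w * w for w in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

sim_xy = term_similarity(SONG_TERMS["song_x"], SONG_TERMS["song_y"])
sim_xz = term_similarity(SONG_TERMS["song_x"], SONG_TERMS["song_z"])
```

Two songs described as “dreamy” and “ambient” score much higher together than either does against an “aggressive, fast” track, which shares no terms at all.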

Raw Audio Models

Unlike the two previous models, raw audio models can handle brand-new songs that have no listening history or press coverage yet. Spotify uses convolutional neural networks, the same technology used in facial recognition, adapted to process audio data instead of pixels. By analyzing the audio itself, Spotify understands a song’s characteristics and is able to describe them. It classifies similarities between sounds and songs, and can therefore recommend songs that users might enjoy based on their own listening history. It is a clever strategy, since we all relate to different styles of music. Why? Because the brain builds musical memory templates from past musical experiences, then releases dopamine, the body’s feel-good drug, when we hear music similar to what we have heard before.
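The core operation inside a convolutional network can be illustrated with a toy example: a small filter slides along an audio feature sequence and responds where a pattern occurs. Everything here is invented for illustration; real models learn thousands of filters over spectrograms rather than using a hand-set filter on a loudness curve.

```python
# Toy illustration of the convolutional idea behind raw audio models:
# a small filter slides along an audio feature sequence (a made-up
# loudness curve) and responds where its pattern appears. Real systems
# learn their filters from spectrogram data; this one is hand-set.

def convolve1d(signal, kernel):
    """Valid-mode 1-D cross-correlation in pure Python."""
    k = len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(k))
        for i in range(len(signal) - k + 1)
    ]

loudness = [0.0, 0.1, 0.5, 1.0, 0.9, 0.1, 0.0]  # a burst in the middle
rising_edge = [-1.0, 0.0, 1.0]                  # responds to rising loudness

response = convolve1d(loudness, rising_edge)
peak = max(range(len(response)), key=lambda i: response[i])
```

The filter’s response peaks exactly where the loudness climbs fastest; stacking many such learned filters lets a network recognize timbre, tempo, and other characteristics directly from audio.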


