Taking a leap into the deep learning space

How we're getting ready to offer personalised recommendations

The eighth Recommender Systems Amsterdam meetup took place at FD Media Group’s office in October. The venue for this media-themed meetup included a stage and bar suited for an intimate concert, making it a great place for an evening full of talks, Q&A, networking, drinks and Dutch finger-food. 

Our Head of Bibblio Labs, Robbert van der Pluijm, had the opportunity to speak about the scale challenge we're solving and share the first findings of our venture into deep learning. 

Here’s a transcript of his talk, Scaling a recommendation service - a threefold story:


A threefold story

"Hello, I'm Robbert van der Pluijm. Thank you for having me. I want to start off on a personal note: I went through my Twitter feed today and discovered an interesting study published by SAGE Journals recently. The study showed that people who consumed a low dose of alcohol had significantly better observer-ratings for their second language, specifically better pronunciation, compared with those who did not consume alcohol. The researchers called the phenomenon 'Dutch courage'. Let's embrace it tonight with a couple of beers.

Bibblio Labs at RecSys Amsterdam

"My talk is standing in between you and those beers, so tonight's menu will be snappy. First I'll share a bit about Bibblio. Then I'll tell you about what we've worked on, what we're doing right now and what we want to solve in the nearby future. So, a threefold story.

The villain

"I have a man crush on Christoph Waltz, so kicking off with a quote of his seemed like a good idea: "Well, you need the villain. If you don't have a villain, the good guy can stay home." In our case the villain being the content recommendations networks everyone loves to hate. For example, this morning, I pulled this Revcontent module [below] from The Atlantic. It figured out I was in The Netherlands, but other than that it offered a horrible experience. These recommendations try to entice the reader to click on irrelevant suggestions to send them to clickbait sites.

The Revcontent module pulled from The Atlantic

"We at Bibblio want to offer a totally different kind of experience. Our recommendation service helps publishers build genuine audience engagement by displaying relevant suggestions back to their own content.

The good guy

"A little bit about how our service works. Our customers, seen here: online publishers, libraries and course platforms, push their content to our API platform. Their content gets enriched and the customer requests the recommendations from an API endpoint.

Bibblio's system in a nutshell

"Publishers can retrieve recommendations from each of our algorithms independently. Our reason for adopting this modular format is to restore transparency and control to our clients, both important values for Bibblio. Publishers either use our pre-built widgets or build their own. The customisable, pre-built module brings us tracking data; when publishers have their own widget in place, they can ping us the user interaction data so our machine can learn.

Rethinking popularity

"Time for the first part of the threefold story. Something we've worked on already: building a local popularity recommender.

"The local popularity recommender draws upon our first algorithm we created that is intended to leverage interaction data. Before this, we were only able to draw upon document content, so we’re pretty excited. Thankfully, it turned out to be quite straightforward to implement, so we were able to receive quick feedback from our clients and their end-users.

Why did we build a local popularity recommender?

"Building the local popularity recommender was a stab at creating a better way to show popular content to end-users. Ubiquitous modules like 'Most popular' are often static and offer content which is largely irrelevant for the individual end-user at that time. We wanted to rethink popularity for content websites.

"The problem with interaction data is that you have to deal with a learning period where early trends can be misleading. The worst thing you can do is to determine popularity and then not leave room to adapt. You should give yourself the chance to explore the possibility that other — old and new — items have a higher click rate. That insight led us to investigate multi-armed bandit algorithms.

Using multi-armed bandit algorithms

"With a multi-armed bandit - who we call Harry by the way - each ‘arm’ could be any one of the content items from the entire corpus we’re recommending from. This means that initial recommendations using this technique will appear random and this may degrade the quality of the end-user experience. So here comes TF-IDF to help us out. We use TF-IDF scores to limit the pool of ‘arms’: the set of arms from which the algorithm can select its action in each learning instant is restricted to the top performing recommendations that were obtained using the TF-IDF algorithm.

"Periodically we update our bandit algorithm using fresh interaction data that has become available since the last learning phase. As the experiment progresses, we learn more and more about the relative payoffs, and so do a better job in choosing good recommendations. We always incorporate arms that explore new items which come into the catalogue and as audience behaviour changes.

Streaming to the rescue

"Time for the second part of the threefold story. This includes something we are working on putting into production right now: a streaming architecture.

Challenges of scale and catalogue updating

"First the challenges we are aiming to solve with this. With TF-IDF vectorisation you have the complexity challenge of performing pairwise comparisons of different embeddings for the content items in your catalogue. We have quite a tight on-boarding pipeline for our clients, and we basically throttle that pipeline because we have to wait for a few hours for these pairwise comparisons to be completed. That’s not viable for some of our clients.

The scale challenge

"This is connected to the second challenge, which we call the update challenge. Some of our clients, for example The Day, who provide an online news service for schools, publish new articles on a daily basis. They want recommendations for these ‘fresh’ content items as quickly as possible. With our pre-streaming architecture we need to wait an hour or so before the recommendations for the freshly ingested items are available.

The challenge of catalogue updating

"We also have other types of clients, for example the Canadian Electronic Library [desLibris], which is not ingesting books on an hourly or even daily basis. In this case we can afford to do a full or bulk index of the entire corpus overnight, or when a reasonable number of content items have accumulated in our system.

"This is our streaming architecture [below]. We have fresh items coming in, which get ingested and enriched. We’ve plugged in our ingestion into Watson Natural Language Understanding to produce metadata. We also present it as an option for clients to open for viewing by the user. Those items are then queued in order to be indexed into Redis. We have a butler handler process that determines which streaming mode to activate, and this draws upon optimisation heuristics. The heuristics are based on the size of the corpus, the number of new items, the nature of the client among other things. This determines whether we should perform a bulk index, which includes an all-against-all comparison and introduces quadratic complexity, or go down the streaming route.

Our Streaming Architecture

"We can perform a full vectorisation, meaning we recompute all the TF-IDF vectors and the dictionary. Otherwise, we have the option of performing a partial vectorisation: we still vectorise the items, but we load the saved dictionary back in rather than rebuilding it. And as I mentioned, in both streaming cases we perform a partial distance matrix computation. Finally, we push all the recommendations into Redis.
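In scikit-learn terms (again a stand-in for Bibblio's actual stack), a previously fitted vectoriser plays the role of the saved dictionary, and only the new-against-all block of the distance matrix needs computing:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

existing_docs = ["older article one", "older article two"]
new_docs = ["freshly ingested article"]

# Full vectorisation: fit the vocabulary (the "dictionary") and vectors.
vectorizer = TfidfVectorizer()
existing_matrix = vectorizer.fit_transform(existing_docs)

# Partial vectorisation: reuse the fitted vocabulary for new items only.
new_matrix = vectorizer.transform(new_docs)

# Partial distance matrix: new items against the existing corpus
# (n_new x n_existing) instead of all-against-all (n_total x n_total).
partial_similarities = cosine_similarity(new_matrix, existing_matrix)
```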

Taking a leap into the unknown

 "And finally, time for the third part of the threefold story. I'd like to share a bit about our recently-started venture into deep learning. 

"Last August, Bibblio's CEO Mads Holmen and Lead Data Scientist Dr. Mahbub Gani made their way to Como, Italy, to attend the annual ACM Recommender Systems conference. They picked up some interesting research on deep learning. Worth noting is that YouTube's recommendations have been driven by deep learning since last year. It's still very much an area to explore, and not proven in many instances for recommenders yet.

Research in the recommendation systems community

"In this slide [below] we have listed the potential of deep learning over older machine learning. Embracing deep learning could enable us to craft a holistic feature-set that reduces manual intervention. This approach could allow Bibblio to continuously improve our results as we scale further, whereas the performance gains of for example collaborative filtering approaches frequently tail off after initial training.

From machine learning to deep learning

"Getting into deep learning isn't easy. It requires a vast amount of data, a robust data pipeline for handling both signals from content and readers, immense computing power and clear values defined for the system to optimise for. Using Bibblio’s existing data pipelines, nimble streaming architecture and ethics and design principles, we hope to be able to overcome these barriers.

Examples of signals in deep learning

"We've decided to launch a spike into this space using the Google Cloud Platform and TensorFlow. Here [above] you see a list of signals we can start to work with. The focus of our system will be to create a better web for end-users and publishers. This means our deep learning based recommender system should maximise for user satisfaction and retention while reaching a certain level of revenue and content coverage/diversity. We'll work closely with our clients and end-users to develop more concrete ideas.

"This leap into deep learning is an exciting, novel and risky endeavour. The work is on the way. Shall we agree I come back in a couple of months and talk about the results?"
