
Analysis of model degradation in customer-preference prediction over time, and ways to fix it

Decision tree algorithm

One of the machine learning approaches that we use at SO1 to predict consumer preferences makes use of decision tree algorithms (models). We use features based on consumer preferences, the time intervals since previous product purchases, the time of year, and other signals. The decision tree algorithm predicts, with high accuracy, which products consumers will need the following week, as well as which products they have never bought could be offered to them. It also predicts which products users will want a discount on.
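For intuition, here is a minimal sketch of such a model in Python with scikit-learn. The file name, feature columns, and label are illustrative assumptions, not our production feature set:

```python
# Minimal sketch: predict whether a customer buys a given product next week.
# The CSV file and all column names below are hypothetical examples.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("customer_product_weeks.csv")  # one row per (customer, product, week)
features = [
    "days_since_last_purchase",    # time interval since the previous purchase
    "avg_purchase_interval_days",  # typical repurchase cycle for this product
    "week_of_year",                # seasonality signal
    "purchases_last_4_weeks",      # recent activity
]
X, y = df[features], df["bought_next_week"]  # assumed binary label

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = DecisionTreeClassifier(max_depth=6, min_samples_leaf=100)
model.fit(X_train, y_train)

# AUC is the quality metric referred to later in this article
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```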

There are two ways we receive data from our clients. "Online mode" means we receive data continuously and improve models on the fly (mostly algorithms and models based on deep reinforcement learning). "Periodic mode" means we receive data from time to time and retrain models once a specific amount of data has arrived. The drawback of retraining models is that the process is expensive: it requires computing resources and sometimes the attention of specialists.

So how often should the algorithms be retrained? To maintain high-quality models for our clients, algorithms should ideally be retrained with each data delivery. On the other hand, to optimize costs, it should be done as rarely as possible.

To strike a balance between adapting to data changes and keeping retraining costs under control, and to answer our question, we first need to understand why deterioration occurs over time.

Model Degradation

‘Model degradation’ means the deterioration of a model's quality metrics over time. To see how easily this phenomenon can be observed, take a dataset, train the model on the first four weeks, and then calculate its quality in each of the following weeks.

To simplify the explanation, we calculate quality over equal time periods – always per week. The result is the following graph (the red one). As we can see, quality gradually falls over time: the model is clearly degrading.

Now let's take data from another client and another model with different features. Here quality fluctuates, but there is no apparent downward trend (the green graph).

[Boxplot: weekly model quality for the two clients – the red series trends downward, the green one does not]
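A rough sketch of how such a per-week degradation curve can be computed, reusing the hypothetical columns from the sketch above (an illustration, not our actual evaluation pipeline):

```python
# Sketch: train once on the first four weeks, then score the frozen model
# on every following week. File and column names are assumed, as above.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

df = pd.read_csv("customer_product_weeks.csv")  # assumed to contain a 'week' column
features = ["days_since_last_purchase", "avg_purchase_interval_days",
            "week_of_year", "purchases_last_4_weeks"]

train = df[df["week"] <= 4]
model = DecisionTreeClassifier(max_depth=6, min_samples_leaf=100)
model.fit(train[features], train["bought_next_week"])

weekly_auc = {}
for week, test in df[df["week"] > 4].groupby("week"):
    proba = model.predict_proba(test[features])[:, 1]
    weekly_auc[week] = roc_auc_score(test["bought_next_week"], proba)

print(weekly_auc)  # plotted per week, this gives a curve like the red one above
```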

Why does model quality deteriorate over time?

Degradation may depend on the following:

  1. Customers’ habits, natural anomalies, one-time events. For many product categories, user preferences change over time, and the decision tree algorithm cannot automatically adjust to these changes, unlike online models based on reinforcement learning.
  2. Improving quality at the expense of generalization ability. Some features based on time intervals are beneficial for prediction but may be unstable over time.
  3. Data quality. Sometimes, for various reasons, the condition of the data coming from retailers worsens (e.g. missing information, time gaps, data misalignment), which can lead to a significant decrease in prediction quality. To fix this, the model needs to be retrained with the new realities in mind.

Solution for Model Degradation Problem

Let's do the following experiment.
Week by week, we add data to the training sample. For each dataset (from the 1st week to the Nth week) we train 10 models (only one needs to be trained, but we use 10 for statistical purposes). Then we measure the quality of these models on each week. As a result, we get the following matrix:

[Figure: quality matrix – rows are training windows (weeks 1 to N), columns are test weeks, cells are mean AUC]

For example, the element at the intersection of the 6th row and the 8th column, 0.9 (the AUC metric), is the average quality in the 7th week of models trained on data from weeks 1–5.
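The experiment can be sketched in a few lines. Here the 10 models per training window are obtained by bootstrap resampling of the training rows, which is our own assumption about how to introduce variation; file and column names are hypothetical as before:

```python
# Sketch of the quality-matrix experiment: grow the training window week by
# week, train several models per window, and record the mean AUC on every
# later week. Rows of the matrix = last training week, columns = test week.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

df = pd.read_csv("customer_product_weeks.csv")
features = ["days_since_last_purchase", "avg_purchase_interval_days",
            "week_of_year", "purchases_last_4_weeks"]
weeks = sorted(df["week"].unique())
n_models = 10  # several models per window, for statistical stability

quality = pd.DataFrame(index=weeks, columns=weeks, dtype=float)
for last_train_week in weeks[:-1]:
    train = df[df["week"] <= last_train_week]
    models = []
    for seed in range(n_models):
        # bootstrap resample of the training rows (an assumed source of variation)
        boot = train.sample(frac=1.0, replace=True, random_state=seed)
        models.append(DecisionTreeClassifier(max_depth=6, min_samples_leaf=100)
                      .fit(boot[features], boot["bought_next_week"]))
    for test_week in [w for w in weeks if w > last_train_week]:
        test = df[df["week"] == test_week]
        aucs = [roc_auc_score(test["bought_next_week"],
                              m.predict_proba(test[features])[:, 1])
                for m in models]
        quality.loc[last_train_week, test_week] = np.mean(aucs)

print(quality.round(2))
```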

There is an interesting pattern in this matrix. Along any single row, we see quality degrade as the weeks go by. Along any single column, we see quality improve as more data is added to the training set. At the top of a column quality improves because the initial data is insufficient; further down it improves again because the newly added data lies closer to the test week in the timeline. The most significant increase occurs when the test week itself is added to the training data.
The simplest illustration of how model quality improves as the training data approaches the test week can be seen for the last week:

[Figure: regression plot – quality (AUC) on the last week vs. the last week included in training]
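In terms of the sketch above, this curve is simply the last column of the quality matrix:

```python
# Continuing the sketch above: AUC on the final week as a function of the
# last week included in training.
last_week = weeks[-1]
print(quality[last_week].dropna())  # index: last training week, value: mean AUC
```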

To summarize: depending on the task, model degradation may or may not occur. If a retailer's products are affected by seasonal fluctuations and/or fashion trends, the model should be regularly re-checked and retrained.

Unlike hard-coded algorithms based on strict rules, machine learning algorithms can be retrained quickly (offline or online). This allows us to respond to changes in customer behavior at run-time and avoid losing money to seasonal shifts. In an industry where every process is time- and quality-sensitive, such knowledge-intensive algorithms have an advantage over all others.


PS: SO1 is one of the driving forces of retailer digitalization. We have created a very powerful AI for retail that is capable of personalizing promotions for users in real time and across devices. The SO1 Engine draws on the retailer's entire product portfolio, automatically selects the right products for each individual consumer, and adjusts discounts so that revenue, profit, or consumer satisfaction is maximized.

Dmitrii Ilin

Ph.D. data scientist and machine learning engineer with significant industry experience, publications in scientific journals, winner of international competitions, and a Kaggle master.
