Predictions
Within KIT's Predictions tool there are various contact Insights that help you understand how your contacts behave today and predict how they are likely to behave in the future. The purpose of these Insights and Predictions is to support your organization in making data-driven decisions based on how your contacts interact with you, now and going forward.
Accuracy
Predictions are only useful if they are highly accurate. Accuracy can be understood as the quality or state of being correct or precise. To measure how 'correct' KIT's predictions are, KIT tests the machine learning models used to produce them. Think of these models as the robot behind the scenes analyzing your data and making predictions (e.g. that someone will donate).
To test the accuracy of a model's predictions, KIT runs historical data through the model and compares the model's predicted outcomes to what actually occurred within that historical data set.
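As a simplified illustration of this idea (this is not KIT's actual pipeline; the features, model, and library choices below are placeholders), a backtest of this kind can be sketched in Python: hold out part of the historical data, let a model predict on it, and compare the predictions to the recorded outcomes.

```python
# Illustrative sketch only -- KIT's real pipeline, features, and models are not public.
# The general idea: hold out historical records, predict on them,
# and compare the predictions to what actually happened.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))                          # hypothetical contact features
y = (X[:, 0] + rng.normal(size=1000) > 1).astype(int)   # 1 = donated, 0 = did not

# Hold out part of the historical data so the model is tested on records it never saw.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
predicted = model.predict(X_test)

# "Accuracy" here is simply the share of held-out records the model got right.
print(f"accuracy: {accuracy_score(y_test, predicted):.2%}")
```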
Precision and Recall
Once the model produces predictions, KIT measures the accuracy of those predictions using precision and recall.
- Precision: Measures how many of the model's predictions within a particular 'class' were correct. For example, of all the people the model predicted would donate, how many actually donated.
- Recall: Measures how many of the actual outcomes in a class the model managed to find across the entire data set. For example, if 6 people donated in total and the model correctly identified 5 of them, it found 5 out of 6 and missed 1.
To put this into context, let's take a look at the example above. This model predicted that 5 people would not donate and that 5 people would donate. If we assume green represents someone who donated, we can see that 3 of the 5 people expected to donate actually did. Looking at the entire data set, 4 people donated in total, and 3 of them were placed in the correct class (donated). In terms of accuracy, this tells us:
- Precision (how many the model guessed right within the predicted 'will donate' class) = 3/5 or 60%
- Recall (how many of the actual donors the model found within the entire data set) = 3/4 or 75%
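The same arithmetic can be verified with a few lines of code. The sketch below simply encodes the ten contacts from the example above (1 meaning 'donated') and computes both metrics by hand:

```python
# The ten contacts from the example: the model predicts 5 donors and 5 non-donors.
predicted = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
# What actually happened (1 = donated): 3 of the predicted donors donated,
# and 1 person the model wrote off donated anyway -- 4 donors in total.
actual    = [1, 1, 1, 0, 0, 1, 0, 0, 0, 0]

true_positives      = sum(p == 1 and a == 1 for p, a in zip(predicted, actual))
predicted_positives = sum(p == 1 for p in predicted)
actual_positives    = sum(a == 1 for a in actual)

print(f"precision: {true_positives / predicted_positives:.0%}")  # 3/5 -> 60%
print(f"recall:    {true_positives / actual_positives:.0%}")     # 3/4 -> 75%
```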
Donor Readiness Scenario
Now that you are familiar with how KIT measures the accuracy of models, let’s take a look at one of KIT’s prediction models to bring this to life. The ‘Donor Readiness’ prediction measures how ready someone is to donate in the next 2 weeks.
In this case, it was important to focus on improving the 'recall' metric, because we want to catch as many likely donors as possible in the 'likely to donate' class (think of moving the white bar in the image below to the left, so more real donors fall inside it). The higher the recall, the fewer real donors the model misses. If we focused too heavily on improving precision instead, there would be a higher chance of excluding some people who are in fact likely to donate.
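The 'white bar' is essentially a decision threshold on the model's scores. The sketch below uses made-up scores (not KIT's actual model outputs) to show how lowering that threshold, i.e. moving the bar left, raises recall at the cost of precision:

```python
# Illustrative only: ten hypothetical contacts with a model score and a real outcome.
# Lowering the threshold pulls more people into the 'likely to donate' class,
# which raises recall but tends to lower precision.
scores  = [0.95, 0.90, 0.80, 0.70, 0.60, 0.50, 0.40, 0.30, 0.20, 0.10]
donated = [1,    1,    0,    1,    0,    1,    0,    0,    0,    0]  # 4 real donors

for threshold in (0.75, 0.55, 0.35):
    flagged = [s >= threshold for s in scores]
    tp = sum(f and d for f, d in zip(flagged, donated))
    precision = tp / sum(flagged)
    recall = tp / sum(donated)
    print(f"threshold {threshold:.2f}: precision {precision:.0%}, recall {recall:.0%}")
```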
With that in mind, how did KIT’s model perform on the 2.3 million data points it was trained on?
- Accuracy 91%
- Recall 60%
- Precision 62%
Recall tells us that, out of everyone who actually donated in the next two weeks, 60% were correctly placed in the 'likely to donate' class; the model missed the remaining 40%.
Precision tells us that, out of everyone the model placed in the 'likely to donate' class, 62% actually went on to donate. This level of precision is considerably higher than the industry average for conversion rates.
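As a rough back-of-the-envelope check (the published figures are rounded, so the resulting counts are approximate), the three reported metrics are enough to reconstruct a consistent confusion matrix for the 2.3 million data points:

```python
# Back-solve an approximate confusion matrix from the rounded, published metrics.
#   accuracy  = (TP + TN) / N
#   precision = TP / (TP + FP)
#   recall    = TP / (TP + FN)
N, accuracy, recall, precision = 2_300_000, 0.91, 0.60, 0.62

# All errors together: FP + FN = (1 - accuracy) * N.
# Since FP = TP * (1 - precision) / precision and FN = TP * (1 - recall) / recall,
# TP follows directly.
tp = (1 - accuracy) * N / ((1 - precision) / precision + (1 - recall) / recall)
fp = tp * (1 - precision) / precision
fn = tp * (1 - recall) / recall
tn = accuracy * N - tp

print(f"TP ~ {tp:,.0f}, FP ~ {fp:,.0f}, FN ~ {fn:,.0f}, TN ~ {tn:,.0f}")
# Implies roughly 270,000 contacts (about 12% of the data set) actually donated
# in the two-week window.
```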
Conclusion
To continuously improve the accuracy of the models used to generate predictions, the models are retrained on a bi-weekly basis. The more data that runs through KIT's system, the more accurate the models become, as they continually refine their ability to predict outcomes.