In this post I am proud to show a piece of work done in collaboration with Oren Greenberg from Kurve.

A common problem in online advertising is identifying which online ads are the most successful. For example, let’s assume that we are using 10 different online ads in order to promote a new mobile app. The ads have a different number of impressions and clicks. Which ads are the best performing ones?

Let’s say we have the following set of data:

Ad ID   Clicks   Impressions   Click-Through Rate
1          430          2530   0.169960474
2           71          3128   0.02269821
3            8          1061   0.007540057
4         1174        157711   0.007443996
5          128         18240   0.007017544
6           13          1899   0.006845708
7           13          1909   0.006809848
8           54           100   0.54
9           10           110   0.090909091
10         500          6559   0.076231133
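For reference, the click-through rates in the table above are simply clicks divided by impressions, which can be reproduced in a few lines of Python (the data is copied straight from the table; this is not part of the dashboard itself):

```python
# Clicks and impressions for ads 1-10, copied from the table above.
clicks      = [430, 71, 8, 1174, 128, 13, 13, 54, 10, 500]
impressions = [2530, 3128, 1061, 157711, 18240, 1899, 1909, 100, 110, 6559]

for ad_id, (c, n) in enumerate(zip(clicks, impressions), start=1):
    print(f"Ad {ad_id}: CTR = {c / n:.4f}")
```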

Just by eyeballing the table, it is difficult to tell which ads perform best and which do not. Some challenges are:

  • Ads with very different impression counts can have similar click-through rates (e.g. Ad 3 and Ad 4). Should we treat their performance as equal?
  • Some ads have a small number of impressions. How confident can we be in our assessment of their performance?
  • Which ad is the best overall, taking both impressions and clicks into account?

To solve this problem, Oren and I created a dashboard based on a unique Bayesian methodology. Feeding the data into this dashboard gives back these results:

[Figure: results of the Bayesian ads dashboard]

You can see that the ads are ranked by performance, from best to worst, and that each conclusion comes with a measure of confidence.

As the marketeer, you want two things:

  1. Choose the best-performing ads with a good degree of confidence.
  2. Give the promising ads with low confidence some extra impressions, until you learn whether their performance is genuinely good or just a coincidence.

So, in this case, if we had to pick out one ad, it would be Ad 1. It looks like Ad 1 is a good performer, and we are also extremely confident in that conclusion. It would also be worth displaying Ad 8 a few more times, to see whether it is as good a performer as we suspect it might be.

This approach is way more powerful than simply using the click-through rate as an indicator of performance, as it is based on a robust statistical model and takes confidence into account as well. Simply load all your ads in the required format and you’re good to go! Feel free to get in touch with any comments, questions or suggestions.


Technical comments

Ad 8, which has 54 clicks over 100 impressions, gets the best performance score (close to a 40% click-through rate — lower than the raw 54%, because the estimate is pulled towards the overall mean), but we are not confident in that conclusion, due to the small number of impressions. For Ad 9, however, which has a similar number of impressions to Ad 8 (110) but far fewer clicks (10), we are extremely confident in our conclusion that it is a decent performer. Why? Because its results do not deviate much from the mean of the rest of the ads. This makes intuitive sense: if an ad is not much different from the rest, it is easy to say with confidence that its performance is typical. However, for ads whose performance is far from the mean (either much better or much worse), we need more samples to reach a confident conclusion.
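The exact model behind the dashboard is not public, but the shrinkage behaviour described above can be illustrated with a crude empirical-Bayes beta-binomial sketch. Here the Beta prior is fitted by the method of moments from the observed per-ad CTRs (which assumes their variance is below the binomial maximum); this is an illustration of the idea, not the dashboard's actual method, and its numbers will not match the dashboard's:

```python
# Empirical-Bayes sketch: shrink each ad's CTR towards a shared Beta prior.
clicks      = [430, 71, 8, 1174, 128, 13, 13, 54, 10, 500]
impressions = [2530, 3128, 1061, 157711, 18240, 1899, 1909, 100, 110, 6559]

# Method-of-moments Beta prior from the observed per-ad CTRs.
ctrs = [c / n for c, n in zip(clicks, impressions)]
mean = sum(ctrs) / len(ctrs)
var = sum((r - mean) ** 2 for r in ctrs) / (len(ctrs) - 1)
common = mean * (1 - mean) / var - 1      # alpha + beta of the Beta prior
alpha0, beta0 = mean * common, (1 - mean) * common

for ad_id, (c, n) in enumerate(zip(clicks, impressions), start=1):
    a, b = alpha0 + c, beta0 + n - c                       # posterior Beta(a, b)
    post_mean = a / (a + b)                                # shrunken CTR estimate
    post_sd = (post_mean * (1 - post_mean) / (a + b + 1)) ** 0.5
    print(f"Ad {ad_id:2d}: raw CTR {c/n:.3f} -> posterior {post_mean:.3f} ± {post_sd:.3f}")
```

Even with this crude prior, Ad 8's posterior standard deviation comes out wider than Ad 9's, echoing the point above: an extreme result on few impressions carries more uncertainty than a typical one.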