Many websites and data providers sell or give away free football/soccer predictions. We have not checked every one of them, but 99% of them won't give you any performance measure.
This article is not about presenting new techniques but about showing what concrete, live machine learning algorithms achieved in terms of performance during Euro 2020.
We predicted all 51 matches and recorded many bookmakers' odds for the full-time result, the so-called 1X2 market. Now it is time to analyze what the models did compared to the market. All the data used in this article can be obtained upon request.
Performance against the bookmakers
Before talking about performance, let's review the prediction data. On one side we have the predictions made by the machine learning algorithms; on the other, the predictions made by the bookmakers (the market).
The machine learning predictions¹ come from two separate models. The first (the regular model) is used for all types of competition, mainly domestic leagues, where teams are used to playing against each other.
The second (the cup model) is different: it uses player information, which allows it to build any lineup and predict the outcome of a match between two teams that have never met. This model has two versions, one computed before the official lineups are released (pre) and one after (last), which occurs around an hour before kick-off.
The predictions and odds used in this article have been computed before each match during the cup and stored in a database. As a consequence, all the presented results are out-of-sample.
The market predictions can easily be derived from bookmakers' odds using an implied-probability method. For this study, we used the multiplicative method, which is the most common.
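As a minimal sketch, the multiplicative method takes the inverse of each decimal odd and normalizes so the three probabilities sum to 1; the excess above 1 before normalization is the bookmaker's margin. The function name and the example odds below are our own, not from the article's data.

```python
def implied_probabilities(odds):
    """Convert decimal 1X2 odds to probabilities with the multiplicative
    method: invert each odd, then normalize so the probabilities sum to 1."""
    inv = [1.0 / o for o in odds]
    overround = sum(inv)  # > 1: the bookmaker's margin is baked into the odds
    return [p / overround for p in inv]

# Hypothetical home/draw/away odds of 2.10, 3.40, 3.60
probs = implied_probabilities([2.10, 3.40, 3.60])
print([round(p, 3) for p in probs])  # → [0.454, 0.281, 0.265]
```

Note that the raw inverse odds here sum to about 1.048, i.e. a margin of roughly 4.8% that the normalization removes.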
Now that we have the predictions, the next step is measuring performance. We will use two measures:
The hit ratio: the percentage of correct predictions
The log loss: the quality of the predicted probabilities
As we only have 51 matches, these performance measures may be very noisy, especially the log loss, where a single bad prediction can have a large negative impact.
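For reference, here is a sketch of how both measures can be computed for three-way 1X2 predictions. The function names and toy data are ours; the log loss is reported as a mean log-likelihood, so it is negative and closer to 0 is better, matching the sign convention of the figures quoted in this article.

```python
import math

def hit_ratio(probs, outcomes):
    """Share of matches where the most likely predicted result occurred.
    probs: list of [p_home, p_draw, p_away]; outcomes: index of the actual result."""
    hits = sum(1 for p, y in zip(probs, outcomes) if p.index(max(p)) == y)
    return hits / len(outcomes)

def log_loss(probs, outcomes):
    """Mean log probability assigned to the actual result.
    Negative by construction; a confident wrong prediction hurts a lot."""
    return sum(math.log(p[y]) for p, y in zip(probs, outcomes)) / len(outcomes)

# Toy example with three matches (home win = 0, draw = 1, away win = 2)
probs = [[0.5, 0.3, 0.2], [0.4, 0.35, 0.25], [0.2, 0.3, 0.5]]
outcomes = [0, 2, 2]
print(hit_ratio(probs, outcomes))            # → 0.6666666666666666
print(round(log_loss(probs, outcomes), 3))   # → -0.924
```

The second match shows the noise issue: the model assigned only 25% to the actual result, which drags the log loss down even though two of three picks were hits.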
The odds data were collected from several bookmakers up to a maximum of 5 minutes before kick-off. We have a total of 28 bookmakers, although not all of them were available for every game. For instance, we have odds for Marathonbet (36 games), Pinnacle (36 games), and Betway (43 games) recorded in the last 5 minutes before the match started.
We take the average odds and turn them into probabilities using the multiplicative implied-probability method to get the market predictions. Then we compute the performance measures for each model and for the bookmakers. We also added the consensus, which is the number of times the model and the bookmakers agreed on the most likely result.
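The consensus can be sketched as a simple agreement count between the model's and the market's most likely result. The function name and data below are illustrative, not the article's actual predictions.

```python
def consensus(model_probs, market_probs):
    """Number of matches where model and market pick the same most likely result."""
    return sum(1 for m, b in zip(model_probs, market_probs)
               if m.index(max(m)) == b.index(max(b)))

# Two hypothetical matches: they agree on the first pick only
model = [[0.5, 0.3, 0.2], [0.2, 0.3, 0.5]]
market = [[0.45, 0.3, 0.25], [0.4, 0.3, 0.3]]
print(consensus(model, market))  # → 1
```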
First, we observe that the bookmakers predicted 27 results correctly for a log loss of -0.993. The best model is the cup model updated with the last lineup, with 26 correct predictions, 2 of which differed from the bookmakers'. The models perform really well even if they did not beat the bookmakers on these measures.
But, as shown in our previous article, beating the bookmaker is not just a prediction game. Looking at the model performances in terms of profit generated against the bookmakers can tell another story.
Did machine learning beat the books?
Beating the bookmaker does not mean making better or equal predictions; it means making money in the long run, using your model's probabilities as a measure of confidence. Unfortunately, the long run goes well beyond 51 matches. Still, we can compute the profit and loss that would have occurred.
As before, we use the average odds available 5 minutes before the match starts. We allowed a maximum of 10€ invested per match, with the bet size given by the model probability, which reflects our confidence in the bet. In addition, a bet is only taken if the model probability times the odds is greater than 1. The realized profit and loss are shown in the next figure.
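The staking rule described above can be sketched as follows. The condition `model_prob * odds > 1` means the model sees positive expected value at the quoted odds; how exactly the bet size "is given by the model probability" is not fully specified, so we assume stake = model probability × the 10€ cap, which is one plausible reading rather than the authors' exact rule.

```python
MAX_STAKE = 10.0  # euros, the per-match cap from the article

def bet_pnl(model_prob, odds, won):
    """Return (stake, profit) in euros for one potential bet.
    Bet only when model_prob * odds > 1 (positive expected value).
    Stake scales with confidence (assumption: stake = model_prob * MAX_STAKE)."""
    if model_prob * odds <= 1:
        return 0.0, 0.0  # no value at these odds: no bet
    stake = model_prob * MAX_STAKE
    profit = stake * (odds - 1) if won else -stake
    return stake, profit

# Example: the model gives 55% to a result quoted at 2.1 (0.55 * 2.1 = 1.155 > 1),
# so it stakes 5.50€; a win returns 6.05€ profit, a loss costs the 5.50€ stake.
stake, profit = bet_pnl(0.55, 2.1, won=True)
```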
Each model was able to generate profit during Euro 2020. The competition was very uncertain, with 50% of the offered odds higher than 2.6 on each result. So even though our models did not beat the bookmakers in terms of pure performance, they were able to generate profit: model performances were in line with the market (hit ratio), and there is still useful information in the probabilities.
We also note that the cup model updated with the last lineup generated more profit than the pre-lineup version, showing that the lineup information is useful.
The next table (profit and stake in €) shows that the regular model outperformed the others, even though the cup model had the best log loss.
The stake is the total amount bet by the model (in €). The yield is the return as a percentage of the amount invested, i.e. profit divided by stake. For instance, the regular model would have generated 79.5 euros for 282.6 euros invested, with a success rate of 52%.
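The yield can be checked directly from the regular model's numbers:

```python
profit, stake = 79.5, 282.6   # regular model, euros (from the table above)
yield_pct = 100 * profit / stake  # yield = profit / stake
print(f"{yield_pct:.1f}%")    # → 28.1%
```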
There is no clear evidence that achieving the best log loss or hit ratio ensures that a model will generate profit.
Euro 2020 was very exciting in terms of prediction. Bookmakers and models had to deal with many new variables for an international competition: venues spread across the continent, the number of fans allowed in the stadiums, temperatures, COVID restrictions…
These created a lot of uncertainty around matches. Nevertheless, both the bookmakers and the machine learning models still delivered decent performances.
In this article, we presented the out-of-sample results of two machine learning models. Although their performance was slightly below the market's, they generated profit over the 51 matches of the tournament.
The material contained in this article is intended to inform and educate the reader and cannot be taken as advice or a recommendation. Moreover, the fact that our models generated profit during Euro 2020 does not mean they will do so in the future or for any other competition.
¹ Machine learning predictions were provided by our API.