Predicting Horse Racing With ML
Authors: Yide LIU, Zuoyang WANG
Supervisor: Prof. Michael LYU
LYU 1703
Department of Computer Science and Engineering, Faculty of Engineering
Declaration of Authorship
We, Yide LIU and Zuoyang WANG, declare that this report, titled "Predicting Horse Racing Result with Machine Learning", and the work presented in it are our own.
This report is a collaborative work of both authors. Yide is responsible for the Introduction, Data Preparation and Conclusion chapters, and also provides the model performance metrics and figures. Zuoyang is responsible for the Model Design and Results chapters, contributing to the data investigation and figures. Both authors share their views and understandings in the Discussion section and revised the report for cross-checking.
Abstract
Neural networks with a large number of parameters are very powerful machine learning systems. While neural networks have already been applied to many sophisticated real-world problems, their power in predicting horse racing results has not yet been fully explored. Horse racing prediction is closely related to betting, and net gain is considered the benchmark of performance. This project offers an empirical exploration of the use of neural networks in horse racing prediction. We construct an augmented racing record dataset and a race-day weather dataset, and examine architectures at a wide scale. We show that neural networks can identify the relationship between horses and weather, and that our models achieve state-of-the-art or competitive results. Comparisons are provided against traditional models, such as win-odds betting and performance-based betting, as well as the learning models of LYU1603.
Acknowledgements
We would like to express our special gratitude to our supervisor Prof. Michael Lyu, as well as our advisor Mr. Edward Yau, who gave us the golden opportunity to work on this wonderful project on the topic "Predicting Horse Racing Result with Machine Learning". The project also led us to do a great deal of research, through which we came to know many new things, and we are really thankful to them.
Secondly, we would also like to thank our friends, who helped us a lot in finalizing this project within the limited time frame.
Contents
Abstract
Acknowledgements
1 Overview
  1.1 Introduction
  1.2 Background
    1.2.1 Pari-mutuel betting
    1.2.2 Types of bets
    1.2.3 Methodology
      Finishing time
      Horse to win the race
      Horse ranks
    1.2.4 Objective
2 Data Preparation
  2.1 Data Collection
  2.2 Datasets
    2.2.1 Horse Racing Record
    2.2.2 Weather
  2.3 Data Analysis
    2.3.1 Horse Racing Features
      Class
      Win odds
      Weight
      Weight difference
      Old place
    2.3.2 Weather Features
  2.4 Data Preprocessing
    2.4.1 Real Value Data
    2.4.2 Categorical Data
3 Model Architecture
  3.1 Deep Neural Network Regressor
    3.1.1 Methodology
    3.1.2 Configurations
    3.1.3 Evaluation Standard
4 Experimental Results and Discussion
5 Conclusion
Bibliography
List of Figures
4.26 Net gain of 1 model from undivided data set (1m steps)
4.27 Net gain of combination of 2 models from undivided data set (100k steps)
4.28 Net gain of combination of 2 models from undivided data set (1m steps)
4.29 Net gain of combination of 3 models from undivided data set (100k steps)
4.30 Net gain of combination of 3 models from undivided data set (1m steps)
4.31 Net gain of combination of 4 models from undivided data set (100k steps)
4.32 Net gain of combination of 4 models from undivided data set (1m steps)
4.34 Min-max finishing time distribution of model (1k steps)
4.35 Finishing time difference of first 2 horses of model (1k steps)
Chapter 1
Overview
The topic of this final year project is predicting horse racing results with machine learning; this report demonstrates the work done during the first semester. This chapter offers a brief overview of the project and an introduction to the topic. Moreover, it surveys related work and previous approaches to horse racing prediction. In the end, it introduces the difficulties in predicting horse racing results.
1.1 Introduction
Deep networks with more than 1000 layers have been studied and employed (He et al., 2016). While going deeper in network structure helps, the study of neural networks requires researchers to start from the very beginning with smaller versions. These approaches accord with the nature of neural networks: the enormous number of neurons and parameters, the large cardinality of the hyper-parameter space, the choice of an appropriate score function, and insidious structural issues. Since training a deep neural network can take days or months (Vanhoucke, Senior, and Mao, 2010), it is reasonable to train networks with a simple structure in order to accelerate research progress.
Horse racing events, while commonly considered a special kind of game, share characteristics with stock market prediction, where future performance is related to previous and current performance to some extent. On the other hand, unlike games of perfect information such as Go (https://en.wikipedia.org/wiki/Go_(game)) and Pentago (https://en.wikipedia.org/wiki/Pentago) (Méhat and Cazenave, 2011), the optimal value function, which determines the outcome of a game, is not well-defined (Silver et al., 2016). As horse racing prediction is a mixture of imperfect information and stochastic randomness (Snyder, 1978a), previous naive approaches fail to capture the critical information and produce few promising results. To the best of our knowledge, current horse racing prediction remains limited and the results are unsatisfactory.
In this final year project, we scrutinize features of horse racing events and predict racing results directly through finishing time. The rest of this report is organized as follows: Chapter 2 illustrates how first-hand data is collected and structured, and provides statistical analysis of related features and data standardization. The model design and configurations, along with comparison models, are presented in Chapter 3. In Chapter 4, we review the prediction metrics and present the experimental results. Our understandings and interpretations of the results are discussed in Chapter 5. Finally, we conclude the accomplishments achieved this term and offer possible research directions for next semester.
1.2 Background
Horse racing is a sport in which horses are run at speed. It is not only a professional sport but also a beloved betting entertainment in Hong Kong. Every season, hundreds of races are held at the Shatin and Happy Valley racecourses, on different tracks and over different distances. In each race, 8-14 horses race for the fastest finishing time, and various bet types on the result are offered for entertainment.
Horse racing events are managed by the Hong Kong Jockey Club (HKJC). The HKJC is a non-profit organization that formulates and develops horse racing, sporting and betting entertainment in Hong Kong. Moreover, it is the largest taxpayer and community benefactor in Hong Kong, and holds a government-granted monopoly on pari-mutuel betting on horse racing. Throughout the history of horse racing in Hong Kong, the HKJC has played an essential role in promotion and regulation, and has combined betting entertainment with the sport. "With strict rule enforcement, betting fairness and transparency, the HKJC has taken Hong Kong racing to a world-class standard and also earned itself an enviable global reputation as a leading horse racing organization."
1.2.1 Pari-mutuel betting
Betting is the most fascinating attraction of horse racing, by the nature of the pari-mutuel betting system. Pari-mutuel betting is a system in which the stakes on a particular bet type are placed together in a pool, and the returns are calculated from the pool across all winning bets (Riess, 1991). The dividend is divided by the number of winning combinations in a particular pool: winners share the pool payout in proportion to their betting stakes, after taxes are deducted from the dividend at a fixed ratio.
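As a minimal sketch of this payout arithmetic (in Python; the function name, the flat 17.5% takeout and the figures are illustrative assumptions, not the HKJC's published rules for any specific pool):

```python
def win_dividend_per_dollar(pool_total, winning_stake, takeout=0.175):
    """Payout per $1 staked on the winning horse in a single win pool.

    Illustrative only: we assume a flat takeout deducted before the pool
    is distributed; the HKJC's actual rates and rounding rules vary by
    pool type.
    """
    net_pool = pool_total * (1.0 - takeout)
    return net_pool / winning_stake

# A $1,000,000 pool with $200,000 staked on the winner:
# 1,000,000 * (1 - 0.175) / 200,000 = 4.125 per dollar bet.
print(win_dividend_per_dollar(1_000_000, 200_000))
```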
1.2.2 Types of bets
There are multiple types of bets on a single race, as well as across multiple races. The following tables from the HKJC website provide an explanation of each betting type.
1.2.3 Methodology
Intriguingly, there are many possible ways to interpret horse racing results, and a few have been studied previously. In this research, we take a finishing-time approach to model horse performance: we bet on the best horse, the one with the fastest estimated finishing time.
It is worth mentioning that how to model horse racing results is still an open topic, and no approach avoids all deficiencies in prediction and betting. In this section, we provide pros and cons of the most common approaches in other research.
Finishing time
One way to deal with this problem is to build a regression model. In this project, we train a supervised neural network regressor on the finishing time and then rank the horses by predicted time. The model takes individual records of race information into account and learns the relationship in a general way. However, due to the nature of this approach, the predicted times of horses within a race can be unreasonably distributed: in some cases, the max-min finishing time gap reaches up to 10 seconds.
Horse to win the race
Another way to solve the problem is naturally to predict whether a horse wins or not. However, direct binary classification of win versus lose is unfeasible, because the dataset is severely imbalanced: less than 10% of horses are marked "win" (or equivalently "1"). A trickier approach is to investigate the logistics of the results directly and rank the horses within every race. Equivalently, given the credibility of a model, we can design a betting strategy that ensures a positive expected net gain.
Horse ranks
The third way to predict the horse racing result is to predict horse ranks directly. However, due to the same issue mentioned above, predicted ranks within a race can be duplicated, and hence it is unreasonable to view the problem this way.
1.2.4 Objective
In this project, we restrict our discussion to "win" and "place" bets, which are the most effective way to reflect model efficiency and to explore the relationship between betting gain and prediction accuracy. Our objective is to create a model that predicts horse racing results and beats public intelligence in betting net gain.
The following table revisits the "win" and "place" bets and defines the prediction accuracy of each type (Accuracy_win, Accuracy_place).
To distinguish the prediction accuracy of finishing time from Accuracy_win and Accuracy_place, we first define avg_loss, the average mean square error (MSE) between the predicted finishing time and the ground truth. We then define Accuracy as the reciprocal of avg_loss.
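Written out, for $N$ test records with predicted finishing time $\hat{t}_i$ and ground truth $t_i$:

\[
\text{avg\_loss} = \frac{1}{N}\sum_{i=1}^{N}\bigl(\hat{t}_i - t_i\bigr)^2,
\qquad
\text{Accuracy} = \frac{1}{\text{avg\_loss}}
\]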
In the following chapters, we show that Accuracy is in fact not correlated with either Accuracy_win or Accuracy_place; this is one of the main findings, and one of the main difficulties, of this project.
Chapter 2
Data Preparation
Recent research has shown that neural networks work well across different fields, attributed to their flexibility in learning complex patterns. However, this flexibility may lead to serious over-fitting and performance degradation on small datasets through the memorization effect (Arpit et al., 2017), e.g. when there are more parameters than training examples and a model "memorizes" all of them.
This chapter first illustrates the approach to collecting data and then describes the datasets and corresponding database structures. After that, it provides careful analysis of the significant features used in our model. The last section shows the preprocessing steps applied in this research.
2.1 Data Collection
Historical data are provided online by several commercial companies1 and the HKJC official website. One possible way to construct our database would be to purchase data from such companies. However, given the financial conditions and the uncertain quality of these sources, we collect the data ourselves.
1 hkhorsedb
Based on the characteristics of the two datasets, we design two crawlers accordingly, for the HKJC official website and for Timeanddate. Historical data from 2011 to the present are collected and maintained in a MySQL server. Our crawling system automatically collects the latest racing and weather results, making it possible for our model to predict future races.
2.2 Datasets
2.2.1 Horse Racing Record
The horse racing record dataset contains all racing data since 2011. Each row in the dataset represents a record of a certain horse in a selected race. The dataset contains 63,459 records from 5,029 races that took place in Hong Kong. The following table describes the useful features crawled directly from the HKJC website.
Features generated after a race are discarded, since they cannot be collected before the race. The complete categorical values are listed in the Appendix.
Apart from the features crawled directly from the HKJC website, we extract advantageous features imitating professional tips. Based on the initially measured data, we derive two informative features, old place and weight difference, which capture the trend between two consecutive races of a horse. Moreover, we explicitly create a feature called "dn" to indicate whether a race takes place during daytime or nighttime. The following table provides descriptions of these features.
2.2.2 Weather
We obtained the weather dataset from Timeanddate. It consists of historical data recorded at two observatories located near the Shatin and Happy Valley racecourses, and contains weather information for all 5,029 races in the horse racing record database. Each row of the weather dataset is indexed by race time. The following table illustrates the information contained in the weather dataset.
Weather data are used as model inputs for the first time in this research; statistical analysis of them is conducted in the following section.
2.3 Data Analysis
In this section, we carefully examine the relationship between the features in the horse racing dataset and horse performance.
2.3.1 Horse Racing Features
Class
Class is a structured feature created manually by the HKJC based on horse rating. There are 5 classes, in addition to Group and Griffin races. In handicap races, the standard upper rating limit for each class is as follows:
Class  Upper rating limit
1      120
2      100
3      80
4      60
5      40
The following figure from the HKJC website illustrates the complete class system of HKJC races. It is worth mentioning that our project focuses only on handicapped races in classes 1 to 5.
The following table illustrates the correlations between class and horse performance. When finishing time is normalized by distance, we observe a clear trend: the higher the class of a horse, the faster it finishes.
Win odds
Win odds reflect public intelligence about the expected performance of the horses in a single race. In a pari-mutuel betting system, the larger the win pool on a horse, the lower its win odds. The correlation of win odds with both finishing time and place shows that public intelligence is wise enough to predict horses.
Although following public intelligence improves model predictions, the betting system by design makes betting difficult: betting on the lowest odds yields a negative net gain in statistical expectation. Therefore, enhancing model prediction is both necessary and challenging.
Weight
Most races are handicaps, in which more weight is added to the stronger runners in order to equalize the chances of winning. The allocated weight is determined by the horse's recent performance (proportional to the horse rating assigned by the HKJC). Horse weight is the weight of the horse itself. Actual weight is an extracted feature, the sum of the carried weight and the horse weight; it represents the real weight of a horse.
The following correlation matrix portrays the linearity between all weight features. The table illustrates that none of the weight features is closely related to finishing time or place. The statistics are quite convincing, since the handicap rules are designed to adjust horse power. Yet we take a deeper look into these features, trying to understand whether they help in prediction. We randomly select two horses and analyze the correlation between the weight features and their performances.
TABLE 2.8: Weight Correlation Matrix of "A003"
We can conclude from the above table that weight features interact differently with the performance of individual horses. The horse "A003" may suffer from obesity, in which case losing weight helps its performance considerably. On the contrary, the horse "L169" may be in good health, so the weight features have only a minor influence on its performance. Another possible explanation for the above matrices is that "L169" is in a less competitive class, while "A003" competes with strong opponents and changes in weight greatly influence its performance.
Weight difference
Weight difference is the change in horse weight between consecutive races. The feature to some extent reflects the health condition of a horse, and in this part we try to identify the relationship. Combined with the analysis in the preceding sections, we believe that individual weight difference influences performance in different ways, even though it shows no relation to overall performance.
Old place
Old place is strictly defined as the place of a horse in its previous race. Consistent performance relies heavily on the horse's natural strength. Even though a race is affected by a number of factors (for example, running against better opponents), a horse with an excellent place in the previous race tends to run consistently, as shown in the following correlation matrix. We claim that the nature of a horse is invariant to minor environmental changes in most cases.
2.3.2 Weather Features
Weather conditions contribute to horse racing finishing times to some extent. In our research, taking weather into account is shown to enhance prediction performance.
Regarding the overall performance of the horses, average finishing time varies under different weather conditions. Common sense says that horses tend to take longer on rainy days than on sunny days, and are prone to run faster in warmer temperatures. Empirical results on average finishing time under different weather conditions show that this common sense has high credibility.
One possible explanation for the fluctuation in finishing time is that a change in weather, regardless of its form, may have a subtle influence on a horse's nature.
On the one hand, weather can indirectly determine horse performance by slightly changing the course environment. For example, the condition of the race track plays an important role in the performance of horses in a race: a slight fluctuation in humidity and rain can make a great impact on track surface density, porosity, compaction and moisture. As a result, horses tend to run at different speeds.
On the other hand, rises in humidity and temperature can affect the health condition, and further the emotions, of horses. Horses are astute about their surroundings: when the environment changes quickly, they can easily become agitated and try to flee. In general, horses themselves respond to minor changes in the environment, and results naturally differ under different weather.
The following figure shows the correlations between horse finishing times and the weather conditions collected in our research. Finishing time is shown to have strong correlations with some weather features, such as temperature and humidity. However, the relationships with the other features remain to be discovered. Moreover, individual performance under different weather is hard to show due to the large quantity of data, and it remains for our model to discover.
2.4 Data Preprocessing
In accordance with LYU1603, we split the dataset into a training set and a test set. The training set contains race records from 2011 to the end of 2014, and the test set contains records from 2015 and 2016.
2.4.1 Real Value Data
\[
X_{\text{normalized},j} = \frac{X_j - \operatorname{mean}(X_j)}{\operatorname{std}(X_j)}
\]
In this research, we perform z-score normalization on the real-valued data columns in our datasets. The mean and standard deviation are calculated on the training set and then applied to the test set to avoid information leak.
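A minimal sketch of this step (the column names are hypothetical; the actual schema follows the tables in Section 2.2):

```python
import pandas as pd

def zscore_normalize(train: pd.DataFrame, test: pd.DataFrame, cols):
    """Fit z-score statistics on the training set only, then apply them
    unchanged to the test set to avoid information leak."""
    mean = train[cols].mean()
    std = train[cols].std()
    train, test = train.copy(), test.copy()
    train[cols] = (train[cols] - mean) / std
    test[cols] = (test[cols] - mean) / std
    return train, test

# cols = ["horse_weight", "actual_weight", "temperature"]  # hypothetical names
```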
2.4.2 Categorical Data
Categorical data are challenging to train on and often mask valuable information in a dataset. It is crucial to represent such data correctly in order to surface the most useful features without degrading model performance. Multiple encoding schemes are available. One of the simplest is one-hot encoding, in which data is encoded into a group of bits among which the only legal combinations of values are those with a single high (1) bit and all the others low (0) (Harris and Harris, 2010). However, one-hot encoding suffers from high cardinality: the feature space can grow exponentially, making it unfeasible to train.
Two useful TensorFlow APIs2,3 are provided to encode these data into sparse matrices in reasonable space: categorical_column_with_hash_bucket distributes sparse features in string or integer format into a finite number of buckets by hashing, while categorical_column_with_vocabulary_list maps inputs to an in-memory vocabulary list from each value to an integer ID. Subsequently, a multi-hot representation of the categorical data is created by indicator_column, or an embedding is built with assigned dimensions by embedding_column.
2 https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_vocabulary_list
3 https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_hash_bucket
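For illustration, a minimal sketch of these feature columns (the feature names, vocabularies and bucket/embedding sizes are assumptions, not our final configuration):

```python
import tensorflow as tf

# Small, known vocabulary: enumerate it explicitly.
going = tf.feature_column.categorical_column_with_vocabulary_list(
    "going", vocabulary_list=["GOOD", "GOOD TO FIRM", "YIELDING", "WET SLOW"])
# High-cardinality string feature: hash it into a fixed number of buckets.
horse = tf.feature_column.categorical_column_with_hash_bucket(
    "horse_id", hash_bucket_size=2000)

feature_columns = [
    tf.feature_column.indicator_column(going),               # multi-hot
    tf.feature_column.embedding_column(horse, dimension=8),  # dense embedding
]
```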
Chapter 3
Model Architecture
In this chapter, we introduce the models used in this final year project.
3.1 Deep Neural Network Regressor
The model is designed to solve the regression problem of predicting the finishing time of a horse, using a neural network with 2 non-linear hidden layers. Optimization is performed by back-propagation along gradients to minimize the mean square error (MSE) of the predictions. (REFERENCE FROM ZHANG)
The key design of our model is an adaptation of a traditional network classification model. Instead of using an activation layer (typically a logistic or softmax function) as in classification problems, our model treats the hidden layers' output as the final output, using the identity function as the output activation. It therefore uses mean square error as the loss function, and the output is a set of continuous values.
3.1.1 Methodology
"Any class of statistical models can be termed a neural network if they use adap-
tive weights and can approximate non-linear functions of their inputs. Thus neural
network regression is suited to problems where a more traditional regression model
cannot fit a solution."
3.1.2 Configurations
For a DNN model, the first thing to decide is the structure, including the number of hidden layers and their widths. In terms of the number of layers, we use 2 hidden layers, as commonly employed in DNNs. The width of a layer is the number of units it contains; we choose the popular 128×128 setting (two hidden layers of 128 units each), which theoretically achieves a balance between performance and computational efficiency.
Next, we decide the data frame for training and testing our models. In order to be consistent and comparable with the previous teams, LYU1603 and LYU1604, we use data from 2011 to 2014 to train and data from 2015 to 2016 to test the models.
The number of training steps also needs to be decided. A few experiments were conducted with 10k, 100k and 1m training steps to find the best setting, which turned out to be 10k. This suggests that the model is prone to overfitting, so more steps (>10k) lead to worse results. As Table 3.1 shows, the models trained for 10k steps have an advantage over the 100k and 1m ones, so 10k is adopted as part of the standard configuration for further experiments.
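A minimal sketch of this configuration (assuming `feature_columns` is built as in Chapter 2 and the input functions feed the 2011-2014 and 2015-2016 splits respectively):

```python
import tensorflow as tf

# Two 128-unit hidden layers; DNNRegressor minimizes mean squared error,
# matching the regression design described in Section 3.1.
model = tf.estimator.DNNRegressor(
    hidden_units=[128, 128],
    feature_columns=feature_columns,
    model_dir="models/finishing_time")

model.train(input_fn=train_input_fn, steps=10000)   # 10k-step configuration
predictions = model.predict(input_fn=test_input_fn)
```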
3.1.3 Evaluation Standard
The first criterion is the loss, i.e. the MSE between predicted and actual finishing times. The second criterion is the accuracy of the predictions. Since the models themselves only predict the finishing time of each horse, we group the predictions by race and obtain the predicted winning horse of each race; the actual accuracy of the predictions can then be drawn (a sketch of this grouping follows the third criterion below).
The third criterion is the overall net gain from simulating real bets over 2015-16. Since this is a real-world question, there is no better way to evaluate a model than to put all its predictions into the real scene.
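As a sketch of the second criterion's grouping step (in pandas; the column names are hypothetical, and "place" is treated as a top-three finish):

```python
import pandas as pd

# preds: one row per horse, with columns race_id, predicted_time, actual_place.
def accuracy_win(preds: pd.DataFrame) -> float:
    """Fraction of races whose fastest-predicted horse actually won."""
    picks = preds.loc[preds.groupby("race_id")["predicted_time"].idxmin()]
    return float((picks["actual_place"] == 1).mean())

def accuracy_place(preds: pd.DataFrame) -> float:
    """Fraction of races whose fastest-predicted horse finished top three."""
    picks = preds.loc[preds.groupby("race_id")["predicted_time"].idxmin()]
    return float((picks["actual_place"] <= 3).mean())
```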
Chapter 4
Experimental Results and Discussion
In this chapter, the results of all the experiments are shown and interpreted from more than one dimension. Also, by combining the models with each other, we search for the betting strategy that claims the most net gain. A conclusion is drawn on the basis of all the data.
The purpose of the experiments is to figure out which factors really improve the prediction. Since DNN models are black boxes that cannot be inspected from outside, the experiments are essential to understanding the question.
The first factor in the experiments is the division of the data sets. There are 2 racecourses in Hong Kong, one in Sha Tin and one in Happy Valley, and races taking place in the two locations are believed to differ. We wonder whether a "divide-and-conquer" strategy can help here, so we both train a model on the whole data set and train separate models on the subsets grouped by location.
The second factor is the odds. The win odds of each race can be retrieved and fed to the models and, by intuition, are closely related to the winning horse. However, team LYU1603 finally decided not to use this feature. To make this clear, models both with and without the "winodds" feature are trained and compared in the experiments.
The third factor is the weather information. Historical weather information for the racecourses was grabbed from the web in advance, including features such as temperature, wind speed and humidity. It is not clear to what extent these data help improve the prediction, so models are trained with and without them separately.
To sum up, 3 binary factors are presented, so 8 models are trained correspondingly in the experiments.
Notation
In order to keep the tables neat and readable, a special notation of three binary digits is used to represent a model. For example, "Model 000" means the model is NOT divided by location and includes NEITHER "winodds" NOR "weather" features, while "Model 110" means the model is divided by location and includes "winodds" but excludes "weather". For models starting with "1", each of which involves 2 sub-models, the first value refers to the Sha Tin data and the second to Happy Valley.
Model  Loss     Accuracy_win  Accuracy_place  Net gain
000    515.2    0.08367       0.08753         -1087
001    461.2    0.07029       0.10031         -991
010    556.4    0.08090       0.09063         -1378
011    417.7    0.10742       0.09461         -568
100    583/575  0.08355       0.09284         37/-1005
101    527/536  0.07560       0.09461         -1088/-1579
110    629/577  0.08488       0.09372         655/-917
111    652/589  0.07560       0.09991         339/-1724
Model 011 has the best loss when predicting the finishing time of each horse. For most of the models, including weather as a feature leads to a significant decrease in loss. However, including win odds does not improve the result; on the contrary, 3 of the 4 comparisons show an increase in loss. In terms of data set division, the divided models generally perform worse than the corresponding undivided models.
Concerning Accuracy_win, Model 011 wins again. However, comparing model by model, most show the pattern that weather does not improve this accuracy. Similarly, neither win odds nor the division of the data sets makes an obvious difference, since most of the models remain essentially unchanged.
If we bet completely according to the predictions' suggestions, we get the net gain shown above. Unfortunately, all of the models lose money over the whole year, though Model 110 loses the least among the eight. Meanwhile, models from different groups show different patterns. For the models using the undivided data set, the weather data has a positive impact on the prediction, while for the divided ones it is mostly negative. Also, if the weather data is excluded, dividing the data sets is significantly better than not dividing. Moreover, bets on the races in Sha Tin gain much more money than those on the races in Happy Valley.
There are a few possible reasons for the counter-intuitive parts of the above phenomena. First, after dividing the data set into 2 subsets, the data in each may not be abundant enough to train a model with lower loss. Second, although adding features should theoretically improve prediction accuracy, it does not work out as expected, possibly because the problem involves a lot of randomness. Last, for reasons still unknown, betting on the races in Sha Tin works better than on those in Happy Valley, which can be a useful hint for further studies.
In this part we show the detailed betting logs of each model through the whole year.
This figure shows the net gain changes of different models of the races in HV.
This figure shows the net gain changes of different models of the races in ST.
This figure shows the net gain changes of different models of all the races.
Summary
The results of each model have been shown, and some primitive conclusions can be drawn. First, the connections among loss, accuracy and net gain are weak, which means pursuing a lower loss or a higher accuracy, as one usually does in machine learning and data mining, does not really work on this topic; instead, net gain must be calculated to judge whether a model is good or not. Also, predicting horse racing results is a real-life question that involves much randomness, so some of the approaches that should theoretically improve the models do not work as well as expected. Moreover, betting without any consideration or filtering leads to an unsatisfactory outcome, so we go on to try other ways to improve it.
To make full use of the models and to generate more reliable predictions, a set of combinations of the models is tested. The basic assumption is that if more than one model conforms to the others, the prediction is more reliable and worth betting on. The following shows the results of the experiments.
As we can observe from the above graphs, the combined models show good potential for improving the prediction. Overall, the net gain of each set of models rises significantly, and this effect works better in Sha Tin. The best combination so far, combining odds-weather and odds-noweather in Sha Tin, at one point reached a peak of around 1500 HKD net gain. However, the cost is that betting frequency drops significantly: for the best combination in Figure 4.5 (odds-weather, odds-noweather and noodds-weather), the combined model only bet 415 times, covering 55% of all races.
By examining the predictions provided by the models, we found that the finishing times they predict sometimes differ hugely. We assume these cases are abnormal, and if the difference is too large, no bets are placed. The following graphs show the net gain of models trained by the same scheme as above, but applying the strategy that if the difference between the predictions made by different models exceeds 5 seconds, the race is dropped and no bets are made.
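A minimal sketch of this combined filter (the data shapes and helper name are our own illustration of the rule above):

```python
def consensus_pick(race_preds, max_gap=5.0):
    """Return the horse to bet on for one race, or None to skip it.

    race_preds maps model name -> {horse_id: predicted_time}. We bet only
    when every model picks the same winner and their predicted times for
    that horse stay within max_gap seconds (the 5-second rule above).
    """
    picks = [min(times, key=times.get) for times in race_preds.values()]
    if len(set(picks)) != 1:
        return None  # models disagree on the winner
    winner = picks[0]
    spread = [times[winner] for times in race_preds.values()]
    if max(spread) - min(spread) > max_gap:
        return None  # abnormal prediction gap: drop the race
    return winner
```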
FIGURE 4.13: Net gain of 1 model from divided data set (HV)
FIGURE 4.14: Net gain of 1 model from divided data set (ST)
The graphs above show that after applying this strategy, the net gain is much more stable, and some of the models earn money rather steadily. Meanwhile, the drawback of this method is also clear: too few bets are placed. For the combination of 4 models, no bets are placed for the whole year.
Just as we found previously, the connections between loss, accuracy and net gain are weaker than expected. However, the best number of training steps was decided purely by model loss, which is no longer a solid criterion. We therefore re-conducted some of the experiments of Chapter 3, to check whether the configuration could be changed for the better. These experiments are basically the same as the ones above, except that the number of training steps is changed to 100k and 1 million.
FIGURE 4.25: Net gain of 1 model from undivided data set (100k steps)
FIGURE 4.26: Net gain of 1 model from undivided data set (1m steps)
By comparing models sharing the same feature configuration (including those from the previous section), it can be concluded that, although models trained for 100k and 1m steps overfit in terms of loss, they perform slightly better than the 10k-step models in terms of net gain.
FIGURE 4.33
Comparing with LYU1603, we can tell from the above figure that both Accuracy_win and Accuracy are not high enough to obtain a positive net gain.
4.3 Discussion
To interpret the data and figures of the experimental results, we offer some analysis and discussion in this section. The discussion is based on our understanding of horse racing prediction developed over this semester and will determine our future research directions. To facilitate the analysis, we may reuse some of the figures above.
Model accuracy on finishing time prediction does not guarantee Accuracy_win and Accuracy_place. While the DNN models are trained to obtain the lowest loss in terms of the finishing time of the horses, this does not directly translate into Accuracy_win or Accuracy_place, leading to a dilemma in which models with the best loss may still fail in real life.
Training with more features helps decrease the test loss, but Accuracy_win and Accuracy_place may nevertheless drop. For similar reasons, there is a gap between the loss and the actual bet accuracy. This implies that the models cannot be judged by the loss reported by Tensorflow alone; they require further evaluation, such as accuracy and net gain.
Training for more than 10k steps overfits the data sets, but Accuracy_win and Accuracy_place tend to be higher. We found with surprise that, although the models trained with more steps seem to be overfitting, they generally perform better in Accuracy_win and Accuracy_place.
General trends in finishing time matter in horse racing result prediction. Since our model predicts each horse's finishing time individually, the spread of predicted finishing times within a race ranges from 1 to 10 seconds.
To better illustrate the issue, the following figure shows the range of horse finishing times within a race. We call races with a large min-max finishing time badly-distributed races, and the others normal races.
Intuitively, Accuracy_win and Accuracy_place of normal races should exceed those of badly-distributed races. However, our research shows that the two kinds of races give accuracies of a similar scale.
Combinations of models and strategy conditions help in horse racing result prediction. We apply two approaches to address the issue and find that both are useful in improving Accuracy_win and Accuracy_place.
One approach is to combine models and bet only with high confidence; this allows the models to "communicate" their finishing-time trends and to bet only on races where they agree.
The other approach is to bet with a strategy. Although filtering on the min-max finishing time within a race brings little improvement in Accuracy_win and Accuracy_place, combining it with a strategy focused on the time difference between the first two horses results in a surge in both accuracies. By considering the difference between the first two horses, we strictly define which races are regarded as "normal", and the results meet our understanding, as sketched below.
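A minimal sketch of this "normal race" test (the threshold value is illustrative, not the one tuned in our experiments):

```python
def is_normal_race(predicted_times, min_gap=0.2):
    """Bet on a race only when the predicted gap between the two fastest
    horses is at least min_gap seconds; otherwise treat it as unreliable.

    predicted_times maps horse_id -> predicted finishing time for one race.
    """
    fastest, runner_up = sorted(predicted_times.values())[:2]
    return runner_up - fastest >= min_gap
```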
Chapter 5
5.1 Conclusion
This report focuses mostly on discovering the inner relations among the features, the number of training steps and the predictions. Through a set of experiments, it is clear that predicting horse racing with machine learning differs from most other ML questions in terms of model evaluation: beyond the normal "loss" in Tensorflow, the derived prediction accuracy and the net gain are also important for evaluating a model. We examined three factors, data set division, win odds and weather, and it turned out that races in Sha Tin are significantly more predictable than those in Happy Valley, while win odds and weather show no clear correlation with the final result. We also combined multiple models together with other strategies and obtained relatively better results, which suggests that this is a correct way to approach the question.
In the next semester, we will take a deeper look into representing the trends of finishing time. One possible way is to group the records of a race and predict all finishing times at once. Another is to train on the average finishing time of a race instead and design a new loss function to regularize the predictions.
Another interesting direction is to train models on each horse and try to approximate horse characteristics individually. Both directions discussed are based on our observation of the trends in finishing time. We believe that the more accurately we can train finishing times within a race, the higher the chance that the trends can be approximated.
Bibliography
Arpit, Devansh et al. (2017). "A closer look at memorization in deep networks". In: arXiv preprint arXiv:1706.05394.
Graves, Alex, Abdel-rahman Mohamed, and Geoffrey Hinton (2013). "Speech recognition with deep recurrent neural networks". In: Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on. IEEE, pp. 6645–6649.
Harris, David and Sarah Harris (2010). Digital Design and Computer Architecture. Morgan Kaufmann.
He, Kaiming et al. (2015). "Deep Residual Learning for Image Recognition". In: pp. 1–12. URL: https://arxiv.org/pdf/1512.03385.pdf.
– (2016). "Identity mappings in deep residual networks". In: European Conference on Computer Vision. Springer, pp. 630–645.
Huang, Gao et al. (2016). "Densely Connected Convolutional Networks". In: URL: https://arxiv.org/pdf/1608.06993.pdf.
Kim, Yoon (2014). "Convolutional Neural Networks for Sentence Classification". In: URL: http://www.aclweb.org/anthology/D14-1181.
Lange, Sascha and Martin Riedmiller (2010). "Deep Auto-Encoder Neural Networks in Reinforcement Learning". In: URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=5596468.
LeCun, Yann et al. (1998). "Gradient-Based Learning Applied to Document Recognition". In: Proc. of the IEEE, pp. 1–46. URL: http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf.
Méhat, Jean and Tristan Cazenave (2011). "A parallel general game player". In: KI-Künstliche Intelligenz 25.1, pp. 43–47.
Ouyang, Xi et al. (2015). "Sentiment Analysis Using Convolutional Neural Network". In: URL: http://ieeexplore.ieee.org/document/7363395/.
Ren, Shaoqing et al. (2015). "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks". In: URL: https://papers.nips.cc/paper/5638-faster-r-cnn-towards-real-time-object-detection-with-region-proposal-networks.pdf.
Riess, Steven A (1991). City Games: The Evolution of American Urban Society and the Rise of Sports. Vol. 114. University of Illinois Press.
Santos, Cicero Nogueira dos and Maira Gatti (2014). "Deep Convolutional Neural Networks for Sentiment Analysis of Short Texts". In: URL: http://www.aclweb.org/anthology/C14-1008.
Silver, David et al. (2016). "Mastering the game of Go with deep neural networks and tree search". In: Nature 529.7587, pp. 484–489.
Snyder, Wayne (1978a). "Decision-making with risk and uncertainty: The case of horse racing". In: The American Journal of Psychology, pp. 201–209.
Snyder, Wayne W (1978b). "Horse racing: Testing the efficient markets model". In: The Journal of Finance 33.4, pp. 1109–1118.
Sola, J and J Sevilla (1997). "Importance of input data normalization for the application of neural networks to complex industrial problems". In: IEEE Transactions on Nuclear Science 44.3, pp. 1464–1468.
Srivastava, Nitish et al. (2014). "Dropout: A Simple Way to Prevent Neural Networks from Overfitting". In: Journal of Machine Learning Research 15.15, pp. 1929–1958. URL: https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf.
Srivastava, Rupesh Kumar, Klaus Greff, and Jürgen Schmidhuber (2015). "Highway Networks". In: pp. 1–6. URL: https://arxiv.org/abs/1505.00387.
Szegedy, Christian, Alexander Toshev, and Dumitru Erhan (2013). "Deep Neural Networks for Object Detection". In: URL: https://papers.nips.cc/paper/5207-deep-neural-networks-for-object-detection.pdf.
Tung, Cheng Tsz and Lau Ming Hei (2016). "Predicting Horse Racing Result Using TensorFlow". In: URL: http://www.cse.cuhk.edu.hk/lyu/_media/students/lyu1603_term_1_report.pdf?id=students%3Afyp&cache=cache.
Vanhoucke, Vincent, Andrew Senior, and Mark Z. Mao (2010). "Improving the speed of neural networks on CPUs". In: URL: https://static.googleusercontent.com/media/research.google.com/zh-CN//pubs/archive/37631.pdf.
Widrow, Bernard and Marcian Hoff (1959). "Adaptive Switching Circuits". In: URL: http://www-isl.stanford.edu/~widrow/papers/c1960adaptiveswitching.pdf.
Williams, Janett and Yan Li (2008). "A case study using neural networks algorithms: horse racing predictions in Jamaica". In: Proceedings of the International Conference on Artificial Intelligence (ICAI 2008). CSREA Press, pp. 16–22.
Zhang, Xiang, Junbo Zhao, and Yann LeCun (2015). "Character-level Convolutional Networks for Text Classification". In: URL: https://arxiv.org/pdf/1509.01626.pdf.