
The Alpha Engine: Designing an Automated Trading Algorithm
Anton Golub (1), James B. Glattfelder (2), and Richard B. Olsen (1)

(1) Lykke Corp, Baarerstrasse 2, 6300 Zug, Switzerland
(2) Department of Banking and Finance, University of Zurich, Switzerland

April 5, 2017

Abstract
We introduce a new approach to algorithmic investment management that
yields profitable automated trading strategies. This trading model design is the
result of a path of investigation that was chosen nearly three decades ago. Back
then, a paradigm change was proposed for the way time is defined in financial
markets, based on intrinsic events. This definition led to the uncovering of a
large set of scaling laws. An additional guiding principle was found by embedding
the trading model construction in an agent-based framework, inspired by
the study of complex systems. This new approach to designing automated trad-
ing algorithms is a parsimonious method for building a new type of investment
strategy that not only generates profits, but also provides liquidity to financial
markets and does not have a priori restrictions on the amount of assets that are
managed.

1 Introduction
The asset management industry is one of the largest industries in modern so-
ciety. Its relevance is documented by the astonishing amount of assets that
are managed. It is estimated that globally there are 64 trillion USD under
management [6]. This is nearly as big as the world product of 77 trillion USD
[39].

1.1 Asset Management


Asset managers use a mix of analytic methods to manage their funds. They
combine different approaches from fundamental to technical analysis. The time
frames range from intraday, to days and weeks, and even months. Technical
analysis, a phenomenological approach, is utilized widely as a toolkit to build
trading strategies.

A drawback of all such methodologies is, however, the absence of a consistent
and overarching framework. What appears as a systematic approach to asset
management often boils down to gut feeling, as the manager chooses from a
broad blend of theories with different interpretations. For instance, the choice
and configuration of indicators is subject to the specific preference of the analyst
or trader. In effect, practitioners mostly apply ad hoc rules which are not
embedded in a broader context. Complex phenomena such as changing liquidity
levels as a function of time go unattended.
This lack of consensus, or intellectual coherence, in such a dominant and rel-
evant industry underpinning our whole society is striking. Especially in a day
and age where computational power and digital storage capacities are grow-
ing exponentially, at shrinking costs, and where there exists an abundance of
machine learning algorithms and big data techniques. To illustrate, consider
the recent unexpected success of Google's AlphaGo algorithm beating the best
human players [11]. This is a remarkable feat for a computer, as the game of Go
is notoriously complex and players often report that they select moves based
solely on intuition.
There is, however, one exception in the asset management and trading in-
dustry that relies fully on algorithmic trade generation and automated execu-
tion. Referred to under the umbrella term of high-frequency trading, this
approach has witnessed substantial growth. These strategies take advantage of
short-term arbitrage opportunities and typically analyse the limit order books
to jump the queue whenever there are large orders pending [10]. While high-frequency
trading results in high trade volumes, the assets managed with these
types of strategies are around 140 billion USD [34]. This is microscopic compared to
the size of the global assets under management.

1.2 The Foreign Exchange Market


For the development of our trading model algorithm, and the evaluation of the
statistical price properties, we focus on the foreign exchange market. This mar-
ket can be characterized as a complex network consisting of interacting agents:
corporations, institutional and retail traders, and brokers trading through mar-
ket makers, who themselves form an intricate web of interdependence. With
an average daily turnover of approximately five trillion USD [8] and with price
changes nearly every second, the foreign exchange market offers a unique op-
portunity to analyze the functioning of a highly liquid, over-the-counter market
that is not constrained by specific exchange-based rules. These markets are an
order of magnitude bigger than futures or equity markets [24].
In contrast to other financial markets, where asset prices are quoted in refer-
ence to specific currencies, exchange rates are symmetric: quotes are currencies
in reference to other currencies. The symmetry of one currency against another
neutralizes effects of trend, which are significant drivers in other markets,
such as stock markets. This property of symmetry makes currency markets
notoriously hard to trade profitably.
We focus on the foreign exchange market for the development of our trading
model algorithm. Its high liquidity and long/short symmetry make it an ideal

environment for the research and development of fully automated and algorith-
mic trading strategies. Indeed, any profitable trading algorithm for this market
should, in theory, also be applicable to other markets.

1.3 The Rewards and Challenges of Automated Trading


During the crisis of 2007 and 2008, the world witnessed how the financial sys-
tem destabilized the real economy and destroyed vast amounts of wealth. At
other times, when there are favourable economic conditions, financial markets
contribute to wealth accumulation. The financial system is an integral part of
the real economy with a strong cross dependency. Markets are not a closed
system, where the sum of all profits and losses net out. If investment strategies
contribute to market liquidity, they can help stabilize prices and reduce the
uncertainty in financial markets and the economy at large. For such strategies
the investment returns can be viewed as a payoff for the value-added provided
to the economy.
Liquid financial markets offer a large profit potential. The length of a foreign
exchange price curve, as measured by the sum of up and down price movements
of increments of 0.05%, during the course of a year, is, on average, approx-
imately 1600%, after deducting transaction costs [16]. An investor can, in
theory, earn 1600% unleveraged per year, assuming perfect foresight
in exploiting this coastline length. With leverage, the profit potential is even
greater. Obviously, as no investor has perfect foresight, capturing 1600% is not
feasible.
However, why do most investment managers have such difficulty in earning
even small returns on a systematic basis, if the profit potential is so big? Espe-
cially as traders can manage their risk with sophisticated money management
rules which have the potential to turn losses into profits. Again, the question
arises as to why even hedge funds, who can hire the best talent in the world,
find it so hard to earn consistent annual returns. For instance, the Barclay
Hedge Fund Index (www.barclayhedge.com/research/indices/ghs/Hedge_Fund_Index.html),
measuring the average returns of all hedge funds (except funds of funds) in their
database, reports an average yearly return of 5.035% (± 4.752%) for the past
four years. How can we develop liquidity-providing
investment algorithms that consistently generate positive and sizable returns?
What is missing in the current paradigm?
Another key criterion of the quality of an investment strategy is the size of
assets that can be deployed without a deterioration of performance. Closely
related to this issue is the requirement that the strategy does not distort the
market dynamics. This is for example the case with the trend following strate-
gies that are often deployed in automated trading. Such strategies have the
disadvantage that the investor does not know for sure how his action of follow-
ing the trend amplifies the trend. In effect, the trend follower can get locked
into a position that he cannot close out without triggering a price dislocation.
Finally, any flavour of automated trading is constrained by the current com-
putational capacities available to researchers. Although this constraint is loos-
ening day by day, due to the prowess of high performance computing in finance,
some approaches rely more on number crunching than others. Ideally, any trad-
ing model algorithm should be implementable with reasonable resources to make
it useful and applicable in the real world.

1.4 The Hallmarks of Profitable Trading


Investment strategies need to be fully automated. For one, the number of
traded instruments should not be constrained by human limitations. Then, the
trading horizons should also include intraday activity, as a condition sine qua
non. Complete automation has its own challenges, because computer code can
go awry and cause huge damage, as witnessed by Knight Capital, which lost
500 million USD in a matter of 30 seconds due to an operational error
(see www.sec.gov/news/press-release/2013-222).
Many modelling attempts fail, because developers succumb to curve fitting.
They start with a specific data sample and tweak their model until it makes
money in simulation runs. Such trading models can disappoint from the start
when going live or boast good performance for some period of time until a
regime shift occurs and the profitable conditions the model was optimized for
disappear.
Trading models need to be parsimonious and have a limited set of variables.
If the models have too many variables, the parameter space becomes vast and
hard to navigate. Parsimonious models are powerful because they are easier
to calibrate and assess, and it is easier to understand why they perform. Moreover, investment
models need to be robust to market changes. For instance, the models can
be adaptive and have their behavior depend on the current market regime.
Therefore, algorithmic investment strategies have to be developed on the basis
of robust and consistent approaches and methods that provide a solid framework
of analysis.
Financial markets are comprised of a large number of traders that take
positions on different time horizons. Agent-based models can mimic the ac-
tual traders and are therefore well suited to research market behavior [14]. If
agent-based models are fractal, i.e., behave in a self-similar manner across time
horizons and only differ with respect to the scaling of their parameters, the
short-term models are a filter for the validity of the long-term models. In prac-
tice, this allows for the short-term agent-based models to be tested and validated
over a huge data sample with a multitude of events. As a result, the scarcity
of data available for the long-term model is not a hindrance to acceptance if it
is self-similar with respect to the short-term models. In effect, the validation
of the model structure for short-term models implies also a validation for the
long-term models, by virtue of the scaling effects. In contrast, most standard
modelling approaches are typically devised for one time horizon only and hence
there are no self-similar models that complement each other.
Moreover, the modeling approach should be modular and enable developers
to combine smaller blocks to build bigger components. In other words, models
are built in a bottom up spirit, where simple building blocks are assembled into
more complex units. This also implies an information flow between building
blocks.
To summarize, our aim is to develop trading models based on parsimo-
nious, self-similar, modular, and agent-based behavior, designed for multiple
time horizons and not purely driven by trend following action. The intellectual
framework unifying these angles of attack is outlined in Section 3. The result
of this endeavor are interacting systems that are highly dynamic, robust, and
adaptive. In other words, a type of trading model that mirrors the dynamic
and complex nature of financial markets. The performance of this automated
trading algorithm is outlined in the next section.
In closing, it should be mentioned that transaction costs can represent real-
world stumbling blocks for trading models. Investment strategies that take
advantage of short-term price movements in order to achieve good performance
have higher transaction volumes than longer-term strategies. This obviously
increases the impact of transaction costs on the profitability. As far as possible,
it is advisable to use limit orders to initiate trades. They have the advantage
that the trader does not have to cross the spread to get his order executed, thus
reducing or eliminating transaction costs. The disadvantage of limit orders is,
however, that execution is uncertain and depends on buy and sell interest.

2 In a Nutshell: Trading Model Anatomy and Performance
In this section we provide an overview of the trading model algorithm and its
performance. For all the details on the model, see Section 4 and the code that
can be downloaded from GitHub [35].
The Alpha Engine is a counter-trending trading model algorithm that pro-
vides liquidity by opening a position when markets overshoot, and manages
positions by cascading and de-cascading during the evolution of the long coast-
line of prices, until it closes in a profit. The building blocks of the trading model
are:
- an endogenous time scale called intrinsic time that dissects the price curve into directional changes and overshoots;
- patterns, called scaling laws, that hold over several orders of magnitude, providing an analytical relationship between price overshoots and directional change reversals;
- coastline trading agents operating at intrinsic events, defined by the event-based language;
- a probability indicator that determines the sizing of positions, by identifying periods of market activity that deviate from normal behavior;
- skewing of cascading and de-cascading designed to mitigate the accumulation of large inventory sizes during trending markets;
- the splitting of directional change and, consequently, overshoot thresholds into upward and downward components, i.e., the introduction of asymmetric thresholds.

[Figure 1: top panel, total P&L (0% to 20%); bottom panel, P&L per currency pair (0% to 3%); both from 2006 to 2014.]

Figure 1: Daily Profit & Loss of the Alpha Engine, across 23 currency pairs, for
eight years. See details in the main text of this section and Section 4.

The trading model is back-tested on historical data comprised of 23 exchange rates:
AUD/JPY, AUD/NZD, AUD/USD, CAD/JPY, CHF/JPY, EUR/AUD,
EUR/CAD, EUR/CHF, EUR/GBP, EUR/JPY, EUR/NZD, EUR/USD,
GBP/AUD, GBP/CAD, GBP/CHF, GBP/JPY, GBP/USD, NZD/CAD,
NZD/JPY, NZD/USD, USD/CAD, USD/CHF, USD/JPY.
The chosen time period is from the beginning of 2006 until the beginning of 2014,
i.e., eight years. The trading model yields an un-levered return of 21.3401%,
with an annual Sharpe ratio of 3.06, and a maximum drawdown (computed
on a daily basis) of 0.7079%. This event occurs at the beginning of 2013 and
lasts approximately 4 months, as the JPY weakens significantly following the
Quantitative Easing programme (three arrows of fiscal stimulus) launched by
the Bank of Japan.
Figure 1 shows the performance of the trading model across all exchange
rates. Table B, in Appendix B, reports the monthly and yearly returns. The
difference in returns among the various exchange rates is explained by volatility:
the trading model reacts only to occurrences of intrinsic time events, which are
functionally dependent on volatility. Exchange rates with higher volatility will

have a greater number of intrinsic events and hence more opportunities for the
model to extract profits from the market. This behavior can be witnessed during
the financial crisis, where its deleterious effects are somewhat counterbalanced
by an overall increase in profitable trading behavior of the model, fueled by the
increase in volatility.
The variability in performance of the individual currency pairs can be ad-
dressed by calibrating the aggressiveness of the model with respect to the
volatility of the exchange rate. In other words, the model trades more fre-
quently when the volatility is low, and vice versa. For the sake of simplicity,
and to avoid potential over-fitting, we have excluded these adjustments to the
model. In addition, we also refrained from implementing cross-correlation mea-
sures. By assessing the behavior of the model for one currency pair, information
can be gained that could be utilized as an indicator which affects the model's
behaviour for other exchange rates. Finally, we have also not implemented any
risk management tools.
In essence, what we present here is a proof of concept. We refrained from
tweaking the model to yield better performance, in order to clearly establish
and outline the model's building blocks and fundamental behavior. We strongly
believe there is great potential for obvious and straightforward improvements,
which would give rise to far better models. Nevertheless, the bare-bones model
we present here already has the capability of being implemented as a robust and
profitable trading model that can be run in real-time. With a leverage factor
of 10, the model experiences a drawdown of 7.08% while yielding an average
yearly profit of 10.05% for the last four years. This is still far from realizing
the coastline's potential, but, in our opinion, a crucial first step in the right
direction.
Finally, we conclude this section by noting that, despite conventional wis-
dom, it is in fact possible to beat a random walk. The Alpha Engine produces
profitable results even on time series generated by a random walk, as seen in
Figure 9 in Appendix B. This unexpected feature results from the fact that the
model is dissecting Brownian motion into intrinsic time events. Now these di-
rectional changes and overshoots yield a novel context, where a cascading event
is more likely to be followed by a de-cascading event than another cascading
one. In detail, the probability of reaching the profitable de-cascading event after
a cascade is 1 − e^(−1) ≈ 0.63, while the probability of an additional cascade
is about 0.37. In effect, the procedure of translating a tick-by-tick time series
into intrinsic time events skews the odds in one's favour, for empirical as well
as synthetic time series. For details see [19].
In the next section, we will embark on the journey that would ultimately
result in the trading model described above. For a prehistory of events, see
Appendix A.

3 Guided by an Event-Based Framework


The trading model algorithm outlined in the last section is the result of a
long journey that began in the early 1980s. Starting with a new conceptual

framework of time, this voyage set out to chart new terrain. The whole history
of this endeavor is described in Appendix A. In the following, the key elements
of this new paradigm are highlighted.

3.1 The First Step: Intrinsic Time


We all experience time as a fundamental and unshakable part of reality. In stark
contrast, the philosophy of time and the notion of time in fundamental physics
challenge our mundane perception of it. In an operational definition, time is
simply what instruments measure and register. In this vein, we understand the
passage of time in financial time series as a set of events, i.e., system interactions.
In this novel time ontology, time ceases to exist between events. In contrast
to the continuity of physical time, now only interactions, or events, let a system's
clock tick. Hence this new methodology is called intrinsic time [28]. This
event-based approach opens the door to a modelling framework that yields self-
referential behavior which does not rely on static building blocks and has a
dynamic frame of reference.
Implicit in this definition is the threshold for the measurement of events.
At different resolutions the same price series reveals different characteristics. In
essence, intrinsic time increases the signal-to-noise ratio in a time series by
filtering out the irrelevant information between events. This dissection of price
curves into events is an operator that maps a time series x(t) into a discrete set
of events, parameterized by the directional change threshold δ.
We focus on two types of events that represent ticks of intrinsic time:
1. a directional change δ [21, 16, 3, 5, 7];
2. an overshoot ω [16, 3, 7].
With these events, every price curve can be dissected into components that
represent a change in the price trend (directional change) and a trend com-
ponent (overshoot). For a directional change to be detected, first an initial
direction mode needs to be chosen. As an example, in an up mode an increas-
ing price move will result in the extremal price being updated and continuously
increased. If the price goes down, the difference between the extremal price and
the current price is evaluated. If this distance (in percent) exceeds the prede-
fined directional change threshold, a directional change is registered. Now the
mode is switched to down and the algorithm continues correspondingly. If now
the price continues to move in the same direction as the directional change, for
the size of the threshold, an overshoot event is registered. As long as a trend
persists, overshoot events will be registered. See the left-hand panel in Figure
2 for an illustration. Note that two intrinsic time series will synchronize after
one directional change, regardless of the chosen starting direction.
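To make the event detection concrete, the following minimal Python sketch (our own illustration, independent of the published code [35]; all function and variable names are ours) dissects a price series into directional-change and overshoot events for a threshold delta given as a fraction, e.g. 0.0025 for 0.25%.

def dissect(prices, delta, mode='up'):
    """Dissect a price series into intrinsic-time events for a threshold delta.

    Returns a list of ('DC', price) and ('OS', price) tuples, i.e. the
    directional-change and overshoot events that make up the coastline.
    """
    events = []
    extreme = reference = prices[0]    # extremal price and overshoot reference
    for p in prices[1:]:
        if mode == 'up':
            if p > extreme:
                extreme = p
                # the trend persists: register an overshoot for every further delta
                while p >= reference * (1 + delta):
                    events.append(('OS', p))
                    reference *= (1 + delta)
            elif p <= extreme * (1 - delta):
                events.append(('DC', p))              # directional change downwards
                mode, extreme, reference = 'down', p, p
        else:
            if p < extreme:
                extreme = p
                while p <= reference * (1 - delta):
                    events.append(('OS', p))
                    reference *= (1 - delta)
            elif p >= extreme * (1 + delta):
                events.append(('DC', p))              # directional change upwards
                mode, extreme, reference = 'up', p, p
    return events

Fed with a tick-by-tick series and delta = 0.0025, the function produces the kind of event sequence visualized in the right-hand panel of Figure 2.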
As a result, a price curve is now comprised of segments, made up of a direc-
tional change event and one or more overshoots of size δ. This event-based
time series is called the coastline, defined for a specific directional change thresh-
old. By measuring the various coastlines for an array of thresholds, multiple
levels of event activity can be considered. See the right-hand panel in Figure 2
and Figure 3. This transformed time series is now the raw material for further

Figure 2: (Left) directional-change and overshoot events. (Right) a coastline representation
of the EUR/USD price curve (2008-12-14 22:10:56 to 2008-12-16 21:58:20)
defined by a directional-change threshold δ = 0.25%. The blue triangles represent
directional-change and the green bullets overshoot events.

investigations [16]. In particular, this price curve will be used as input for the
trading model, as described in Section 3.4. With the publication [15], the first
decade came to a close.

3.2 The Emergence of Scaling Laws


A validation for the introduction of intrinsic time is that this event-based frame-
work uncovers statistical properties otherwise not detectable in the price curves,
for instance, scaling laws. Scaling-law relations characterize an immense num-
ber of natural processes, prominently in the form of
1. scaling-law distributions;
2. scale-free networks;
3. cumulative relations of stochastic processes.
Scaling-law relations display scale invariance because scaling the function's ar-
gument x preserves the shape of the function f (x) [27]. Measurements of
scaling-law processes yield values distributed across an enormous dynamic range,
and for any section analysed, the proportion of small to large events stays con-
stant.
Scaling-law distributions have been observed in an extraordinarily wide range
of natural phenomena: from physics, biology, earth and planetary sciences,
economics and finance, computer science and demography to the social sciences
[32, 40, 37, 31]. Although scaling-law distributions imply that small occurrences
are extremely common, whereas large instances are rare, these large events

Figure 3: Coastline representation of a price curve for various directional-change
thresholds δ.

occur nevertheless much more frequently compared to a normal probability


distribution. Hence scaling-law distributions are said to have fat tails.
The discovery of scale-free networks [9, 1], where the degree distributions
of nodes follow a scaling-law distribution, was a seminal finding advancing the
study of complex networks [30]. Scale-free networks are characterized by high
robustness against random failure of nodes, but susceptible to coordinated at-
tacks on the hubs.
Scaling-law relations also appear in collections of random variables. Promi-
nent empirical examples are financial time-series, where one finds scaling laws
governing the relationship between various observed quantities [29, 21, 15]. The
introduction of the event-based framework led to the discovery of a series of
new scaling relations in the cumulative relations of properties in foreign ex-
change time-series [16]. In detail, of the 18 novel scaling-law relations (of which
12 are independent), 11 relate to directional changes and overshoots.
One notable observation was that, on average, a directional change δ is
followed by an overshoot ω of the same magnitude,

⟨ω⟩ ≈ δ.   (1)

This justifies the procedure of dissecting the price curve into directional-change
and overshoot segments of the same size, as seen in Figures 2 and 3. In other
words, the notion of the coastline is statistically validated.
Scaling laws are a hallmark of complexity and complex systems. They can
be viewed as a universal law of nature underlying complex behavior in all its domains.

3.3 Trading Models and Complexity


A complex system is understood as being comprised of many interacting or in-
terconnected parts. A characteristic feature of such systems is that the whole
often exhibits properties not obvious from the properties of the individual parts.
This is called emergence. In other words, a key issue is how the macro behav-
ior emerges from the interactions of the system's elements at the micro level.
Moreover, complex systems also exhibit a high level of resilience, adaptability,
and self-organization. The domains complex systems originate from are mostly
socio-economic, biological, or physico-chemical.
Complex systems are usually very reluctant to be cast into closed-form ana-
lytical expressions. This means that it is generally hard to derive mathematical
quantities describing the properties and dynamics of the system under study.
Nonetheless, there has been a long history of attempting to understand finance
from an analytical point of view [36, 23].
In contrast, we let our trading model development be guided by the insights
gained by studying complex systems [17]. The single most important feature is
surprisingly subtle:
Macroscopic complexity is the result of simple rules of interaction at
the micro level.
In other words, what looks like complex behavior from a distance turns out
to be the result of simple rules at closer inspection. The profundity of this
observation should not be underestimated, as echoed in the words of Stephen
Wolfram, when he was first struck by this realization [38, p. 9]:
Indeed, even some of the very simplest programs that I looked at had
behavior that was as complex as anything I had ever seen. It took me
more than a decade to come to terms with this result, and to realize
just how fundamental and far-reaching its consequences are.
By focusing on local rules of interactions in complex systems, the system
can be naturally reduced to a set of agents and a set of functions describing
the interactions between the agents. As a result, networks are the ideal formal
representation of the system. Now the nodes represent the agents and the links
describe their relationship or interaction. In effect, the structure of the network,
i.e., its topology, determines the function of the network.
Indeed, this perspective also highlights the paradigm shift away from mathe-
matical models towards algorithmic models, where computations and simulation
are performed by computers. In other words, the analytical description of com-
plex systems is abandoned in favor of algorithms describing the interaction of
the agents. This approach has given rise to the prominent field of agent-based
modeling [22, 26, 4]. The validation of agent-based models is given by their ca-
pability to replicate patterns and behavior seen in real-world complex systems
by virtue of agents interacting according to simple rules.
Financial markets can be viewed as the epitome of a human-generated com-
plex system, where the trading choices of individuals, aggregated in a market,
give rise to a stochastic and highly dynamic price evolution. In this vein, a
long or short position in the market can be understood as an agent. In detail,
a position p_i is comprised of the set {x_i, g_i}, where x_i is the current mid (or
entry) price and g_i represents the position size and direction.

3.4 Coastline Trading

Figure 4: Simple rules: the elements of coastline trading. Cascading and de-
cascading trades increase or decrease existing positions, respectively.

In a next step, we combined the event-based price curve with simple rules of
interactions. This means that the agents interact with the coastline according
to a set of trading rules, yielding coastline traders [18, 2, 13]. In a nutshell,
the initialization of new positions and the management of existing positions
in the market are clocked according to the occurrence of directional change
or overshoot events. The essential elements of coastline trading are cascading
and de-cascading trades. For the former, an existing position is increased by
some increment in a loss, bringing the average closer to the current price. For
a de-cascading event, an existing position is decreased, realizing a profit. It
is important to note that, because position sizes are only ever increased by
the same fixed increments, coastline trading does not represent a Martingale
strategy. In Figures 4 and 5 examples of such trading rules are shown.
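As a toy illustration of these rules, the following Python sketch (our own simplification, not the reference implementation [35]) shows a long-side coastline trader that cascades one unit at each adverse intrinsic event and de-cascades, realizing a profit, at each favourable one.

class CoastlineTrader:
    """Toy long-side coastline trader clocked by intrinsic events.

    It cascades (adds one unit) on an adverse intrinsic move and de-cascades
    (closes one unit at a profit) once the price has recovered by the
    threshold delta relative to an open unit.
    """

    def __init__(self, delta, unit=1.0):
        self.delta = delta
        self.unit = unit
        self.entries = []          # entry prices of the open unit positions
        self.realized_pnl = 0.0

    def on_down_event(self, price):
        """Adverse intrinsic event: provide liquidity by buying one more unit."""
        self.entries.append(price)

    def on_up_event(self, price):
        """Favourable intrinsic event: de-cascade the cheapest open unit, if profitable."""
        if self.entries:
            entry = min(self.entries)
            if price >= entry * (1 + self.delta):
                self.entries.remove(entry)
                self.realized_pnl += self.unit * (price - entry)

Driving such an agent with events produced by a dissection like the one sketched in Section 3.1 yields the stylized behaviour of Figures 4 and 5: every cascade pulls the average entry price of the open inventory towards the current price, and every de-cascade locks in a small profit.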
With these developments, the second decade drew to a close. Led by the
introduction of event-based time, uncovering scaling-law relations, the novel
framework could be embedded in the larger paradigm related to the study of
complex systems. The resulting trading models were, by construction, auto-
mated, agent-based, contrarian, parsimonious, adaptive, self-similar, and mod-
ular. However, there was one crucial ingredient missing, to render the models
robust and hence profitable in the long-term. And so the journey continued.

Figure 5: Real-world example of coastline trading.

3.5 Novel Insights from Information Theory


In a normal market regime, where no strong trend can be discerned, the coast-
line traders generate consistent returns. By construction, this trading model
algorithm is attuned to directional changes and overshoots. As a result, so long
as markets move in gentle fluctuations, this strategy performs. In contrast, dur-
ing times of strong market trends, the agents tend to build up large positions
which they cannot unload. Consequently, each agent's inventory increases in
size. As this usually happens over multiple threshold sizes, the overall resulting
model behavior derails.
This challenge, related to trends, led to the incorporation of novel elements
into the trading model design. A new feature, motivated by information theory
was added. Specifically, a probability indicator was constructed. Equipped with
this new tool, the challenges presented by market trends could now be tackled.
In effect, the likeliness of a current price evolution with respect to a Brownian
motion can be assessed in a quantitative manner.
In the following, we will introduce the probability indicator L. This is an
information theoretic value that measures the unlikeliness of the occurrence of
price trajectories. As always, what is actually analyzed, is the price evolu-
tion which is mapped onto the discretized price curve, which results from the
event-based language in combination with the overshoot scaling law. Point-
wise entropy, or surprise, is defined as the entropy for a certain realization of a
random variable. Following [12], we understand the surprise of the event-based
price curve being related to the transitioning probability from the current state
s_i to the next intrinsic event s_j, i.e., P(s_i → s_j). In detail, given a directional
change threshold δ, the set of possible events is given by directional changes or
overshoots. In other words, a state at time i is given by s_i ∈ S = {δ, ω}.
Given S, we now can understand all possible transitions as happening in the

Figure 6: The transition network of states in the event-based representation of the
price trajectories. Directional changes δ and overshoots ω are the building blocks of
the discretized price curve, defining intrinsic time.

stylized network of states seen in Figure 6. The evolution of intrinsic time can
progress from a directional change δ to another directional change or to an overshoot ω,
which, in turn, can transition to another overshoot event or back
to a directional change δ.
We define the surprise of the transition from state s_i to state s_j as

γ_ij = −log P(s_i → s_j),   (2)

which, as mentioned, is the point-wise entropy that is large when the probability
of transitioning from state s_i to state s_j is small and vice versa. Consequently,
the surprise of a price trajectory within a time interval [0, T], that has experienced
K transitions, is

γ_K^[0,T] = − Σ_{k=1}^{K} log P(s_{i_k} → s_{i_{k+1}}).   (3)

This is now a measure of the unlikeliness of price trajectories. It is a path-dependent
measurement: two price trajectories exhibiting the same volatility
can have very different surprise values.
Following [19], H^(1) denotes the entropy rate associated with the state transitions
and H^(2) is the second order of informativeness. Utilizing these building
blocks, the next expression can be defined as

γ̂_K^[0,T] = ( γ_K^[0,T] − K·H^(1) ) / √( K·H^(2) ).   (4)

This is the surprise of a price trajectory, centered by its expected value, i.e.,
the entropy rate multiplied by the number of transitions, and divided by the
square root of its variance, i.e., the second order of informativeness multiplied
by the number of transitions. It can be shown that

γ̂_K^[0,T] → N(0, 1), for K → ∞,   (5)

by virtue of the central limit theorem [33]. In other words, for large K, γ̂_K^[0,T]
converges to a normal distribution. Equation (4) now allows for the introduction
of our probability indicator L, defined as

L = 1 − Φ( ( γ_K^[0,T] − K·H^(1) ) / √( K·H^(2) ) ),   (6)

where Φ is the cumulative distribution function of the standard normal distribution. Thus,
an unlikely price trajectory, strongly deviating from a Brownian motion, leads
to a large surprise and hence L ≈ 0. We can now quantify when markets show
normal behavior, where L ≈ 1. Again, the reader is referred to [19] for more
details.
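A minimal numerical sketch of Equations (2) to (6) in Python is given below (our own illustration; H^(1) and H^(2) are computed as the mean and the variance of the per-transition surprise, which is how the text above describes them, and the state names and probabilities in the example are hypothetical).

import math
from statistics import NormalDist

def indicator_L(transitions, prob, stationary):
    """Probability indicator L of Equation (6) for a realized transition sequence.

    transitions: list of (from_state, to_state) pairs observed in [0, T]
    prob:        dict mapping (from_state, to_state) to P(s_i -> s_j)
    stationary:  dict mapping each state to its stationary probability
    """
    # surprise of the realized path, Equation (3)
    gamma = sum(-math.log(prob[t]) for t in transitions)
    K = len(transitions)

    # H1: expected surprise per transition (entropy rate); H2: its variance
    h1 = sum(stationary[a] * p * (-math.log(p)) for (a, _), p in prob.items())
    h2 = sum(stationary[a] * p * math.log(p) ** 2 for (a, _), p in prob.items()) - h1 ** 2

    z = (gamma - K * h1) / math.sqrt(K * h2)   # standardized surprise, Equation (4)
    return 1.0 - NormalDist().cdf(z)           # Equation (6)

# Hypothetical two-state example: 'DC' = directional change, 'OS' = overshoot.
prob = {('DC', 'OS'): 0.6, ('DC', 'DC'): 0.4, ('OS', 'DC'): 0.7, ('OS', 'OS'): 0.3}
stationary = {'DC': 0.54, 'OS': 0.46}
trending_burst = [('DC', 'OS')] + [('OS', 'OS')] * 5   # a persistent overshoot run
print(indicator_L(trending_burst, prob, stationary))   # close to 0: an unlikely path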
We now assess how the overshoot event should be chosen. The standard
framework for coastline trading dictates that an overshoot event occurs in the
price trajectory when the price moves by δ in the overshoot's direction after
a directional change. In the context of the probability indicator, we depart
from this procedure and define the overshoots to occur when the price moves by
2.525729·δ. This value comes from maximizing the second-order informativeness
H^(2) and guarantees maximal variability of the probability indicator L. For
details see [19].
The probability indicator L can now be used to navigate the trading models
through times of severe market stress. In detail, by slowing down the increase
of the inventory of agents during price overshoots, the overall trading model's
exposure experiences smaller drawdowns and better risk-adjusted performance.
As a simple example, when an agent cascades, i.e., increases its inventory, the
unit size is reduced in times where L starts to approach zero.
For the trading model, the probability indicator is utilized as follows. The
default size for cascading is one unit (lot). If L is smaller than 0.5, this sizing
is reduced to 0.5, and finally if L is smaller than 0.1, then the size is set to 0.1.
Implementing the above mentioned measures allowed the trading model to
safely navigate treacherous terrain, where it derailed in the past. However,
there was still one crucial insight missing, before a successful version of the
Alpha Engine could be designed. This last insight revolves around a subtle
recasting of thresholds which has profound effects on the resulting trading model
performance.

3.6 The Final Pieces of the Puzzle


Coming back full circle, the focus was again placed on the nature of the event
based formalism. By allowing for new degrees of freedom, the trading model
puzzle could be concluded. What before were rigid and static thresholds are
now allowed to breathe, giving rise to asymmetric thresholds and fractional
position changes.
In the context of directional changes and overshoots, an innocuous question
to ask is whether the threshold defining the events should depend on the direc-
tion of the current market. In other words, does it make sense to introduce a
threshold that is a function of the price move direction? Analytically
δ = δ_up for increasing prices,  and  δ = δ_down for decreasing prices.   (7)

These asymmetric thresholds now register directional changes at different val-


ues of the price curve, depending on the direction of the price movement. As a
consequence = (up , down ) denotes the length of the overshoot correspond-
ing to the new upward and downward directional change thresholds. By virtue


Figure 7: Monte Carlo simulation of the number of directional changes N, seen in
Equation (9), as a function of the asymmetric directional change thresholds δ_up and
δ_down, for a Brownian motion defined by μ and σ. The left-hand panel shows a
realization with no trend, while the other two panels have an underlying trend.

of the overshoot size scaling law

⟨ω_up⟩ = δ_up,   ⟨ω_down⟩ = δ_down.   (8)

To illustrate, let P_t be a price curve, modeled as an arithmetic Brownian
motion B_t with trend μ and volatility σ, meaning dP_t = μ dt + σ dB_t. Now the
expected number of upward and downward directional changes during a time
interval [0, T] is a function

N = N(δ_up, δ_down, μ, σ, [0, T]).   (9)

In Figure 7 the result of a Monte Carlo simulation is shown. For the situation
with no trend (left-hand panel) we see the contour lines being perfect circles.
In other words, by following any defined circle, the same number of directional
changes are found for the corresponding asymmetric thresholds. Details about
the analytical expressions and the Monte Carlo simulation regarding the number
of directional changes can be found in [20].
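A simplified Monte Carlo experiment in the spirit of Figure 7 can be set up as follows (our own Python sketch on an arithmetic Brownian path; the convention for which threshold ends which trend leg, as well as all parameter values, are merely illustrative).

import random

def count_directional_changes(path, d_up, d_down):
    """Count directional changes on a path under asymmetric thresholds.

    Here d_down is the reversal that ends a rising leg (a downward directional
    change) and d_up the reversal that ends a falling leg, both in path units.
    """
    n, mode, extreme = 0, 'up', path[0]
    for x in path[1:]:
        if mode == 'up':
            extreme = max(extreme, x)
            if x <= extreme - d_down:
                n, mode, extreme = n + 1, 'down', x
        else:
            extreme = min(extreme, x)
            if x >= extreme + d_up:
                n, mode, extreme = n + 1, 'up', x
    return n

def expected_dc(mu, sigma, steps, d_up, d_down, runs=20):
    """Average number of directional changes over Brownian paths with drift mu."""
    total = 0
    for _ in range(runs):
        x, path = 0.0, [0.0]
        for _ in range(steps):
            x += mu + sigma * random.gauss(0.0, 1.0)
            path.append(x)
        total += count_directional_changes(path, d_up, d_down)
    return total / runs

# Scanning (d_up, d_down) over a grid with mu = 0 traces out the circular
# contours of the left-hand panel of Figure 7.
print(expected_dc(mu=0.0, sigma=0.05, steps=20_000, d_up=0.3, d_down=0.3))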
This opens up the space of possibilities, as up to now, only the 45-degree line
in all panels of Figure 7 was considered, corresponding to symmetric thresholds
δ = δ_up = δ_down. For trending markets, one can observe a shift in the contour
lines, away from the circles. In a nutshell, for a positive trend the expected
number of directional changes is larger if δ_up > δ_down. This reflects the fact
that an upward trend is naturally comprised of longer up-move segments. The
contrary is true for down moves.
Now it is possible to introduce the notion of invariance as a guiding principle.
By rotating the 45-degree line in the correct manner for trending markets, the
number of directional changes will stay constant. In other words, if the trend
is known, the thresholds can be skewed accordingly to compensate. However,

it is not trivial to construct a trend indicator that is predictive and not only
reactive.
A workaround is found by taking the inventory as a proxy for the trend. In
detail, the expected inventory size I for all agents in normal market conditions
can be used to gauge the trend: E[I(δ_up, δ_down)] is now a measure of trendiness
and hence triggers threshold skewing. In other words, by taking the inventory as
an invariant indicator, the 45-degree line can be rotated due to the asymmetric
thresholds, counteracting the trend.
A more mathematical justification can be found in the approach of what is
known as indifference prices in market making. This method can be trans-
lated into the context of intrinsic time and agents' inventories. It then mandates
that the utility (or preference) of the whole inventory should stay the same for
skewed thresholds and inventory changes. In other words, how can the thresh-
olds be changed in a way that feels the same as if the inventory increased or
decreased by one unit? Expressed as equations

U(δ_down, δ_up, I) = U(δ_down^+, δ_up^+, I + 1),   (10)

and

U(δ_down, δ_up, I) = U(δ_down^−, δ_up^−, I − 1),   (11)

where U represents a utility function. The thresholds δ_up^+, δ_down^+, δ_up^−, and δ_down^−
are indifference thresholds.
A pragmatic implementation of such an inventory-driven skewing of thresholds
is given by the following equation, corresponding to a long position:

δ_up / δ_down = { 2 if I ≥ 15;  4 if I ≥ 30 }.   (12)

For a short position, the fractions are inverted:

δ_down / δ_up = { 2 if I ≤ −15;  4 if I ≤ −30 }.   (13)

In essence, in the presence of trends, the overshoot thresholds decrease as a
result of the asymmetric directional change thresholds.
This also motivates a final relaxation of a constraint. The final ingredient of
the Alpha Engine are fractional position changes. Recall that coastline trading
is simply an increase or decrease in the position size at intrinsic time events.
This cascading and de-cascading was done by one unit. For instance, increasing
a short position size by one unit if the price increases and reaches an upward
overshoot. To make this procedure compatible with asymmetric thresholds, the
new cascading and de-cascading events, resulting from the asymmetric thresh-
old, are now done with a fraction of the original unit. The fractions are also
dictated by Equations (12) and (13). In effect, the introduction of asymmetric
thresholds leads to a subdivision of the original threshold into smaller parts,
where the position size is changed by sub-units on these emerging thresholds.
An example is shown in Figure 8. Assuming that a short position was opened
at a lower price than the minimal price in the illustration, the directional change

Figure 8: Cascading with asymmetric thresholds. A stylized price curve is shown
in both panels. (Left) The original (symmetric) setup with an upward directional
change event (continuous line) and two overshoots (dashed lines). Short position size
increments are shown as downward arrows. (Right) The situation corresponding to
asymmetric thresholds, where intrinsic time accelerates and smaller position size
increments are utilized for coastline trading. See details in text.

will trigger a cascading event. In other words, one (negative) unit of exposure
(symbolized by the large arrows) is added to the existing short position. The
two overshoot events in the left-hand panel trigger identical cascading events. In
the right-hand panel, the same events are augmented by asymmetric thresholds.
Now δ_up = δ_down/4. As a result, each overshoot length is divided into four
segments. The new cascading regime is as follows: increase the position by one-
fourth of a (negative) unit (small arrow) at the directional change and another
fourth at the first, second, and third asymmetric overshoots each. In effect, the
cascading event is smeared out and happens in smaller unit sizes over a longer
period. For the cascading events at the first and second original overshoot, this
procedure is repeated.
This concludes the final chapter in the long history of the trading model
development. Many insights from diverse fields were consolidated and a unified
modelling framework emerged.

4 The Nuts and Bolts: A Summary of the Alpha Engine
All the insights gained along this long journey need to be encapsulated and
translated into algorithmic concepts. In this section, we summarize in detail
the trading model behavior and specify the parameters.
The intrinsic time scale dissects the price curve into directional changes
and overshoots, yielding an empirical scaling law equating the length of the
overshoot ω to the size of the directional change threshold δ, i.e., ⟨ω⟩ ≈ δ. This
scaling law creates an event-based language that clocks the trading model.
In essence, intrinsic time events define the coastline trading behavior with its
hallmark cascading and de-cascading events. In other words, the discrete price

curve with occurrences of intrinsic time events triggers an increase or decrease
in position sizes.
In detail, an intrinsic event is either a directional change or a move of size δ
in the direction of the overshoot. For each exchange rate, we assign four coastline
traders CT_i[δ_up/down(i)], i = 1, 2, 3, 4, that operate at various scales, with
upward and downward directional change thresholds equaling δ_up/down(1) =
0.25%, δ_up/down(2) = 0.5%, δ_up/down(3) = 1.0%, and δ_up/down(4) = 1.5%.
The default size for cascading and de-cascading a position is one unit (lot).
The probability indicator L_i, assigned to each coastline trader, is evaluated on
the fixed scale δ(i) = δ_up/down(i). As a result, its states are directional changes
of size δ(i) or overshoot moves of size 2.525729·δ(i). The default unit size for
cascading is reduced to 0.5 if L_i is smaller than 0.5. Additionally, if L_i is smaller
than 0.1, then the size is further reduced to 0.1.
In case a coastline trader accumulates an inventory with a long position
greater than 15 units, the upward directional change threshold δ_up(i) is increased
to 1.5 times its original size, while the downward directional change threshold
δ_down(i) is decreased to 0.75 times its original size. In effect, the ratio for the
skewed thresholds is δ_up(i)/δ_down(i) = 2. The agent with the skewed thresholds
will cascade when the overshoot reaches 0.5 of the skewed threshold, i.e.,
half of the original threshold size. In case the inventory with a long position
is greater than 30, then the upward directional change threshold δ_up(i) is increased
to 2.0 times its original size and the downward directional change threshold
δ_down(i) is decreased to 0.5 of its original size. The ratio of the skewed thresholds now equals
δ_up(i)/δ_down(i) = 4. The agent with these skewed thresholds will cascade when
the overshoot extends by 0.25 of the original threshold, with one-fourth of the
specified unit size. This was illustrated in the right-hand panel of Figure 8. The
changes in threshold lengths and sizing are analogous for short inventories.
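Putting the numbers of this section together, the inventory-driven skewing and the associated cascading fraction can be collected in a small helper (our own Python sketch; the cut-offs 15 and 30, the factors 1.5/0.75 and 2.0/0.5, and the cascading fractions 0.5 and 0.25 are the values quoted above, mirrored for short inventories).

def skewed_thresholds(delta, inventory):
    """Return (delta_up, delta_down, cascade_fraction) for a signed inventory.

    cascade_fraction is the fraction of the original threshold (and of the
    unit size) at which the skewed agent cascades.
    """
    if inventory >= 30:            # large long inventory
        return 2.0 * delta, 0.5 * delta, 0.25
    if inventory >= 15:
        return 1.5 * delta, 0.75 * delta, 0.5
    if inventory <= -30:           # large short inventory (mirrored)
        return 0.5 * delta, 2.0 * delta, 0.25
    if inventory <= -15:
        return 0.75 * delta, 1.5 * delta, 0.5
    return delta, delta, 1.0       # normal regime: symmetric thresholds

# Example: a coastline trader with delta = 0.25% holding 32 long units.
print(skewed_thresholds(0.0025, 32))   # (0.005, 0.00125, 0.25)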
This concludes the description of the trading model algorithm and the mo-
tivation of the chosen modeling framework. Recall that the interested reader
can download the code from GitHub [35].

5 Conclusion and Outlook


The trading model algorithm described here is the result of a meandering jour-
ney that lasted for decades. Guided by an overarching event-based framework,
recasting time as discrete and driven by activity, elements from complexity the-
ory and information theory were added. In a nutshell, the proposed trading
model is defined by a set of simple rules executed at specific events in the mar-
ket. This approach to designing automated trading models yields an algorithm
that fulfills many desired features. Its parsimonious, modular, and self-similar
design results in behaviour that is profitable, robust, and adaptive.
Another crucial feature of the trading model is that it is designed to be
counter trend. The coastline trading ensures that positions, which are going
against a trend, are maintained or increased. In this sense, the models provide
liquidity to the market. When market participants want to sell, the investment
strategy will buy and vice versa. This market-stabilizing feature of the model is

beneficial to the markets as a whole. The more such strategies are implemented,
the less we expect to see runaway markets but healthier market conditions
overall. By construction, the trading model only ceases to perform in low-
volatility markets.
It should be noted that the model framework presented here can be real-
ized with reasonable computational resources. The basic agent-based algorithm
shows profitable behavior for four directional change thresholds, on which the
positions (agents) live. However, by adding more thresholds the model behav-
ior is expected to become more robust, as more information coming from the
market can be processed by the trading model. In other words, by increasing
the model complexity the need for performant computing becomes relevant for
efficient prototyping and back-testing. In this sense, we expect the advance-
ments in high performance computing in finance to positively impact the Alpha
Engine's evolution.
Nevertheless, with all the merits of the trading algorithm presented here,
we are only at the beginning. The Alpha Engine should be understood as
a prototype. It is, so to speak, a proof of concept. For one, the parameter
space can be explored in greater detail. Then, the model can be improved by
calibrating the various exchange rates by volatility, or by excluding illiquid ones.
Furthermore, the model treats all the currency pairs in isolation. There should
be a large window of opportunity for increasing the performance of the trading
model by introducing correlation across currency pairs. This is a unique and
invaluable source of information not yet exploited. Finally, a whole layer of risk
management can be implemented on top of the models.
We hope to have presented a convincing set of tools motivated by a consistent
philosophy. If so, we invite the reader to take what is outlined here and improve
upon it.

A A History of Ideas
This section is a personal account of the historical events that would ultimately
lead to the development of the trading model algorithm outlined in this chapter,
told by Richard B. Olsen:
The development of the trading algorithm and model framework dates back
to my studies in the mid 70s and 80s. From the very start my interests in
economics were influenced by my admiration of the scientific rigor of natural
sciences and their successful implementations in the real world. I argued that
the resilience of the economic and political systems depends on the underlying
economic and political models. Motivated to contribute to the well-being of
society, I wanted to work on enhancing economic theory and on applying
the models.
I first studied law at the University of Zurich and then, in 1979, moved
to Oxford to study philosophy, politics, and economics. In 1980 I attended a
course on growth models by James Mirrlees, who, in 1996, would receive a Nobel
prize in economics. In his first lecture he discussed the shortcomings of the
models, such as [25]. He explained that the models are successful in explaining

growth as long as there are no large exogenous shocks. But unanticipated events
are inherent to our lives and the economy at large. I thus started to search
for a model framework that can both explain growth and handle unexpected
exogenous shocks. I spent one year studying the Encyclopedia Britannica and
found my inspiration in relativity theory.
In my 1981 PhD thesis, titled Interaction between Law and Society, at
the University of Zurich, I developed a new model framework that describes in
an abstract language, how interactions in the economy occur. At the core of
the new approach are the concepts of object, system, environment, and event-
based intrinsic time. Every object has its system that comprises all the forces
that impact and influence the object. Outside the system is its environment
with all the forces that do not impact the object. Every object and system
has its own frame of reference with an event-based intrinsic time scale. Events
are interactions between different objects and their systems. I concluded that
there is no abstract and universal time scale applicable to every object. This
motivated me to think about the nature of time and how we use time in our
everyday economic models.
After finishing my studies, I joined a bank working first in the legal depart-
ment, then in the research group, and finally joined the foreign exchange trading
desk. My goal was to combine empirical work with academic research, but was
disappointed with the pace of research at the bank. In the mid 80s, there was
the first buzz about start-ups in the United States. I came up with a business
idea: banks have a need for quality information to increase profitability, so there
should be a market for quality real time information.
I launched a start-up with the name of Olsen & Associates. The goal was
to build an information system for financial markets with real time forecasts
and trading recommendations using tick-by-tick market data. The product
idea combined my research interest with an information service, which would
both improve the quality of decision-making in financial markets and generate
revenue to fund further research. The collection of tick market data began in
January 1986 from Reuters. We faced many business and technical obstacles,
where data storage cost was just one of the many issues. After many setbacks
we successfully launched our information service and eventually acquired 60 big
to mid-sized banks across Europe as customers.
In 1990, we published our first scientific paper [29] revealing the first scaling
law. The study showed that intraday prices have the same scaling law exponent
as longer-term price movements. We had expected two different exponents: one
for intraday price movements, where technical factors dictate price discovery,
and another for longer-term price movements that are influenced by fundamen-
tals. The result took us by surprise and was evidence that there are universal
laws that dictate price discovery at all scales. In 1995 we organized the first
high frequency data conference in Zurich, where we made a large sample of tick
data available to the academic community. The conference was a big success
and boosted market microstructure research, which was in its infancy at that
time. In the following years we conducted exhaustive research testing all possi-
ble model approaches to build a reliable forecasting service and trading models.
Our research work is described in the book [15]. The book covers data collection

and filtering, basic stylized facts of financial market time series, the modelling
of 24 hour seasonal volatility, realized volatility dynamics, volatility processes,
forecasting return and risks, correlation, and trading models. For many years
the book was a standard text for major hedge funds. The actual performance of
our forecasting and trading models was, however, spurious and disappointing.
Our models were best in class, but we had not achieved a breakthrough.
Back in 1995, we were selling tick-by-tick market data to top banks and
created a spinoff under the name of OANDA to market a currency converter on
the emerging Internet and eventually build a foreign exchange market-making
business. The OANDA currency converter was an instant success. At the start
of 2001, we were completing the first release of our trading platform. At the
same time, Olsen & Associates was a treasure store of information and risk
services, but did not have cash to market the products and was struggling for
funding. When the Internet bubble burst and markets froze, we could not pay
our bills and the company went into default. I was able to organize a bailout
with a new investor. He helped to salvage the core of Olsen & Associates with
the aim of building a hedge fund under the name of Olsen Ltd and buying up
the OANDA shares.
In 2001, the OANDA trading platform was a novelty in the financial indus-
try: straight through processing, one price for everyone, and second-by-second
interest payments. At the time, these were true firsts. At OANDA, a trader
could buy literally 1 EUR against USD at the same low spread as a buyer of
1 million EUR against USD. The business was an instant success. Moreover,
the OANDA trading platform was a research laboratory: we could analyse the trades of tens of thousands of traders, all buying and selling at the same terms and conditions, and observe their behaviour patterns in different market environments. I learned hands-on how financial markets really work and discovered that basic assumptions of market efficiency that we had taken for granted at Olsen & Associates were inappropriate. I was determined to make a fresh start in model
development.
At Olsen Ltd I made a strategic decision to focus exclusively on trading
model research. Trading models have a big advantage over forecasting models:
the profit and losses of a trading model are an unambiguous success criterion of
the quality of a model. We started with the forensics of the old model algorithms
and discovered that the success and failure of a model depends critically on the
definition of time and how data is sampled. Already at Olsen & Associates we
were sensitive to the issue of how to define time and had rescaled price data
to account for the 24-hour seasonality of volatility, but did not succeed with a
more sophisticated rescaling of time. There was one operator that we had failed
to explore. We had developed a directional change indicator and had observed
that the indicator follows a scaling law behaviour similar to the absolute price
change scaling law [21]. This scaling law was somehow forgotten and was not
mentioned in our book [15]. I had incidental evidence that this operator would be successful in redefining time, because traders use such an operator to analyse
markets. The so-called point and figure chart replaces the x-axis of physical
time with an event scale. As long as the market price keeps moving up, the plot stays in the same column; when the price reverses and moves down by more than the box size, a new column is started and the plotting moves one column to the right.
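As an illustration, such an event-based operator can be sketched in a few lines of Java. This is a minimal sketch only: the class name, method names, and the example threshold are assumptions for illustration and are not taken from the reference implementation [35].

// Minimal sketch of a directional-change event detector: physical time is
// replaced by an event scale, analogous to a point and figure chart.
// All names and the example threshold are illustrative.
public class DirectionalChangeDetector {
    private final double delta;   // directional-change threshold, e.g. 0.005 for 0.5%
    private double extreme;       // extreme price reached since the last event
    private int mode = +1;        // +1: tracking an up move, -1: tracking a down move
    private boolean initialized = false;

    public DirectionalChangeDetector(double delta) {
        this.delta = delta;
    }

    // Feed one tick; returns +1 (upward event), -1 (downward event) or 0 (no event).
    public int onPrice(double price) {
        if (!initialized) {
            extreme = price;
            initialized = true;
            return 0;
        }
        if (mode == +1) {
            if (price > extreme) {
                extreme = price;                        // new high: stay in the same "column"
            } else if (price <= extreme * (1.0 - delta)) {
                mode = -1;                              // reversal by delta: start a new "column"
                extreme = price;
                return -1;
            }
        } else {
            if (price < extreme) {
                extreme = price;                        // new low: stay in the same "column"
            } else if (price >= extreme * (1.0 + delta)) {
                mode = +1;                              // reversal by delta: start a new "column"
                extreme = price;
                return +1;
            }
        }
        return 0;
    }

    public static void main(String[] args) {
        DirectionalChangeDetector dc = new DirectionalChangeDetector(0.005);
        double[] ticks = {1.000, 1.004, 1.010, 1.006, 1.004, 0.999, 1.002, 1.005};
        for (double p : ticks) {
            int event = dc.onPrice(p);
            if (event != 0) System.out.println("directional change " + event + " at price " + p);
        }
    }
}

Intrinsic time then advances only when onPrice returns a non-zero value, that is, only at directional-change events.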
Watching OANDA traders also gave me another key insight: market prices are path dependent. There was empirical evidence that a margin
call of one trader from anywhere in the world could trigger a whole cascade
of margin calls in the global foreign exchange markets in periods of herding
behaviour. Cascades of margin calls wipe out whole cohorts of traders, tilt the market composition of buyers and sellers, and skew the long-term price
trajectory. Traditional time series models cannot adequately model these phe-
nomena. We decided to move to agent-based models to better incorporate the
emergent market dynamics and use the scaling laws as a framework to calibrate
the algorithmic behaviour of the agents. This seemed attractive, because we
could configure self-similar agents at different scales.
I was adamant about building bare-bones agents and not cluttering our model algorithms with tools of spurious quality. In 2008, we were rewarded with a major breakthrough: we discovered a large set of scaling laws [16]. I expected that model development would be plain sailing from there on. I was wrong. The
road of discovery was much longer than anticipated. Our hedge fund had several
significant drawdowns that forced me to close the fund in 2013. At OANDA,
things had also deteriorated. After raising 100 million USD for 20% of the
company in 2007, I had become chairman without executive powers. OANDA
turned into a conservative company and lost its competitive edge. In 2012, I
left the board.
In July 2015, I raised the first seed round for Lykke, a new startup. Lykke
builds a global marketplace for all asset classes and instruments on the blockchain.
The marketplace is open source and a public utility. We will earn money by providing liquidity with our own funds and/or our customers' funds, using algorithms such as the one described in this paper.
B Supplementary Material

[Figure: trading model P&L curve for a geometric random walk; vertical axis: P&L, from 0% to 35%; horizontal axis: time.]
Figure 9: Profit & Loss for a time series, generated by a geometric random walk
of 10 million ticks with annualized volatility of 25%. The average of 60 Monte
Carlo simulations is shown. In the limiting case, the P&L curve becomes a smooth
increasing line.
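For reference, a series of the kind underlying Figure 9 can be generated along the following lines. The annualized volatility and the tick count are taken from the caption; the assumed number of ticks per year (and hence the per-tick volatility), the random seed, and all names are illustrative assumptions, as the simulation details are not specified here.

import java.util.Random;

// Sketch of a driftless geometric random walk, of the kind used for the Monte
// Carlo runs of Figure 9 (10 million ticks, 25% annualized volatility). The
// number of ticks per year, the seed, and all names are illustrative assumptions.
public class GeometricRandomWalk {

    public static double[] generate(long seed, int nTicks, double annualVol, double ticksPerYear) {
        Random rng = new Random(seed);
        double sigma = annualVol / Math.sqrt(ticksPerYear);  // per-tick volatility
        double[] prices = new double[nTicks];
        prices[0] = 1.0;
        for (int i = 1; i < nTicks; i++) {
            // the log-price performs a random walk with Gaussian increments
            prices[i] = prices[i - 1] * Math.exp(sigma * rng.nextGaussian());
        }
        return prices;
    }

    public static void main(String[] args) {
        // assume, for illustration, that the 10 million ticks span one year
        double[] series = generate(42L, 10_000_000, 0.25, 10_000_000.0);
        System.out.println("final price: " + series[series.length - 1]);
    }
}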

%       Jan    Feb    Mar    Apr    May   June   July    Aug    Sep    Oct    Nov    Dec   Year
2006   0.16   0.15   0.07   0.12   0.22   0.17   0.19   0.20   0.18   0.08  -0.00   0.04   1.58
2007   0.08   0.22   0.14   0.02  -0.05  -0.03   0.32   0.59   0.07   0.11   0.47   0.20   2.03
2008   0.24   0.07   0.05   0.50   0.26   0.09   0.26   0.16   0.66   2.22   1.27   0.98   6.03
2009   1.14   1.41   1.17   1.00   0.75   0.59   0.22   0.19  -0.13   0.28   0.06   0.25   7.70
2010   0.15  -0.34   0.24   0.14   0.30   0.17   0.27  -0.02   0.03   0.06   0.14  -0.31   1.42
2011   0.45   0.13   0.11  -0.16   0.04  -0.06  -0.40   0.43   0.45  -0.03   0.32  -0.03   0.97
2012  -0.08   0.19   0.29   0.08  -0.12   0.15  -0.20   0.23   0.10   0.13   0.12   0.11   0.86
2013  -0.17  -0.01  -0.10  -0.08   0.32   0.52   0.04   0.24  -0.10   0.01  -0.01  -0.16   0.77

Table 1: Monthly performance of the unleveraged trading model. The P&L is given in percent. All 23 currency pairs are aggregated.
References
[1] R. Albert and A.L. Barabási. "Statistical Mechanics of Complex Networks". In: Reviews of Modern Physics 74.1 (2002), pp. 47–97.
[2] Monira Aloud, Edward Tsang, Alexandre Dupuis, and Richard Olsen. "Minimal agent-based model for the origin of trading activity in foreign exchange market". In: 2011 IEEE Symposium on Computational Intelligence for Financial Engineering and Economics (CIFEr). IEEE, 2011, pp. 1–8.
[3] Monira Aloud, Edward Tsang, Richard B. Olsen, and Alexandre Dupuis. "A directional-change events approach for studying financial time series". In: Economics Discussion Papers 2011-28 (2011).
[4] Jørgen Vitting Andersen and Didier Sornette. "A mechanism for pockets of predictability in complex adaptive systems". In: EPL (Europhysics Letters) 70.5 (2005), p. 697.
[5] Han Ao and Edward Tsang. "Capturing Market Movements with Directional Changes". In: Working paper, Centre for Computational Finance and Economic Agents, Univ. of Essex (2013).
[6] Pooneh Baghai, Onur Erzan, and Ju-Hon Kwek. The $64 trillion question: Convergence in asset management. 2015.
[7] Amer Bakhach, Edward P. K. Tsang, and Wing Lon Ng. "Forecasting Directional Changes in Financial Markets". In: Working paper, Centre for Computational Finance and Economic Agents, Univ. of Essex (2015).
[8] Bank for International Settlements. Triennial Central Bank Survey of foreign exchange and OTC derivatives markets in 2016. Monetary and Economic Department, 2016.
[9] A.L. Barabási and R. Albert. "Emergence of Scaling in Random Networks". In: Science (1999), p. 509.
[10] Antoine Bouveret, Cyrille Guillaumie, Carlos Aparicio Roqueiro, Christian Winkler, and Steffen Nauhaus. High frequency trading activity in EU equity markets. 2014.
[11] Jim X. Chen. "The Evolution of Computing: AlphaGo". In: Computing in Science & Engineering 18.4 (2016), pp. 4–7.
[12] T.M. Cover and J.A. Thomas. Elements of Information Theory. New York, NY, USA: John Wiley & Sons, 1991.
[13] Alexandre Dupuis and Richard B. Olsen. "High Frequency Finance: Using Scaling Laws to Build Trading Models". In: Handbook of Exchange Rates. Ed. by Jessica James, Ian W. Marsh, and Lucio Sarno. John Wiley & Sons, Inc., 2012, pp. 563–584. isbn: 9781118445785. doi: 10.1002/9781118445785.ch20. url: https://dx.doi.org/10.1002/9781118445785.ch20.
[14] J. Doyne Farmer and Duncan Foley. "The economy needs agent-based modelling". In: Nature 460.7256 (2009), pp. 685–686.
[15] Ramazan Gençay, Michel Dacorogna, Ulrich A. Müller, Olivier Pictet, and Richard Olsen. An Introduction to High-Frequency Finance. Academic Press, 2001.
[16] J. B. Glattfelder, A. Dupuis, and R. B. Olsen. "Patterns in high-frequency FX data: Discovery of 12 empirical scaling laws". In: Quantitative Finance 11.4 (2011), pp. 599–614.
[17] James B. Glattfelder. Decoding Complexity. Springer, Berlin, 2013.
[18] James B. Glattfelder, Thomas Bisig, and Richard B. Olsen. R&D Strategy Document. Tech. rep. A Paper by the Olsen Ltd. Research Group, 2010. url: https://arxiv.org/abs/1405.6027.
[19] Anton Golub, Gregor Chliamovitch, Alexandre Dupuis, and Bastien Chopard. "Multiscale representation of high frequency market liquidity". In: Algorithmic Finance 5.1 (2016).
[20] Anton Golub, James B. Glattfelder, Vladimir Petrov, and Richard B. Olsen. Waiting Times and Number of Directional Changes in Intrinsic Time Framework. Lykke Corp & University of Zurich Working Paper, 2017.
[21] Dominique M. Guillaume, Michel M. Dacorogna, Rakhal R. Dave, Ulrich A. Müller, Richard B. Olsen, and Olivier V. Pictet. "From the bird's eye to the microscope: A survey of new stylized facts of the intra-daily foreign exchange markets". In: Finance and Stochastics 1.2 (1997), pp. 95–129.
[22] Dirk Helbing. "Agent-based modeling". In: Social Self-Organization. Springer, 2012, pp. 25–70.
[23] John C. Hull. Options, Futures and other Derivative Securities. 9th edition. Pearson, London, 2014.
[24] ISDA. Central Clearing in the Equity Derivatives Market. 2014.
[25] Nicholas Kaldor and James A. Mirrlees. "A new model of economic growth". In: The Review of Economic Studies 29.3 (1962), pp. 174–192.
[26] Thomas Lux and Michele Marchesi. "Volatility clustering in financial markets: a microsimulation of interacting agents". In: International Journal of Theoretical and Applied Finance 3.04 (2000), pp. 675–702.
[27] B. Mandelbrot. "The Variation of Certain Speculative Prices". In: Journal of Business 36.4 (1963).
[28] Ulrich A. Müller, Michel M. Dacorogna, Rakhal D. Dave, Olivier V. Pictet, Richard B. Olsen, and J. Robert Ward. "Fractals and intrinsic time: A challenge to econometricians". In: Presentation at the XXXIXth International AEA Conference on Real Time Econometrics, 14–15 Oct 1993, Luxembourg (1993).
[29] Ulrich A. Müller, Michel M. Dacorogna, Richard B. Olsen, Olivier V. Pictet, Matthias Schwarz, and Claude Morgenegg. "Statistical study of foreign exchange rates, empirical evidence of a price change scaling law, and intraday analysis". In: Journal of Banking & Finance 14.6 (1990), pp. 1189–1208.
[30] Mark E.J. Newman. "The structure and function of complex networks". In: SIAM Review 45.2 (2003), pp. 167–256.
[31] M.E.J. Newman. "Power Laws, Pareto Distributions and Zipf's Law". In: Contemporary Physics 46.5 (2005), pp. 323–351.
[32] Vilfredo Pareto. Cours d'Économie Politique. 1897.
[33] H. D. Pfister, J. B. Soriaga, and P. H. Siegel. "On the Achievable Information Rates of Finite State ISI Channels". In: Proc. IEEE Globecom. Ed. by David Kurlander, Marc Brown, and Ramana Rao. ACM Press, Nov. 2001, pp. 41–50.
[34] Tom Roseen. Are Quant Funds Worth Another Look? 2016.
[35] The Alpha Engine: Designing an Automated Trading Algorithm, Code. https://github.com/AntonVonGolub/Code/blob/master/code.java. Accessed: 2017-01-04. 2017.
[36] Johannes Voit. The Statistical Mechanics of Financial Markets. 3rd edition. Springer, Berlin, 2005.
[37] G.B. West, J.H. Brown, and B.J. Enquist. "A General Model for the Origin of Allometric Scaling Laws in Biology". In: Science 276.5309 (1997), p. 122.
[38] Stephen Wolfram. A New Kind of Science. Wolfram Media, Champaign, 2002.
[39] World Bank. World Development Indicators database. 2015.
[40] George Kingsley Zipf. Human Behavior and the Principle of Least Effort. Addison-Wesley, Reading, MA, 1949.
