In this post we’ll explore the design, construction and validation of a machine learning based investment model.
A proper articulation of the modeling task has a major impact on the success of the strategy. The objective is to define precisely what task the learning algorithm must perform.
The investment problem can be broken down in several ways, ranging from single-stock decisions to portfolio construction or trade execution. The single-stock problem can be framed as deciding whether a stock should be bought, held or sold at each point in time. In this discussion, monthly periods are considered. Under a naive design, the question becomes whether a stock will outperform the benchmark portfolio over the upcoming period.
This approach translates into a binary classification problem on a time series. The labels (or target variable) that provide the examples the model learns from are therefore defined as follows.
For stock i at period T: the label equals 1 if stock i outperforms the benchmark over period T+1, and 0 otherwise.
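This labeling rule can be sketched in a few lines of pandas. The tickers, returns and equal-weighted benchmark below are hypothetical placeholders, not data from the post:

```python
import pandas as pd

# Hypothetical monthly total returns: rows are periods, columns are stocks.
returns = pd.DataFrame(
    {"AAA": [0.02, -0.01, 0.03], "BBB": [0.00, 0.04, -0.02]},
    index=pd.period_range("2020-01", periods=3, freq="M"),
)
benchmark = returns.mean(axis=1)  # equal-weighted benchmark as a stand-in

# Label = 1 when stock i beats the benchmark over period T+1.
# Shifting by -1 aligns each row with the *next* period's outcome.
future_excess = returns.sub(benchmark, axis=0).shift(-1)
labels = (future_excess > 0).astype(int).where(future_excess.notna())
```

The last period has no label because its outcome lies outside the sample, which is exactly what `shift(-1)` encodes.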
Despite the simple formulation - a monthly binary classification problem - the task at hand is far from trivial, as the signal is hard to capture.
The figure below illustrates the difficulty of the task by showing how under- and over-performing stocks are distributed across two features. Notice that there is no clear pattern relating stock performance to the available information.
Performing such ad-hoc visual exploration is also a useful check on the data preparation work described in a previous post.
Once the features, labels and task have been defined, the next step is to build and train the model.
For binary classification, many approaches have a proven track record, ranging from logistic regression to tree-based ensembles and neural networks.
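As a minimal sketch of this step, the snippet below fits a gradient-boosted classifier on synthetic features standing in for the real (stock, month) factor matrix; the data, split proportions and model choice are all assumptions for illustration, not the post's actual setup:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical feature matrix: one row per (stock, month) observation;
# y is the outperform/underperform label defined earlier.
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)

# Chronological split (shuffle=False): never train on data that
# follows the test period, to avoid look-ahead bias.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, shuffle=False, test_size=0.25
)

clf = GradientBoostingClassifier(n_estimators=200, max_depth=3, random_state=0)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
```

The chronological split is the one non-negotiable detail for time-series problems: shuffling rows across time would leak future information into training.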
Machine learning algorithms expose a number of hyper-parameters that must be calibrated for the learning process to succeed. Testing a large number of configurations is needed, which can involve costly computation time. A random search is a simple yet effective trick to shorten the process, although less efficient than fancier Bayesian search.
During model fitting, it is useful to track the desired metric on out-of-sample data to identify the point where the algorithm starts to overfit, and hence an appropriate number of iterations.
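For boosted models this curve can be obtained directly from `staged_predict`, which replays the prediction after each boosting iteration; the data and split sizes here are again hypothetical:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 4))
y = (X[:, 0] + rng.normal(size=400) > 0).astype(int)
X_train, X_val = X[:300], X[300:]
y_train, y_val = y[:300], y[300:]

clf = GradientBoostingClassifier(n_estimators=500, random_state=0)
clf.fit(X_train, y_train)

# Out-of-sample accuracy after each boosting iteration: the curve
# typically rises, flattens, then decays once overfitting sets in.
val_curve = [accuracy_score(y_val, p) for p in clf.staged_predict(X_val)]
best_n = int(np.argmax(val_curve)) + 1  # iteration count to retain
```

Retraining with `n_estimators=best_n` (or using scikit-learn's built-in `n_iter_no_change` early stopping) then gives the final model.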
Once a best model is obtained, a key remaining task is assessing the performance it can be expected to deliver in real life.
The above chart provides intuitive support for the quality of the model, but further tools can strengthen the assessment. Simulating the model's accuracy against a random-selection counterpart indicates that we can be fairly confident the model beats chance.
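One way to run such a simulation: draw many random pickers making coin-flip calls over the same number of predictions, and measure how often chance alone matches the model's hit rate. The 340-out-of-600 tally below is a hypothetical out-of-sample result, not a figure from the post:

```python
import numpy as np

rng = np.random.default_rng(3)
model_accuracy = 340 / 600  # hypothetical out-of-sample hit rate

# 10,000 random pickers each making 600 fifty-fifty calls; the share
# that reaches the model's accuracy approximates a one-sided p-value.
random_accuracies = rng.binomial(n=600, p=0.5, size=10_000) / 600
p_value = (random_accuracies >= model_accuracy).mean()
```

An exact binomial test would give the same answer analytically, but the Monte Carlo version generalizes to accuracy measures with no closed form.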
The ultimate test for the model lies in its ability to generate an expected return greater than that of the benchmark, while ideally limiting the risk of underperformance inherent in the uncertainty introduced by the active strategy.
It’s important to understand the impacts of adopting an active investment strategy. By exercising a subjective selection of stocks, we expose the portfolio to deviations from the benchmark.
The figures below highlight the consequences of selecting a random subset of stocks from the universe. A random strategy has the same expected return as the passive index approach, but introduces significant volatility around that reference path.
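This dispersion effect is easy to reproduce numerically. The universe size, horizon and return distribution below are invented parameters for a sketch, not the post's universe:

```python
import numpy as np

rng = np.random.default_rng(4)
n_stocks, n_months = 100, 62
# Hypothetical monthly returns: ~0.7% mean, 6% volatility per stock.
returns = rng.normal(loc=0.007, scale=0.06, size=(n_months, n_stocks))
index_path = (1 + returns.mean(axis=1)).cumprod()

# Many random 20-stock portfolios: same expected return as the index,
# but each realized path wanders around the benchmark.
n_sims = 1000
final_values = np.empty(n_sims)
for s in range(n_sims):
    picks = rng.choice(n_stocks, size=20, replace=False)
    final_values[s] = (1 + returns[:, picks].mean(axis=1)).cumprod()[-1]
dispersion = final_values.std()
```

The spread of `final_values` around the index's final value is the cost of concentration: an active manager must generate enough edge to be compensated for bearing it.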
An even more stringent test is to not rely on the single historical path, but instead to run a simulation based on the full historical volatility.
We can finally consider some formal metrics to complete the model validation step. The statistics below are derived from the 62-month historical period obtained from our evolutive learning test.
| periods | win ratio | tracking error | up capture | down capture | TTM drawdown | beta | vol spread | annual return | Sharpe | alpha |
|---|---|---|---|---|---|---|---|---|---|---|
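Several of these statistics reduce to one-liners on the monthly return series. The returns below are simulated stand-ins, and the annualization conventions (monthly data, factor of 12) are assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical monthly returns over the 62-month test window.
strategy = rng.normal(0.009, 0.04, size=62)
benchmark = rng.normal(0.007, 0.04, size=62)

active = strategy - benchmark
win_ratio = (active > 0).mean()                    # share of winning months
tracking_error = active.std(ddof=1) * np.sqrt(12)  # annualized active vol
annual_return = (1 + strategy).prod() ** (12 / 62) - 1
sharpe = strategy.mean() / strategy.std(ddof=1) * np.sqrt(12)
```

Capture ratios, beta and alpha follow the same pattern, conditioning on up- or down-benchmark months or regressing strategy returns on benchmark returns.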
This post covered the high-level steps required to go from a problem formulation to a final model. The methodology remains generic and applicable to most modeling problems, though the highly volatile investment environment makes model diagnosis more challenging.