There are software programs for traders that allow combining technical indicators with exit conditions in order to design systems that fulfill desired performance criteria and risk/reward objectives. In general, due to data-mining bias, it is very difficult to differentiate random systems from those that may possess some intelligence in pairing their trades with market returns.
Suppose that you have such a program and you want to use it to develop a system for SPY. After a number of iterations, manual or automatic, you get a relatively nice equity curve in both the in-sample and the out-of-sample, with a total of about 1000 trades (horizontal axis):
Before obtaining the above equity curve, many combinations of entry and exit methods were tried, usually hundreds or thousands, and in some cases billions or even trillions. A large number of equity curves were generated that were not acceptable. You may think that this is a good equity curve, but you also suspect the system may be random due to data-mining bias arising from multiple comparisons. You are absolutely correct. But before going into these issues in more detail, let us consider the case of a trader who thinks that increasing the number of trades minimizes randomness. This is a possible result:
In this example the number of trades was increased by two orders of magnitude to about 100,000, and both the in-sample and out-of-sample performance look acceptable. Does this mean that in this case the system has a lower probability of being random?
The answer is no. Both of the above equity curves were actually generated by tossing a coin, with a payout of +1 for heads and -1 for tails. The second equity curve was generated after only a few simulations. Both curves are random. You can try the simulation yourself and see how successive random runs can, at some point, generate nice-looking equity curves by luck alone.
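The coin-toss experiment can be sketched in a few lines of Python. This is a minimal illustration, not the program used for the charts above; the number of runs, the trade count, and the "pick the best final P&L" selection rule are arbitrary choices made here to show how selection alone produces attractive curves:

```python
import random

def coin_toss_equity(num_trades, rng):
    """Cumulative P&L of a fair coin toss: +1 for heads, -1 for tails."""
    equity = [0]
    for _ in range(num_trades):
        equity.append(equity[-1] + (1 if rng.random() < 0.5 else -1))
    return equity

def best_of_many(num_runs, num_trades, seed=42):
    """Generate many random equity curves and keep the best-looking one,
    judged naively here by final profit. This keep-the-winner step is
    exactly the selection that creates data-mining bias."""
    rng = random.Random(seed)
    best = None
    for _ in range(num_runs):
        curve = coin_toss_equity(num_trades, rng)
        if best is None or curve[-1] > best[-1]:
            best = curve
    return best

best = best_of_many(num_runs=200, num_trades=1000)
print(f"Best final P&L out of 200 purely random runs: {best[-1]}")
```

Running this, the selected curve almost always ends well in the money and often trends up smoothly, even though every run is a fair coin with zero expectation. Increasing `num_trades` to 100,000 does not change the conclusion; it only makes the surviving curve look more convincing.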
The correct lesson here is that random processes can generate nice-looking equity curves. But how can we know whether a nice-looking equity curve, selected from a group of less attractive curves, actually represents a random process and the underlying algorithm has no intelligence? This inverse question is much more difficult to answer.
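The difficulty comes from the multiple-comparisons effect. A back-of-the-envelope sketch: if a single random run has probability p of producing an acceptable-looking curve, then trying N independent combinations yields at least one such curve with probability 1 - (1 - p)^N. The value of p below is a hypothetical figure chosen for illustration, not a number from the text:

```python
# Probability that at least one of N independent random runs
# looks "acceptable", given a per-run probability p of that event.
def prob_at_least_one(p, n):
    return 1 - (1 - p) ** n

# p = 0.001 is a hypothetical per-run probability for illustration.
for n in (100, 1_000, 10_000):
    print(f"N = {n:>6}: P(at least one nice curve) = "
          f"{prob_at_least_one(0.001, n):.3f}")
```

Even with a per-run probability of only 0.1%, a search over ten thousand combinations is nearly guaranteed to surface an acceptable curve, which is why the number of combinations tried matters as much as the curve finally selected.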
The coin-toss experiment illustrates how, when one uses a process that generates many equity curves, some acceptable and some unacceptable, one may get fooled by randomness. Minimizing data-mining, data-snooping, and selection bias is a complex and involved process that for the most part falls outside the capabilities of the average developer who lacks an in-depth understanding of these issues. In fact, the methods for analyzing systems for the presence of bias are in many cases more important than the methods used to generate the systems in the first place, and they are considered an integral part of a trading edge. Below is a list of criteria one could use to minimize the possibility of a random system due to over-fitting:
© 2015 Michael Harris. All Rights Reserved. We grant a revocable permission to create a hyperlink to this blog subject to certain terms and conditions. Any unauthorized copy, reproduction, distribution, publication, display, modification, or transmission of any part of this blog is strictly prohibited without prior written permission.