Price Action Lab Blog

A Few Facts About Machine Designed Trading Systems

When trading systems generated by computers are employed in actual trading, chances are their performance will be negative, or even disastrous, unless the process that generates these systems and its output are properly evaluated. This post provides an account of my experience in this area and an overview of the issues involved, with links to detailed posts.

The idea of using computers to generate trading systems goes back to the late 1980s, when academic researchers used genetic algorithms to identify chart patterns for the purpose of evaluating the efficient market hypothesis and the predictive capacity of technical analysis. Traders working in this area for the most part elected not to disclose their findings and applications. I started developing trading system machine design algorithms in 1994 using several different approaches. In 1999 I disclosed part of my findings in my book “Short-term Trading With Price Patterns”. Some of the price patterns discovered by an algorithm I developed continued to perform well for many years, as reported in another post.

Trading System Synthesis (TSS) and Forward Synthesis of Trading Systems™ (FSTS) are concepts I have presented in my books and articles since 1999. In an article in Active Trader Magazine (“Price Pattern Autopilot”, Active Trader Magazine, September 2002, p. 70) I presented a software program that could identify repeatable price pattern formations in historical data that fulfilled user-defined performance criteria and risk/reward objectives, and that also generated exact code for Tradestation, as well as pseudo code and formula code for other platforms.

What works and what doesn’t

My most important finding during those early times was that markets changed frequently enough to render all lagging indicators useless for the purpose of developing consistently profitable trading systems. This is to be expected: most traders use these indicators, and due to the zero-sum nature of most markets the various long/short signals derived from them cancel each other out, leaving a net negative result once trading friction is accounted for. Another important finding was that neural networks and genetic algorithms are, by their nature, curve-fitting methods. The results obtained from machine design algorithms based on such data-mining methods are always fitted to the price series. This is not bad in principle, but it is very hard to differentiate the very few potentially good systems from a very large number of random but curve-fitted ones. The data-mining bias introduced by the repeated use of an in-sample to search for profitable combinations of indicators and exit signals guarantees that eventually some random system(s) will pass the out-of-sample test(s) simply because market conditions happen to be just right for that. When market conditions change, the performance of these systems deteriorates fast. Essentially, those who choose the best performer(s) out of a large number of candidates resulting from the repeated use of an in-sample are often fooled by randomness if no effort is made to minimize the bias.
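The data-mining bias described above is easy to demonstrate with a toy simulation. The sketch below (a hypothetical illustration, not any specific system design method) generates thousands of purely random zero-edge "systems", selects the best in-sample performer, and then checks it out-of-sample:

```python
import random

random.seed(7)

# Generate many purely random "systems" (daily P&L with zero edge),
# pick the best in-sample performer, then check it out-of-sample.
N_SYSTEMS = 2000
IN_SAMPLE, OUT_SAMPLE = 500, 250

def random_system():
    """A zero-edge system: each day wins or loses 1 unit with p = 0.5."""
    return [random.choice((-1.0, 1.0)) for _ in range(IN_SAMPLE + OUT_SAMPLE)]

systems = [random_system() for _ in range(N_SYSTEMS)]

# Select the best system by in-sample total P&L -- this selection
# step is exactly where the data-mining bias enters.
best = max(systems, key=lambda s: sum(s[:IN_SAMPLE]))

in_sample_pnl = sum(best[:IN_SAMPLE])
out_sample_pnl = sum(best[IN_SAMPLE:])

print(f"best in-sample P&L:    {in_sample_pnl:+.0f}")
print(f"its out-of-sample P&L: {out_sample_pnl:+.0f}")
# The winner's in-sample P&L looks impressive, but out-of-sample it is
# indistinguishable from a coin flip, because the edge was never real.
```

The selected system's strong in-sample result is purely an artifact of choosing the best of many random candidates; its out-of-sample expectation remains zero.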

Systems designed for intraday trading often cannot effectively deal with noise and eventually fail. More importantly, intraday traders are doomed in many markets because trading friction erodes gains. This is especially true for forex and futures trading. Anyone trying to develop intraday trading systems is probably wasting their time, because conditions in those markets change along with technological progress. For example, a major shift in market conditions took place with the introduction of high frequency trading. There is no way for anyone to compete in intraday timeframes with a massive network of automated high frequency robots generating high levels of noise, and past performance based on historical data is misleading. Essentially, high frequency robots render backtesting results from intraday timeframes useless. The only chance traders have for consistent profits above the risk-free interest rate is trading longer timeframes with daily bars, where the impact of high frequency trading is at a minimum.

Win rate is the most important metric to maximize

In the late 1990s, after having used my algorithms to participate in two year-long futures trading contests with real money (finishing third and fourth), I decided that the most promising method of short-term trading must rely on repeatable pure price patterns: pure in the sense that, other than the exits, there were no parameters involved in their logic and no subjectivity about their formation. In addition, my findings indicated that the most important metrics to maximize were the number of trades, the win rate and the average win to average loss ratio. Recall that the profit factor is completely and uniquely determined by the win rate and the average win to average loss ratio, so using it too was redundant. All other metrics just increased the dimensionality of the data-mining and did not contribute to discovering robust systems. The reason a high win rate is important is that the risk of ruin decreases as the win rate increases. This is also why some market wizards in the past talked about the need for “high probability setups”. It is possible to have positive performance with a low win rate and a high average win to average loss ratio, but the risk of ruin is a lot higher than that of a system with a high win rate and the same average win to average loss ratio. This should be self-evident and it is also easy to show mathematically. However, many trading system developers elect to ignore this fact because it is hard to find systems with a high win rate and statistically significant performance. I have also shown mathematically that one major cause of failure of trend-following systems is their low win rate, which results in an inability to perform well during periods of reduced trend magnitudes.
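Both claims in this section can be checked numerically. The profit factor follows directly from the win rate w and the average win to average loss ratio R, as PF = wR/(1 − w), and a simple Monte Carlo run (a hedged sketch with illustrative numbers, not parameters from any actual system) shows that for the same R a lower win rate carries a much higher risk of ruin:

```python
import random

random.seed(11)

# Two systems share the same average-win to average-loss ratio R = 2
# but differ in win rate. Both are profitable in expectation; the
# low-win-rate one is ruined far more often.
R = 2.0          # average win / average loss (illustrative value)
CAPITAL = 10.0   # starting capital, in units of one average loss
TRADES = 500     # trades per simulated account
RUNS = 2000      # Monte Carlo runs

def risk_of_ruin(win_rate):
    """Fraction of simulated accounts whose equity reaches zero."""
    ruined = 0
    for _ in range(RUNS):
        equity = CAPITAL
        for _ in range(TRADES):
            equity += R if random.random() < win_rate else -1.0
            if equity <= 0:
                ruined += 1
                break
    return ruined / RUNS

for w in (0.40, 0.60):
    pf = w * R / (1 - w)  # profit factor is fully determined by w and R
    print(f"win rate {w:.0%}: profit factor {pf:.2f}, "
          f"risk of ruin ~ {risk_of_ruin(w):.1%}")
```

Note that the 40% win rate system still has a positive expectancy per trade (0.4 × 2 − 0.6 × 1 = +0.2 units), yet it goes bust in a meaningful fraction of runs, while the 60% system almost never does.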

No guarantee of future performance

Finally, even if data-mining bias is minimized and a number of statistical tests are used to make sure that the performance of a data-mined system is better than random with high significance, systems may still fail for an extended period of time because of changing market conditions. Therefore, diversification, in the sense of trading several markets but also several systems, is necessary to dampen the effects of bad performers.
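The dampening effect of diversification can be sketched with a quick simulation (hypothetical return parameters, assuming roughly uncorrelated systems): equally weighting several systems shrinks the volatility of the combined equity stream by roughly the square root of the number of systems.

```python
import random
import statistics

random.seed(3)

# Five uncorrelated systems, each with a small positive drift and
# daily noise. The equal-weight combination has much lower daily
# volatility than any single system, dampening bad stretches.
N_SYSTEMS = 5
DAYS = 1000

def daily_returns():
    """Simulated daily returns: slight edge plus independent noise."""
    return [random.gauss(0.0005, 0.01) for _ in range(DAYS)]

systems = [daily_returns() for _ in range(N_SYSTEMS)]
portfolio = [sum(day) / N_SYSTEMS for day in zip(*systems)]

single_vol = statistics.pstdev(systems[0])
port_vol = statistics.pstdev(portfolio)
print(f"single-system daily vol: {single_vol:.4f}")
print(f"5-system portfolio vol:  {port_vol:.4f}")
```

Real trading systems are rarely fully uncorrelated, so the reduction in practice is smaller, but the direction of the effect is the same.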
