Value and institutional investors, market makers, specialists, and other savvy professionals have profited handsomely over the last three decades from the losses of traders using naive technical analysis, such as chart patterns and simple indicators. The next big trap is machine learning, and the professionals are pushing it hard for obvious reasons.
Anyone with skin in the game in the 1990s remembers how various brokerages even offered free technical analysis books detailing many random formations and, later, scanners to identify them. They had to offer retail traders a dream to keep them in the game and profit from them. Pit traders and specialists would appear on financial TV talking about chart formations or moving averages when, in reality, all that mattered to their profitability was order flow from naive participants.
Nowadays the new buzzword is “machine learning.” The dream goes like this: feed the machine some attributes of known factors and it will make you rich. Is it that simple? Apparently not; in fact, it is even worse than the random technical analysis of the last century that caused huge losses to naive traders. Here is why:
Machine learning is a GIGO (garbage-in, garbage-out) process when the features are functions of time and the price series are non-stationary. In image recognition, for example, features may change location, but that is essentially a geometry problem. In the case of non-stationary price series, features may lose their significance and even affect the importance of other features. The set of relevant features changes with each sample, and this makes the problem very complicated, or even unsolvable.
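The non-stationarity problem can be illustrated with a minimal, hypothetical simulation (the data and the 0.5 coefficient are invented for the example, not taken from any real market): a feature that predicts returns positively in one regime flips sign in the next, so a model fitted on the full history learns a relationship that no longer holds.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration of non-stationarity: a stand-in "feature"
# whose predictive sign flips between two regimes.
n = 2000
feature = rng.normal(size=n)           # e.g., some indicator value
noise = rng.normal(scale=1.0, size=n)

# Regime 1 (first half): the feature predicts the return positively.
# Regime 2 (second half): the same feature predicts it negatively.
returns = np.where(np.arange(n) < n // 2,
                   0.5 * feature + noise,
                   -0.5 * feature + noise)

corr_first = np.corrcoef(feature[: n // 2], returns[: n // 2])[0, 1]
corr_second = np.corrcoef(feature[n // 2:], returns[n // 2:])[0, 1]
print(f"regime 1 corr: {corr_first:+.2f}, regime 2 corr: {corr_second:+.2f}")
```

Any model trained across both regimes averages the two relationships and ends up with a feature importance that describes neither; in real price series the regime boundaries are not even known in advance.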
There are ways to deal with this problem that constitute an edge, and I hope readers will understand why I do not discuss them here. One thing I can say is that the key lies in stable feature construction, a difficult problem I started working on in the late 1990s. It may allow developing certain strategies that perform relatively well against benchmarks. These meta-features also allow developing interesting indicators. One example is the DLPAL engine, which has been successful in timing short-term market reversals.
Without the edge of constructing features that are relevant and stable, machine learning will be the next trap that will transfer large wealth from the accounts of naive traders to those of market professionals.
- Sharing ideas on platforms is not an edge by definition
- Copying analysis from academic papers is not an edge
- Trying this or that is not an edge (it’s data mining)
By the way, if the methods of technical traders of the last century suffered from randomness, the new machine learning methods suffer from data-mining bias, which leads back to randomness, a vicious circle.
Traders who do not understand probability theory can be seen on platforms trying numerous combinations of machine learning algorithms and features until they get something that “looks good.” With probability 95% or higher, that something is a random result due to data-snooping bias, a major component of data-mining bias.
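Data-snooping bias is easy to demonstrate with a hypothetical sketch (all numbers here are invented for illustration): backtest many purely random long/short "strategies" on a pure-noise return series and keep the best one. It will look impressive in-sample even though, by construction, no strategy has any edge.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical illustration of data-snooping bias: 1000 random
# strategies tested on one year of pure-noise daily returns.
n_days, n_strategies = 252, 1000
returns = rng.normal(scale=0.01, size=n_days)               # zero true edge
signals = rng.choice([-1, 1], size=(n_strategies, n_days))  # random positions

strategy_returns = signals * returns            # each row: one "strategy"
sharpes = (strategy_returns.mean(axis=1)
           / strategy_returns.std(axis=1)) * np.sqrt(252)

best = sharpes.max()
print(f"best in-sample Sharpe among {n_strategies} random strategies: {best:.2f}")
```

The average Sharpe ratio across all random strategies is close to zero, as it should be, yet the single best one typically shows an annualized Sharpe well above 2: exactly the kind of result that "looks good" and gets selected, then fails out-of-sample.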
If you have any questions or comments, happy to connect on Twitter: @mikeharrisNY
Technical and quantitative analysis of Dow-30 stocks and 30 popular ETFs is included in our Weekly Premium Report. Market signals for longer-term traders are offered by our premium Market Signals service.