Large language models, such as ChatGPT, are threatening to disrupt most areas of life and work, and financial trading is no exception. The potential for LLMs to understand markets, rather than merely recognize patterns, sets them apart from earlier generations of machine learning and artificial intelligence, which have failed to achieve notable trading success.
The basic problem is that financial prices are nearly all noise; they are very close to random walks. Lots of smart people and algorithms conspire to eliminate any signal that can be used for profit. It’s like trying to understand text that is deliberately written to be misleading. Traditional AI is more successful when signals are stronger relative to noise.
Before we get deeper into what sets modern LLMs apart, let me lay out why you should care, even if you have no interest in computerized financial trading. Trading is the foundation of finance, and even small changes in its mechanisms have profound effects on markets, which translate into major economic consequences.
High-frequency trading, introduced in the late 1990s, didn’t just link end-buyers and end-sellers more quickly and efficiently. It vastly increased trading volumes, lowered transaction costs, and knocked humans out of the equity-trading business, a transformation Michael Lewis explored in Flash Boys. It led to zero-commission brokerages and zero-fee index funds, eliminating the revenues that brokers and asset managers had relied upon since those businesses were created. It required a fundamental re-engineering of financial regulation. But HFT didn’t just restructure two major financial businesses, challenge regulators and cut costs to end-investors; it also gave us new phenomena like flash crashes.
In the last half-century, other trading innovations have had similarly broad effects. The introduction of publicly traded futures and options on financial instruments in 1973 created the modern global derivatives economy, which vastly expanded leverage outside the banking system and made that leverage difficult for regulators to monitor or control.
Program trading in the 1980s was blamed for the 1987 crash, the largest one-day stock market decline in history, and it has played a part in exaggerating every bubble and crash since. Mortgage-backed securities changed banking, Wall Street and home-buying. In the 21st century, we’ve seen dramatic effects from credit default swaps, collateralized debt obligations and exchange-traded funds.
LLMs are not wholly new; they combine components used in other AI and machine-learning applications, such as autoregression and neural networks. These components mimic aspects of what humans try to do, and they are embedded in many existing quantitative trading algorithms. The key breakthrough that may lead LLMs to succeed where earlier efforts failed was described in a seminal 2017 paper by Google researchers, “Attention Is All You Need.” (My Bloomberg Opinion colleague Parmy Olson looked at the team and why parent Alphabet Inc. failed initially to capitalize on their research here.)
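To make the “autoregression” piece concrete: in its simplest statistical form it just means predicting the next value of a series from its own recent past, the same next-step template LLMs apply to tokens of text. Here is a minimal sketch in Python with made-up random-walk data, purely illustrative and not anyone’s trading model:

    import numpy as np

    # Illustrative only: fit an AR(1) model, y[t] = a + b * y[t-1], by least squares
    # on a synthetic random-walk "price" series.
    rng = np.random.default_rng(1)
    y = np.cumsum(rng.normal(size=500))

    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])  # intercept and lagged value
    a, b = np.linalg.lstsq(X, y[1:], rcond=None)[0]     # estimated coefficients

    next_forecast = a + b * y[-1]                        # one-step-ahead prediction
    print(f"estimated persistence b = {b:.3f}, next-step forecast = {next_forecast:.2f}")

On a true random walk the estimated b comes out close to 1, and the forecast is barely better than “tomorrow looks like today,” which is exactly the noise problem described above.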
The scientist and science-fiction writer Isaac Asimov wrote, “The most exciting phrase to hear in science, the one that heralds new discoveries, is not ‘Eureka’ but ‘That's funny...’” Insight comes not from confirming or rejecting hypotheses, but from noticing things you ignored in the past.
Conventional science proceeds with specialists asking and answering questions that are known to be interesting given the state of prior knowledge. But imagine an alternative “Journal of That’s Funny” that listed puzzling observations from all fields, without filtering out the ones that seemed unimportant. People might notice two or three of these puzzles, combine them with something they had learned for themselves, and come up with dramatic cross-disciplinary discoveries.
While this might be a colossal waste of time for publish-or-perish academics, computers have nothing but time to correlate millions of “that’s funny” facts too unimportant individually to interest humans. The Google paper suggested that AI should spend less effort figuring out which funny facts were important, and more time correlating all of them. This attitude is familiar to fans of detective fiction, in which the hero puzzles over minor inconsistencies and irrelevancies that reveal the murderer only when assembled in sequence, while the unimaginative assistants and professionals insist on paying attention only to the facts known to be important.
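For readers who want a concrete picture, here is a minimal sketch of the scaled dot-product attention described in the 2017 paper, written in plain Python and NumPy with toy random inputs (illustrative only, not anyone’s production system). The point to notice is that every item in the sequence is scored against every other item; nothing is discarded in advance, and unimportant items simply end up with small weights.

    import numpy as np

    def attention(X, Wq, Wk, Wv):
        """Single-head scaled dot-product attention, per 'Attention Is All You Need'."""
        Q, K, V = X @ Wq, X @ Wk, X @ Wv         # queries, keys, values
        scores = Q @ K.T / np.sqrt(K.shape[-1])  # every item scored against every other
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)       # softmax: nothing filtered out, only reweighted
        return w @ V                             # each output blends information from all items

    # Toy example: 5 "observations" embedded in 8 dimensions (random, for illustration)
    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 8))
    Wq, Wk, Wv = (rng.normal(size=(8, 4)) for _ in range(3))
    print(attention(X, Wq, Wk, Wv).shape)        # (5, 4)

The contrast with older approaches is that nothing here decides up front which inputs matter; the model learns, through the projection matrices, how much weight each observation should get in each context.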
Many quantitative trading shops have been working with LLMs for a while. The basic algorithms are widely available, and LLM developers are easy to find. The expensive part is building and cleaning large datasets and representing the data properly for the models. Moreover, trading highly levered, cross-market portfolios with thousands of positions takes infrastructure and relationships you find mostly at statistical arbitrage, quant equity and systematic global macro shops.
Getting large investors to trust your complex computer systems is another hurdle. I suspect this means that the first movers will be existing large quantitative trading firms rather than startup AI shops — think Citadel, Renaissance Technologies and Jane Street Group, not two gals in a Silicon Valley garage.
If this approach leads to trading profits and is more widely adopted, I think it’s likely to change cross-market financial behavior rather than relative prices within asset classes. We already have good handles on how to value one stock versus another, one bond versus another, or one piece of real estate versus another. But there is little useful theory, and there are few reliable quantitative generalizations, about how stocks, bonds, real estate and other asset classes should be priced relative to each other.
A plausible near-future story is that LLM-flavored trading models will build large cross-asset-class portfolios similar to what global macro hedge funds do, but with more leverage, more positions, more active trading and no human to explain the thesis. An explainer module might be bolted on to generate plausible-sounding theses, but there’s little reason to believe those explanations will bear any relation to the actual reasons for the positions.
We can hope that the new price relations and correlations will better reflect economic reality, leading to more efficient allocation of capital and better real economic decisions. That’s an article of faith for many people, but not something with much empirical evidence one way or the other.
Whether or not that’s true, the restructuring of cross-market financial relations will disrupt many business models and regulatory regimes. I expect at least as much disruption as we got from HFT, and perhaps as much as we got from public trading of financial futures and options. And if I’m wrong, if LLMs and attention modules fail to gain much trading traction, there are plenty of new ideas in the AI pipeline to take their place.