How To Reason About A Messy Future
Introduction
The first time I realized we were heading towards an inflection point was when I heard the music slowing down at my previous role, even as everyone around me pretended nothing would change.
I was managing a team of close to 20 people at a hedge fund, doing the thing I had been doing for years. For all intents and purposes, I was likely going to do even greater things there. And yet, I moved from a position people would kill for to building a startup from ground zero with a skeleton crew - a move few understood and most saw as crazy. With the recent news of massive layoffs, of people quitting explicitly to build startups, or quietly quitting and burning tokens at night doing the same, my actions seem a lot less insane now.
I’ve had a few people ask me where I think this all goes. This article is the answer to that. The honest truth is that I’m not really sure about the magnitude of these changes, but if quant finance has taught me anything, it’s that being directionally correct is often enough.
Writing On The Wall
It was ChatGPT o1 that did it for me. Up until that point, I had referred to these systems only as “LLMs” and not “AI”; I was not yet convinced that any semblance of real intelligence would emerge from them.
But o1 was the first time these LLMs could credibly produce code from well-structured prompts. It was still messy. They still suffered the occasional bout of hallucination and confusion. But here was what mattered: they could actually produce useful code.
The line of reasoning I took was this: once AI got to the point where it could produce useful code, it would recursively write improvements to its own logic and accelerate development at a scale we would not be able to comprehend. Whenever I shared this, people would counter-argue that the code agents wrote was still buggy and not “production-ready.” That misses the point: even humans write buggy code.
We don’t need flawless code to completely stop writing code. We stop writing code the instant we realize that agents produce fewer bugs than us, at a pace that far exceeds us. The bar for fully relegating the burden of coding to agents was so low that once I saw o1 up close, I knew the future was going to change dramatically.
Quant Finance And The Moat Of Knowledge
I thought AI would eventually eat away the vast majority of quant finance, although it was going to take a while, since there was very little publicly available institutional code for LLMs to train on. I imagined software engineering as a pyramid: at the base was basic code monkey work, above that was your senior developer with some architectural thinking, and above that were specialized developers: data scientists, quant developers, and so on. The more your profession required specialized knowledge, the safer you would be.
I thought we would wipe out the entire tranche of code monkeys within 2 years. Then senior developers would start to go. And layer by layer, specialized knowledge would be incorporated into the LLMs, and those roles too would be wiped out.
It quickly became obvious that the frontier model providers would eventually hire specialized knowledge workers to contribute industry know-how to the frontier models. Specialized knowledge seemed like it would be a moat for the next couple of years, but also end up being eaten away gradually.
The Remaining Moats
There were a few categories of businesses that I thought would be safe from being trivially disrupted within the next 5 years.
The first is proprietary data. Businesses that produced a lot of proprietary data as exhaust would be hard to disrupt. Large podshops like Millennium come to mind: they can collect analyst readings, detailed analysis, recommendations, and actual price changes, and use this data to fine-tune frontier models into something not easily replicated. Any business producing proprietary data not trivially obtainable by the frontier models would have a longer lease on life.
The second is regulatory friction. Businesses where other humans are a bottleneck seemed much harder to disrupt. Being able to trade in many TradFi markets meant opening broker accounts, getting licenses, signing contracts around the globe. It’s easy to trade crypto, but much harder to trade iron ore in China as a non-Chinese firm. If you need a human to rubber-stamp your progress, the speed of that industry is always going to be bottlenecked by the cost and speed of that approval.
The third is authority as a service. It’s not too hard now to get an agent to draft a legal opinion given a comprehensive study of the matter and the laws surrounding it. And yet we’re still going to pay tens of thousands of dollars for one drafted by a lawyer, because an AI’s legal opinion is worth nothing at this point in time. Smart contract audits are another example. We’re probably already at a level where agents can review smart contracts as well as or better than the top decile of humans, yet most people still buy the stamp of authority from a branded firm. The opinion isn’t what you’re paying for. The authority behind it is.
The fourth is physical intelligence lag. Hardware moves much more slowly than software, and breaking hardware is a lot harder to fix. Physical businesses interacting with the real world are a lot less likely to be disrupted soon. That said, once hardware catches up, the same pyramid logic applies: lower-level jobs go first, then the more specialized ones.
These moats are real, but none of them are permanent. The honest read is that they buy time, not safety.
Reasoning About A Messy Future
When the future is genuinely noisy, when the rate of change is fast enough that most analogies break down, people tend to do one of two things. They either wait for certainty before acting, or they pattern-match to the past (“this is like the internet boom”) and act on the wrong model. Both are mistakes.
It is worth reasoning from first principles under incomplete information. You don’t need to know exactly how something plays out. You just need to be directionally correct, and you need to structure your bets so that being early and wrong is survivable, while being early and right is disproportionately rewarding.
Asymmetry is the whole game when the future is uncertain.
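The asymmetry argument can be made concrete with a toy expected-value calculation (illustrative numbers of my own choosing, not the author's): a bet with a capped downside and an uncapped upside can be worth taking even when it usually fails.

```python
# Toy sketch: expected value of an asymmetric bet.
# All numbers here are hypothetical, for illustration only.

def expected_value(p_win: float, payoff: float, loss: float) -> float:
    """EV of a bet paying `payoff` with probability p_win, else costing `loss`."""
    return p_win * payoff - (1.0 - p_win) * loss

# A 15% chance of a 20x outcome vs. losing 1 unit the other 85% of the time:
# the bet fails most of the time, yet the expected value is strongly positive.
ev = expected_value(0.15, 20.0, 1.0)
print(round(ev, 2))  # 2.15
```

The point of the sketch is only that survivable downside plus outsized upside dominates the arithmetic; the win probability does not need to be high.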
The practical version of this is: ask what has to be true for a given outcome to happen, and then ask how legible the inputs to that outcome already are. The inflection we’re living through was not unforeseeable; the inputs were visible. Code that could write code. Models that improved recursively. Institutional knowledge that could be bought, not just grown. Anyone willing to stare at those inputs clearly could see roughly where they pointed, even without knowing the exact path.
You can recursively reason about this and extrapolate further. I don’t even think we’ve yet caught a glimpse of what it will be like when agents can train themselves, when agents can replicate, when agents become truly autonomous. An agent that can increase its intelligence by 0.1% through a series of actions may not seem significant, but any gain that is not 0 increases the probability that the next increment is greater, and so on. There are vast power laws at play here, and it is worth thinking about what a future under those power laws looks like.
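The compounding claim is easy to check with arithmetic. A minimal sketch (my own toy model, not the author's; it assumes a fixed gain per cycle, which understates the case since the essay argues each gain raises the odds of larger ones):

```python
# Toy model: a fixed 0.1% self-improvement per cycle, compounded.
# Hypothetical numbers, for illustration only.

def compound(gain_per_cycle: float, cycles: int) -> float:
    """Capability multiple after `cycles` rounds of self-improvement."""
    capability = 1.0
    for _ in range(cycles):
        capability *= 1.0 + gain_per_cycle
    return capability

# One cycle of 0.1% looks negligible...
print(round(compound(0.001, 1), 4))     # 1.001
# ...but a thousand cycles nearly triples capability.
print(round(compound(0.001, 1000), 2))  # 2.72
```

Even with no acceleration at all, a "not significant" 0.1% per cycle approaches an e-fold gain every thousand cycles; letting the gain itself grow is what turns this into a power law.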
By the time the signal is obvious, the trade is crowded. In markets, you pay for early conviction with uncertainty. In careers and startups, the currency is the same.
So the question isn’t really “what’s going to happen?” The question is: “what do I already know, what direction does it point, and what’s the cost of acting on it now versus waiting?”
One thing I often see people miss is that action creates information. Action does not happen in a vacuum. When you act on the world, the world replies with information. That information powers iteration. Iteration begets more informed action. That is the nature of progress.
Standing still under incomplete information is decay.
Moving towards action is discovery.
Thinking About Next Steps
I knew I had a couple of years if I just wanted to milk the status quo. But a large part of me felt like if I wanted to do something, I would have to start sooner rather than later. I had always wanted to build something truly mine, and it seemed like the window to do that was quickly closing.
To be clear, I know that the largest hedge funds in the world would be fine. They have proprietary data that makes them very difficult to replace. TradFi markets are also bottlenecked by human signatures, on both the regulatory front and, at times, even the trading front. What I do think, however, is that those largest funds will use AI to replace most of their workforce, even terminal career seats like Portfolio Managers. Not immediately, but eventually, surely.
What I felt was that I had about 4-5 years before the foundation model providers hired enough specialized talent to make being an upstart trading firm nearly impossible. In certain markets, like US equities, it already feels that way. I can’t imagine how much more efficient it’s going to look in just a few more years.
There was clearly not going to be space for “second best” pretty soon. I could keep working for the “best”, but it seemed more aligned with my goals to strike now, in a market I had a genuine edge in, with knowledge that was not going to be trivially replicated. So, having that dawg in me, I called it quits and went all in on what eventually became @openforage.
Inflection Point
Today, it’s really starting to feel like the window is visibly closing. The pace of change has stopped feeling gradual, and most people following the space are beginning to realize that what used to take months of improvement now takes weeks.
In my opinion, jobs will not vanish entirely within the next couple of years. There will always be a need for humans. Humans are social creatures; as long as humans are in charge, we will want other humans around. And humans don’t trust AI yet, so stamps of authority still need to come from a human. I imagine AI CEOs in the next couple of years, but there will still likely be a human CEO having to “approve” and certify the AI CEO. This idea of human certification cascades down the pyramid: a human manager will manage and certify a bunch of agents working under them.
But the arithmetic of hires will change. If a CEO can prompt an agent more easily than they can prompt you, there’s no need to hire you. Shallow, code-monkey work will be very difficult to find going forward.
To be irreplaceable, you need to operate at a timescale far beyond current agent limitations - receiving instruction, managing agents, and working with them for weeks, months, or years. Long-term strategic thinking and policy planning is one of the strongest job moats for the foreseeable future. You also need to operate at a scope greater than current agents can handle. Agents have limited context. They know everything about anything, yet cannot trivially see how component A interacts with component B, how B interacts with component C, and how the effects cascade into component D. They lack scope.
If you can think far and wide, absorb information quickly, make decisions for the long term, and are likeable, you will hold down a job, at least for the foreseeable future.
If you do intend to be an employee, it’s worth taking stock of what your work is actually made of. Some tasks are deeply human defensible. Some will be replaced cheaply over the next couple of years. Do more of the former and less of the latter.
Working for a great firm in a deeply defensible position, one that sits behind real moats, may give you a career runway while the rest of the workforce gets eaten by the foundation models. You can still spend your tokens at night, rolling the dice, trying to build something meaningful.
But if you have a burning desire to contribute a unique verse to the world, think carefully about where your market of choice is heading. If your window to build something defensible is closing, you need to begin operating before the market fully prices in the competition that is coming.
Conclusion
The inputs that create inflection points are legible ahead of time, if you’re willing to look. Most people don’t look, or they look and don’t act, or they wait until the signal is so loud that the opportunity is already priced in.
Don’t ignore the shifting sands. Don’t stay somewhere that’s losing ground while telling yourself you’ll make the leap when the timing is better. There’s no better timing, and the timing rarely announces itself. When it becomes obvious to everyone, the window has normally already closed.
I looked, I made a bet, and now I’m living inside the outcome of that bet — for better or worse.

