The Woes Of A (Quant) Researcher
Introduction
It’s been a while since I was a researcher, but I try hard to remember the frustrations, insecurities, and concerns I had before I put on the big boy pants.
I am writing this for two audiences. First, for researchers who need to know they are not alone. Second, for PMs who want to understand what keeps their people up at night.
I want to surface the same disclaimer as the “PM” piece. Most of these problems have no clean solutions. They are tensions to be managed, not puzzles to be solved.
The Attribution Problem
Who gets credit when a signal works?
You built the feature. Someone else built the execution model. A third person tuned the portfolio optimizer. The signal makes money. Whose signal is it?
In theory, collaboration should be rewarded. In practice, comp is zero-sum. If you help someone else’s signal work, they get the credit. If you build the feature that enables their breakthrough, your contribution becomes invisible infrastructure. The person whose name is on the signal is the person who gets paid.
So you hoard. You work on your thing. You help others just enough to seem collegial, not enough to dilute your ownership. Sometimes, talented researchers refuse to share preprocessing code because it might help a colleague produce a competing signal.
Everyone knows collaboration would produce better signals. The incentive structure says otherwise.
Some firms allow a signal or a piece of infrastructure to be attributed to multiple authors, and it certainly helps, but it is still padding around the fact that comp for a signal is zero-sum. It would absolutely help if a larger percentage of bonuses were simply a slice of the firm’s or pod’s PnL. But if the slice is too large, you disincentivize personal brilliance and outlier contributions. It’s a fine line to walk.
Compensation Black Box
Similar to the above. You have no idea if you are being paid fairly.
Comp is shrouded in secrecy. The person sitting next to you might make twice what you make for similar output. You will never know. Firms treat salary information like classified intelligence. Even when you compare notes with friends at other firms, the numbers are noisy. Different bonus structures. Different profit-sharing arrangements.
Did you know that firms and teams have retention bonuses? Did you know that they hand them out when they sense people are about to leave? There is a fine line between being extremely productive and capable while signaling that you could be gone any moment, and still being trustworthy enough to be “promoted” into the “inner circle”.
Career Path Ambiguity
What is the endgame?
Become a PM? Stay a senior researcher forever? Move to tech? The path is unclear, and the skills you are building may or may not transfer.
Being a PM means stress, responsibility, and decisions you cannot delegate. Some people want that. Some people look at their PM and think: I never want that job.
But if you do not want to be a PM, what are you building toward?
Then there is the plateau. You are senior. You are good. But you are not PM material, either by choice or by circumstance. Now what? You are too expensive to be junior, too specialized to move elsewhere. The “up or out” pressure never goes away.
There are only a few “head of research” positions in any firm. Can you make it there? There is not much room for a stable equilibrium where you are simply good at your job for twenty years and can still expect your salary and bonuses to rise.
Researcher is not a terminal career goal in many shops.
The Control You Don’t Have
Your job depends on PnL. But you do not control the portfolio.
The PM decides sizing, risk allocation, and which signals to run. You can build the best signal of your career and watch it get allocated 0.5% of the book. You can build something mediocre and see it sized up because it happens to diversify existing exposures. The connection between your work and your comp runs through decisions you do not make.
This is the black box above you. You hand over signals. Decisions come back. You do not always understand why. The PM has context you lack: risk considerations, correlation with other signals, capital constraints, management preferences. But from your seat, it feels arbitrary. Why did my signal get cut? Why is that signal still running at full size?
You do not get to ask these questions without seeming difficult. The information asymmetry is structural. You are accountable for results you cannot fully control.
Sometimes researchers build exceptional signals that never get deployed because of timing, capital constraints, or portfolio considerations that have nothing to do with signal quality. The signal was good. The researcher did everything right, and it still did not matter.
The Signal Graveyard
Most of your ideas do not work.
You spend weeks on something that looked promising. You clean the data. You engineer the features. You run the backtests. And it dies in validation. The in-sample looked great. The out-of-sample looks like noise.
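The in-sample/out-of-sample gap is easy to demonstrate. The sketch below is a toy illustration, not real market data: it searches many purely random candidate signals against purely random returns, keeps the best in-sample fit, and shows that the apparent edge does not survive out of sample.

```python
import numpy as np

rng = np.random.default_rng(0)

# Purely random "market" returns: by construction, there is no alpha to find.
n_days = 1000
returns = rng.normal(0, 0.01, n_days)
in_sample, out_sample = returns[:500], returns[500:]

# Try many random candidate signals and keep the best in-sample correlation.
best_corr, best_signal = -np.inf, None
for _ in range(200):
    signal = rng.normal(0, 1, n_days)
    corr = np.corrcoef(signal[:500], in_sample)[0, 1]
    if corr > best_corr:
        best_corr, best_signal = corr, signal

# The "winner" was selected for in-sample fit, so its out-of-sample
# correlation typically reverts toward zero.
oos_corr = np.corrcoef(best_signal[500:], out_sample)[0, 1]
print(f"in-sample corr:     {best_corr:+.3f}")
print(f"out-of-sample corr: {oos_corr:+.3f}")
```

Selection over enough candidates manufactures an in-sample edge out of pure noise, which is exactly why validation kills so many promising-looking ideas.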
This is expected. Everyone tells you that most research fails. Knowing that does not make it less demoralizing when you are on your fifth dead end in a row. When your last month of work produced nothing usable. When you start to wonder if you have lost whatever intuition made you good at this.
The signal graveyard is not a metaphor. It is a folder on your computer filled with abandoned projects. Each one represents weeks of effort. Each one taught you something, and yet, none of them made any money.
There is a particular flavor of despair that comes from running yet another backtest and watching the equity curve go nowhere. You did everything right. The idea was reasonable. The implementation was clean. And it still did not work.
The researchers who survive are the ones who can metabolize failure without letting it accumulate. Each dead signal is data, not a verdict. But maintaining that mindset across years of mostly-failed experiments requires psychological resilience that most people are not prepared for.
The Boring Work Mandate
You did not get a PhD to debug data pipelines.
But here you are. Spending 40% of your time on infrastructure, monitoring, and maintenance. The PM says it is important. You know it is important. You still resent it.
Data quality is a grind. Something changed upstream. A vendor reformatted their files. A ticker changed hands. You spend three days hunting a bug that turns out to be a data issue.
The gap between the intellectual self-image of a quant researcher and the daily reality of the job is substantial. You imagined yourself discovering alpha. You find yourself writing SQL queries. The discovery happens, but it is maybe 30% of the actual job. The rest is plumbing.
The Information Asymmetry
You write detailed research reports. You get back one-line responses.
You do not know if your signal is running, at what size, or how it is performing. The information flows up but not down. Decisions emerge. Allocations change. You are the last to know why.
Then there is knowledge hoarding. The senior researcher who has been there for ten years knows where all the bodies are buried. They know which data sources are reliable, which features have hidden lookahead bias. They are not writing it down. Maybe it is job security. Maybe they are just busy. Either way, you are operating with incomplete information.
When that person leaves, the knowledge goes with them. You discover gaps you did not know existed. You do not have sufficient information to “put together the puzzle” at your next job.
The Randomness Problem
Even if you are skilled, outcomes are stochastic.
You can be a good researcher and have a bad year. You can be a mediocre researcher and have a great year. Over time, skill shows. But “over time” might be longer than your patience or your employer’s.
This creates a credibility problem. You know your process is sound. You know you are doing good research. But your recent results are weak, and results are what people see. Your PM is getting pressure from above. Your job security depends on PnL that is 60% variance and 40% skill on any reasonable time horizon.
The worst case is when your pod gets fired because of your PM’s decisions, not your research. You built good signals. The PM sized them wrong. Or the portfolio construction created exposures that tanked in a regime change. Your signal was fine. You still got cut.
This is the hardest thing to accept. You can do everything right and still lose. The sample sizes are too small. The signal-to-noise ratio is too low. Luck is a first-order factor in any reasonable evaluation period.
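A toy Monte Carlo makes the small-sample point concrete. The Sharpe ratios and horizon below are illustrative assumptions, not claims about any real book: a strategy with a true annualized Sharpe of 1.0 versus one with no edge at all, each observed over a single 252-day year.

```python
import numpy as np

rng = np.random.default_rng(42)

# Daily returns for a "skilled" strategy (true annualized Sharpe ~1.0)
# versus a zero-edge one, simulated over many independent one-year samples.
days, years = 252, 10_000
mu = 1.0 / np.sqrt(252)  # daily mean implied by an annualized Sharpe of 1
skilled = rng.normal(mu, 1.0, (years, days))
no_edge = rng.normal(0.0, 1.0, (years, days))

# Realized annualized Sharpe for each simulated year.
sharpe_skilled = skilled.mean(axis=1) / skilled.std(axis=1) * np.sqrt(252)
sharpe_no_edge = no_edge.mean(axis=1) / no_edge.std(axis=1) * np.sqrt(252)

# How often does a single year mislead you in each direction?
print(f"skilled strategy posts Sharpe < 0:  {(sharpe_skilled < 0).mean():.1%}")
print(f"no-edge strategy posts Sharpe > 1:  {(sharpe_no_edge > 1).mean():.1%}")
```

Under these assumptions, a genuinely good strategy posts a losing year a meaningful fraction of the time, and a worthless one posts a Sharpe above 1 about as often. One year of PnL is a very noisy measure of skill.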
Conclusion
The woes of a quant researcher are tensions to be managed across an entire career. The attribution problem, the compensation opacity, the career ambiguity, the control you lack, the signal graveyard, the boring work mandate, the alpha decay, the information asymmetry, the silo problem, the randomness.
If you are a researcher, I hope you feel less alone.
If you are a PM, the things that keep your researchers up at night are the things that, if you address them, will make you a better manager. The attribution clarity. The communication. The career visibility. The acknowledgment that most research fails.