Science doesn’t move at the speed of insight. It moves at the speed of execution.
A researcher has a hypothesis. To test it, they face dozens of small tasks: finding the right papers, setting up the right tools, running the analysis, checking the results, figuring out what to do next. Each step is tractable on its own. Strung together across weeks and months, those steps determine how many questions actually get asked.
Researchers spend more time on logistics than on the science itself. And results that should build on each other don’t, because the overhead of connecting them is too high.
This is the hidden throttle on discovery. Not funding. Not compute. Not talent. The human overhead of running the loop.
Cycle time
In research, the person who tests ten hypotheses while someone else tests one doesn’t just move faster. They learn faster. They find things others miss. They develop intuition that can’t be taught.
Most of what separates productive researchers from unproductive ones isn’t brilliance. It’s cycle time: how quickly they can go from question to answer to next question.
The tools have gotten better. Simulations run faster. Databases are more accessible. Literature is searchable. But the work of connecting these pieces into a coherent loop remains manual. Every researcher reinvents it. Most do it badly.
What if that changed? What if the execution layer of science was infrastructure, not labor?
Not AI that does science. AI that handles the mechanics so scientists can focus on direction. The researcher decides what questions matter. The system handles the how. It finds what’s been done before, sets up the analysis, runs it, checks the results against what’s known, and surfaces what matters for the next step.
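As a concrete sketch, here’s roughly what one pass through that loop could look like in code. Nothing here is a real system: the four callables are placeholders for tools that already exist (a literature API, a workflow engine, a results database), and only the glue is imagined.

```python
from typing import Any, Callable

def run_cycle(
    question: str,
    find_prior_work: Callable[[str], list],      # e.g. wraps a literature-search API
    setup_analysis: Callable[[str, list], Any],  # picks and configures the right tools
    execute: Callable[[Any], Any],               # runs the analysis
    validate: Callable[[Any, list], list],       # checks the output against what's known
) -> dict:
    """One pass through the loop. The researcher picks the question;
    everything below this line is mechanics."""
    prior = find_prior_work(question)
    analysis = setup_analysis(question, prior)
    result = execute(analysis)
    flags = validate(result, prior)  # discrepancies worth a human look
    return {"question": question, "result": result, "flags": flags}
```

The point isn’t the code. It’s that every callable already exists somewhere as a tool; what’s missing is the infrastructure that strings them together.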
This isn’t about replacing scientists. It’s about removing the friction that prevents them from doing what they’re actually good at. The strategic thinking, the intuition for what matters, the ability to see patterns across domains. That’s human. The tedious, repetitive overhead that stretches timelines. That’s infrastructure.
What I see
I work in materials science. What I see: brilliant people generating results that should compound but don’t. One researcher discovers something. Somewhere, another has found something related. Neither knows about the other. Or they do, but the overhead of connecting the work is too high, so it doesn’t happen.
The field isn’t short on data. It’s short on integration. Papers cite each other without building on each other. Results get validated once and never reconciled with what others find later.
My broader thesis is about the gap between frontier science and real deployment. Most materials discoveries die not because they couldn’t work, but because no one does the work of making them work at scale. This is the same pattern, applied earlier. Before you can translate a discovery, you have to make it. And the rate of discovery is throttled by execution overhead.
Both are infrastructure problems masquerading as talent problems. Both are under-built because they don’t produce papers. They just make other work possible.
This moment is different because language models can now reason about scientific workflows well enough to connect tools, process outputs, and pull information from papers. The pieces exist. They just need to be orchestrated.
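To make “orchestrated” less abstract, here is a minimal sketch of the pattern, assuming only a `call_model` function that wraps some LLM API and returns a JSON decision, plus a registry of whatever tools a lab already has. The names and the JSON shape are mine, not any particular library’s:

```python
import json
from typing import Callable

def orchestrate(task: str, tools: dict[str, Callable],
                call_model: Callable[[str], str], max_steps: int = 8) -> str:
    """The model picks a tool, the harness runs it, the output feeds back in."""
    context = f"Task: {task}"
    for _ in range(max_steps):
        # Assumed contract: the model replies with JSON like
        # {"tool": "search_papers", "args": {...}} or {"tool": null, "answer": "..."}
        decision = json.loads(call_model(context))
        if decision.get("tool") is None:          # the model decides it's done
            return decision["answer"]
        output = tools[decision["tool"]](**decision.get("args", {}))
        context += f"\n{decision['tool']} -> {output}"  # feed the result back in
    return context  # out of budget: hand the trace back to the researcher
```

Registering tools is just a dictionary mapping names to functions. That’s the whole trick: the reasoning lives in the model, the work lives in tools that already exist.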
AI is generating results faster than humans can process them. The bottleneck is shifting from “can we compute this” to “can we even run the loop fast enough to learn from what we compute.”
The infrastructure that closes this gap will be foundational. I think it’s one of the highest-leverage things to build right now. Not because it’s novel, but because it’s necessary.
I’m actively working on this. You can see what it looks like in practice on GitHub.