A year ago, the very first slide in our Series A deck had a single sentence on it: "Finance is about to have its Cursor moment."
At the time, it was a bet. Today, it feels like a description of what's already happening.
The Lesson from Coding
Cast your mind back to spring 2025. The coding world was in chaos. Dozens of AI tools were fighting for developer attention. GitHub Copilot shipped agent mode. Claude Code launched quietly and, within eight months, became the most-used AI coding tool. Cursor went from a VS Code fork nobody had heard of to a $29 billion company. A new word entered the vocabulary: "vibe coding."
Most people called it hype. Within a year, 95% of developers were using AI tools at least weekly.
But here's the part that gets lost in the highlight reel: the biggest gains didn't come from the models themselves.
The models were available to everyone. GPT, Claude, Gemini. Any developer could call the same APIs. What separated the teams pulling dramatically ahead from the ones seeing modest productivity bumps wasn't which model they used. It was what they built around the model.
The breakthrough was embedding the model inside the developer's natural environment, the IDE, so it had full awareness of the codebase rather than just the snippet in the chat window. Then extending it with tools that let it actually act: run tests, edit across files, open pull requests, catch its own errors. The model didn't get smarter in isolation. The system around it got dramatically better at giving the model the context and capabilities it needed to do real work.
Cursor, Claude Code, Copilot. These products aren't interesting because of which LLM sits inside them. They're interesting because of the surface area built around the LLM. The embedding. The extension. The tooling that turns a smart model into a capable agent.
The Trap That Looks Like Progress
Finance is at that same inflection point right now.
Teams everywhere are entering the "we're seeing great early results with Claude" phase. That's real, and it's a necessary step. But using Claude, or any AI tool, will soon be like saying you use the internet. It won't be a differentiator. It'll just be true of everyone.
And when that happens, the question won't be "are you using AI?" It will be "what are you doing with it that actually works at production quality?"
That's where most teams will hit the wall.
The first instinct is to build around the LLM as the core. Treat the model as the center of gravity and pull everything toward it: data, workflows, context. It feels natural. The model is the smart thing, so make it the hub.
The problem is that this breaks down fast in practice. To give the model what it needs to reason well, you have to feed it everything: your full P&L history, your forecast assumptions, your channel data, your inventory positions. That creates real data-security exposure. And even setting security aside, you hit a harder constraint: context windows have limits. The more you stuff in, the more the model loses the thread. Quality degrades exactly when the stakes are highest.
This is the trap that looks like progress. Teams spend months building elaborate pipelines to funnel data into an LLM and end up with something brittle, expensive, and not quite trustworthy enough to actually run the business on.
The teams that figured this out in coding learned it the hard way: don't bring the data to the model. Bring the model to the data. Embed the LLM inside a workflow and data layer that already understands the domain, one that gives the model exactly what it needs, exactly when it needs it, without drowning it in context or exposing data it shouldn't touch.
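To make the pattern concrete, here is a minimal sketch of what a workflow layer like this might look like. Everything here is hypothetical and illustrative, not any real product's API: instead of stuffing entire datasets into the prompt, the layer exposes narrow, scoped tools the model can call on demand, so only the slice needed for the current step ever enters the context.

```python
# Hypothetical sketch: a workflow layer that routes model tool calls
# to scoped data functions. All names and data are illustrative.

# Stand-in for the company's finance data store.
FINANCE_DATA = {
    "pnl": {"2024-Q4": {"revenue": 1200, "cogs": 700}},
    "inventory": {"sku-42": {"on_hand": 310}},
}

def get_pnl(quarter: str) -> dict:
    """Return only the requested quarter's P&L, never the full history."""
    return FINANCE_DATA["pnl"].get(quarter, {})

def get_inventory(sku: str) -> dict:
    """Return one SKU's position; the model never sees the whole table."""
    return FINANCE_DATA["inventory"].get(sku, {})

# The workflow layer: a registry mapping tool names to scoped functions.
TOOLS = {"get_pnl": get_pnl, "get_inventory": get_inventory}

def dispatch(tool_call: dict) -> dict:
    """Route a (simulated) model tool call to the matching data function."""
    fn = TOOLS[tool_call["name"]]
    return fn(**tool_call["arguments"])

# A tool call the model might emit while answering a finance question.
# Only this small result, not the whole P&L, goes back into its context.
result = dispatch({"name": "get_pnl", "arguments": {"quarter": "2024-Q4"}})
print(result)
```

The design choice this illustrates is the inversion the essay describes: the model reasons, but the system around it decides what data is reachable and in what granularity, which is what keeps both the context window and the security exposure bounded.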
That's the difference between an AI experiment and production-quality value. Not a smarter model. A better system for the model to operate inside.
Same Models, Wildly Different Outcomes
This is what the Cursor moment actually looked like in coding, and it's what the equivalent moment will look like in finance.
The winners won't be the teams that got access to the best model first. They'll be the teams that figured out how to embed that model inside the workflows where finance actually happens. Connected to the forecast, the P&L, the channel data, the inventory position. Extended with skills that understand how a real business's financial model works.
Same underlying models. Wildly different outcomes. The gap between them won't be intelligence. It will be infrastructure.
The Cursor moment for finance isn't about AI getting smarter. It's about the system around the AI catching up, and the teams that build on top of it pulling so far ahead that the gap stops being closeable.
That moment is closer than most finance teams think.