April 2026

1. Compute Stops Being the Constraint (Finally)

By 2035, raw compute is no longer something modellers think about. Cloud-native execution, massive parallelism, GPU acceleration, and on-demand elasticity mean that “Can we afford to run this?” quietly disappears as a question. What matters instead is how fast insight cycles complete, not how long individual runs take.

This isn’t about bigger models. It’s about orders of magnitude more exploration: thousands or millions of scenarios becoming routine rather than exceptional.

Once computation becomes effectively infinite, the limiting factor moves somewhere much closer to home.

2. Models Become Modular, Not Monolithic

The 2035 model is unlikely to be a single engine. It’s a composition of components (mortality, lapses, expenses, assets, reinsurance, capital, management actions), each separable, swappable, and independently testable.

This modularity is what allows:

  • Continuous evolution without destabilising everything
  • Parallel development by different teams
  • AI to reason about cause and effect instead of just outputs

Monolithic “all-in-one” engines struggle here. Modular architectures thrive. This is one reason why platforms built around transparent, composable actuarial logic, like Mo.net, age better than those built around opaque execution pipelines.
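The composition idea can be sketched in a few lines. Everything here is illustrative (the component names, the flat decrement rates, and the dict-based state are placeholder assumptions, not any particular platform's API); the point is that a model becomes an ordered pipeline of independently testable pieces.

```python
from typing import Callable

# Illustrative sketch: each risk driver is an independent, swappable
# component that transforms a projection state. Rates are placeholders.
Component = Callable[[dict], dict]

def mortality(state: dict) -> dict:
    # Apply a flat mortality decrement (placeholder assumption: 1% p.a.)
    state["in_force"] *= 1 - 0.01
    return state

def lapses(state: dict) -> dict:
    # Apply a flat lapse decrement (placeholder assumption: 5% p.a.)
    state["in_force"] *= 1 - 0.05
    return state

def compose(*components: Component) -> Component:
    # A model is just an ordered composition of components, so each
    # piece can be unit-tested, swapped, or developed in parallel.
    def engine(state: dict) -> dict:
        for component in components:
            state = component(state)
        return state
    return engine

model = compose(mortality, lapses)
result = model({"in_force": 1000.0})
```

Swapping the lapse basis, or inserting a reinsurance component, means replacing one function rather than re-opening a monolithic engine.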

3. Structured Transparency Replaces Black Boxes

In 2035, transparency is not a philosophical preference but an operational requirement. When models are always on, feeding real decisions, nobody accepts “trust the engine” as an answer. Regulators, boards, and capital providers expect traceability – what changed, why it mattered, and where judgement entered.

This requires:

  • Explicit assumption structures
  • Machine-readable model logic
  • Built-in explainability, not bolt-on documentation

Ironically, this level of transparency is easier to achieve with disciplined platforms than with sprawling bespoke codebases.
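One way to picture an explicit, machine-readable assumption structure is to make each assumption carry its own provenance, so every revision produces an audit record as a side effect. This is a minimal sketch under assumed conventions; the field names and revision helper are hypothetical, not a real platform's schema.

```python
import json
from dataclasses import dataclass

# Hypothetical sketch: an assumption carries its own provenance, so
# "what changed, why it mattered, and where judgement entered" is
# machine-readable rather than buried in documentation.
@dataclass(frozen=True)
class Assumption:
    name: str
    value: float
    rationale: str   # where judgement entered
    set_by: str
    set_on: str

def revise(a: Assumption, value: float, rationale: str,
           set_by: str, set_on: str) -> tuple[Assumption, dict]:
    # Every revision yields both the new assumption and an audit record.
    new = Assumption(a.name, value, rationale, set_by, set_on)
    audit = {"assumption": a.name, "old": a.value, "new": value,
             "why": rationale, "who": set_by, "when": set_on}
    return new, audit

base = Assumption("lapse_rate", 0.05, "2024 experience study",
                  "A. Analyst", "2034-01-10")
revised, record = revise(base, 0.045, "2034 experience update",
                         "B. Reviewer", "2035-02-01")
print(json.dumps(record))
```

Because the record is plain structured data, traceability becomes a query rather than an archaeology exercise.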

4. AI Becomes a Modelling Co-Pilot, Not a Feature

AI in 2035 is not a separate tool you “use”, but an embedded capability:

  • Highlighting sensitivities before you ask
  • Surfacing non-linear behaviour automatically
  • Comparing today’s results to historical patterns
  • Drafting explanations, not conclusions

Critically, AI does not decide which assumptions are right; it decides where your attention is most valuable. However, this only works if models are fast, structured, and consistent. AI doesn’t cope well with bespoke chaos. It amplifies both good architecture and bad.
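The attention-directing idea does not even require machine learning to illustrate: rank assumptions by how much a small relative shock moves the result, and surface the largest movers first. The toy model and assumption names below are placeholders invented for this sketch.

```python
# Sketch of attention-direction: rank assumptions by how much a small
# relative shock moves the result. The "model" here is a toy closed
# form with placeholder assumption names, purely for illustration.

def liability(assumptions: dict) -> float:
    q = assumptions["mortality"]
    w = assumptions["lapse"]
    e = assumptions["expense"]
    # Toy run-off: surviving in-force plus a fixed expense load.
    return 1000.0 * (1 - q) * (1 - w) + e

def rank_sensitivities(model, assumptions: dict, shock: float = 0.01):
    base = model(assumptions)
    impacts = {}
    for name, value in assumptions.items():
        bumped = dict(assumptions, **{name: value * (1 + shock)})
        impacts[name] = abs(model(bumped) - base)
    # Largest impact first: this is where attention is most valuable.
    return sorted(impacts.items(), key=lambda kv: kv[1], reverse=True)

ranking = rank_sensitivities(
    liability, {"mortality": 0.01, "lapse": 0.05, "expense": 50.0})
```

A co-pilot layered on a fast, structured model can run this kind of sweep continuously and draft the explanation; a slow or bespoke model cannot support the loop at all.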

5. Data Pipelines Become Boring

In 2026, data integration still consumes a significant amount of end-to-end modelling effort. But by 2035, data pipelines are dull, standardised, and reliable. Not because data got simpler, but because firms finally invested in:

  • Clean interfaces between data and models
  • Clear ownership of transformations
  • End-to-end lineage and business glossaries

When data stops being the daily fire fight, modelling teams can finally focus on thinking again.
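End-to-end lineage, in its simplest form, just means every transformation records itself alongside the data it produces. The helper and step names below are hypothetical, a sketch of the pattern rather than any specific pipeline tool.

```python
# Illustrative sketch: each transformation appends itself to a lineage
# trail, so any value in a model input can be traced back to source.
def with_lineage(record: dict, step: str, **updates) -> dict:
    out = {**record, **updates}
    out["_lineage"] = record.get("_lineage", []) + [step]
    return out

raw = {"premium": "1,200", "_lineage": ["extract:policy_admin"]}
clean = with_lineage(raw, "clean:strip_thousands", premium=1200.0)
monthly = with_lineage(clean, "derive:monthly_premium",
                       premium_monthly=100.0)
```

Clear ownership then becomes answerable from the trail itself: each step names the transformation responsible for the value it produced.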

6. Governance Moves from Gates to Guardrails

Instead of governance being about approval gates (“has this run been signed off?”), it becomes about guardrails. This is a subtle but profound shift. Guardrails ask:

  • Which assumptions are allowed to move?
  • Which ranges trigger escalation?
  • What behaviour is automatically logged and explainable?

Technology enables this by making behaviour observable rather than controlled through friction. Fast models demand smarter governance, not heavier governance.
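A guardrail is easy to sketch: an assumption may move freely within an agreed range, anything outside triggers escalation, and every check is logged automatically. The guardrail table and range values below are invented for illustration.

```python
import logging

# Hypothetical guardrail sketch: ranges within which assumptions may
# move freely; anything outside triggers escalation, and every check
# is logged so behaviour is observable rather than gated by friction.
GUARDRAILS = {"lapse_rate": (0.02, 0.08), "mortality_loading": (0.9, 1.1)}

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("guardrails")

def check(name: str, value: float) -> str:
    lo, hi = GUARDRAILS[name]
    status = "ok" if lo <= value <= hi else "escalate"
    log.info("%s=%s -> %s (allowed %s..%s)", name, value, status, lo, hi)
    return status

status = check("lapse_rate", 0.12)  # outside the agreed range
```

Note what is absent: no sign-off queue. The run proceeds, the breach is visible, and the escalation is targeted, which is what "smarter, not heavier" governance looks like in practice.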

7. Human Interfaces Catch Up with Machine Speed

One of the least discussed enablers is interface design. In 2035, actuaries don’t scroll through output files. They interact with surfaces, ranges, and dynamic explanations. Visualisation isn’t just cosmetic. The model speaks in shapes and responses, not tables. Without this, even the fastest model is wasted.

Conclusion

Unfortunately, none of these technologies matter in isolation. The real enabler of the 2035 vision is coherence, i.e. models that are fast enough for exploration, structured enough for AI, transparent enough for trust, and governed enough for reality.

That’s why the future doesn’t belong to:

  • Fully bespoke open-source estates, or
  • Fully opaque vendor platforms

It belongs to modelling environments that blend discipline with freedom and treat technology as a way to remove friction, not add ceremony.

