Why World Models Will Fail in Healthcare


Everyone is excited about world models.

The idea is compelling: learn a latent representation of reality, predict forward, and make better decisions than systems trained on surface-level data.

But in healthcare, most world model efforts are going to fail.

Not because the idea is wrong — but because the implementation will quietly revert to the same flawed assumptions that have limited healthcare AI for decades.


The Core Mistake

Most healthcare systems today implicitly model:

health outcomes = f(measured clinical variables)

Vitals. Labs. Diagnoses. Billing codes.

These are treated as the “state of the world.”

They’re not.

They are partial, delayed, and often distorted observations of an underlying system we barely understand.

World models are supposed to fix this by learning latent state.

But in practice, most teams will still anchor their models to what is easily measurable — because that’s what exists in the data.

So the equation doesn’t actually change.

It just gets a deeper neural network.
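A toy simulation makes the point (all numbers here are made up, purely to illustrate): if the outcome is driven by a latent factor like adherence, and the record only holds a noisy projection of that factor, then no model of the record alone can close the gap, no matter how deep the network.

```python
import random

random.seed(0)

# Hypothetical setup: adherence is the latent driver of outcome,
# but the dataset only records a lab value -- a noisy, partial
# projection of that latent state.
def simulate_patient():
    adherence = random.random()                   # latent: never recorded
    lab = 0.5 * adherence + random.gauss(0, 0.3)  # noisy projection
    outcome = 1.0 if adherence > 0.5 else 0.0     # driven by the latent state
    return lab, outcome

data = [simulate_patient() for _ in range(10_000)]

# A model of f(measured clinical variables) can only threshold the lab.
def accuracy(threshold):
    return sum((lab > threshold) == (outcome == 1.0)
               for lab, outcome in data) / len(data)

best = max(accuracy(t / 100) for t in range(100))
print(f"best achievable accuracy from the recorded lab alone: {best:.2f}")
# Well below 1.0: the ceiling is set by observation noise, not model capacity.
```

Swapping the threshold for a deeper architecture changes nothing here; the information simply is not in the record.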


Latent State Is Not Just “Hidden Data”

A true world model in healthcare would need to represent things like:

  • patient adherence
  • family involvement
  • cognitive decline
  • socioeconomic stability
  • trust in the care system
  • caregiver reliability

These are not just “missing features.”

They are structural drivers of outcomes — and they are rarely captured, inconsistently observed, and often only inferred indirectly.

If your model doesn’t represent these, it’s not modeling the world.

It’s modeling a projection of it.
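One hedged sketch of what representing latent state could look like in code: hold latent factors as beliefs with explicit uncertainty, updated from indirect signals rather than stored as facts. The refill-gap mapping and update constants below are hypothetical, chosen only to make the shape of the idea concrete.

```python
from dataclasses import dataclass, field

@dataclass
class Belief:
    mean: float
    variance: float  # high variance = "we are mostly guessing"

@dataclass
class PatientState:
    # Observed: partial, delayed projections.
    vitals: dict = field(default_factory=dict)
    labs: dict = field(default_factory=dict)
    # Latent: structural drivers, held as beliefs, never as point values.
    adherence: Belief = field(default_factory=lambda: Belief(0.5, 1.0))
    caregiver_reliability: Belief = field(default_factory=lambda: Belief(0.5, 1.0))
    socioeconomic_stability: Belief = field(default_factory=lambda: Belief(0.5, 1.0))

def update_adherence(state, refill_gap_days):
    # Crude Kalman-style update from an indirect signal (refill gaps).
    # The 30-day scale and 0.25 observation variance are placeholders.
    obs = max(0.0, 1.0 - refill_gap_days / 30)
    k = state.adherence.variance / (state.adherence.variance + 0.25)
    state.adherence = Belief(
        state.adherence.mean + k * (obs - state.adherence.mean),
        (1 - k) * state.adherence.variance,
    )

state = PatientState()
update_adherence(state, refill_gap_days=30)  # a 30-day refill gap observed
print(state.adherence)  # belief shifts down, variance shrinks
```

The point is not this particular update rule; it is that latent factors enter the state as distributions that the system is obligated to keep revising.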


The JEPA Trap in Healthcare

Approaches like JEPA (Joint Embedding Predictive Architecture) aim to learn predictive structure without relying on labels.

This is directionally correct.

But in healthcare, there’s a trap:

If the training signal is still derived from observable clinical data, the model will learn to predict what is recorded, not what is real.

You end up with:

  • better embeddings of flawed observations
  • more coherent predictions of incomplete systems
  • higher confidence in the wrong abstractions
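A minimal sketch shows why (the linear "encoders" and the record function are hypothetical stand-ins, not a real JEPA implementation): in a JEPA-style objective, both the context and the target pass through the record, so two patients in radically different latent states are indistinguishable to the loss.

```python
import random

random.seed(0)

# Hypothetical linear "encoders", just to make the objective concrete.
def rand_matrix(rows, cols):
    return [[random.gauss(0, 1) for _ in range(cols)] for _ in range(rows)]

def matvec(m, v):
    return [sum(w * x for w, x in zip(row, v)) for row in m]

W_enc = rand_matrix(4, 8)   # encoder over *recorded* features
W_pred = rand_matrix(4, 4)  # predictor in embedding space

def record(full_state):
    # The record keeps the first 8 dimensions; the latent tail is never written down.
    return full_state[:8]

def jepa_loss(context_record, target_record):
    # Both terms are built from records. Anything real but unrecorded
    # cannot enter either side of the objective.
    z_ctx = matvec(W_enc, context_record)
    z_tgt = matvec(W_enc, target_record)
    return sum((p - t) ** 2 for p, t in zip(matvec(W_pred, z_ctx), z_tgt))

context = record([random.gauss(0, 1) for _ in range(12)])
future_a = [random.gauss(0, 1) for _ in range(12)]
# Same record, very different latent state:
future_b = future_a[:8] + [x + 10.0 for x in future_a[8:]]

assert jepa_loss(context, record(future_a)) == jepa_loss(context, record(future_b))
```

The loss is exactly identical for both futures. Gradient descent has no reason to represent the difference, so it won't.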

Why This Matters

Healthcare is not a clean, observable environment.

It is:

  • sparse
  • delayed
  • human-driven
  • adversarial to measurement (self-reports skew, codes follow billing incentives)

If your world model assumes that the recorded data approximates reality, it will fail in subtle but critical ways:

  • predicting stability where there is hidden risk
  • missing inflection points driven by non-clinical factors
  • optimizing for metrics that don’t translate to outcomes

What Would Have to Change

For world models to work in healthcare, the shift isn’t just architectural — it’s epistemological.

We would need to move toward:

health outcomes = f(latent patient state)

Where latent state is not inferred solely from clinical data, but from a broader, actively constructed understanding of the patient’s environment and behavior.

That likely means:

  • new data collection paradigms
  • tighter feedback loops
  • explicit modeling of uncertainty and missingness
  • systems designed to learn from interaction, not just observation
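As one deliberately simple example of modeling missingness explicitly: pair every value with an observed flag instead of imputing, so a model can learn that the decision to measure is itself a signal. (This encoding is a common pattern, shown here as a sketch, not a prescription.)

```python
def encode(features):
    # Each feature becomes (value, observed_flag). A missing lab is not
    # a value to impute; "this was never measured" is information in itself.
    out = []
    for value in features:
        if value is None:
            out.extend([0.0, 0.0])    # placeholder + "not measured"
        else:
            out.extend([value, 1.0])  # actual value + "measured"
    return out

print(encode([7.2, None, 98.6]))
# -> [7.2, 1.0, 0.0, 0.0, 98.6, 1.0]
```

A lab that was never ordered often says as much about the patient's situation as the lab value would have.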

The Bottom Line

World models will not fail because they are too ambitious.

They will fail because we will implement them conservatively — anchored to the same measurable variables that have always been insufficient.

If we don’t rethink what “state” actually means in healthcare, we won’t get world models.

We’ll just get better predictions of a system we still don’t understand.
