Zombie Ideas and Orphan Decisions in AI Work

What happens when reasoning disappears but decisions remain

Marcia Coulter

4/18/2026 · 2 min read

AI can generate answers quickly.

It can also generate problems just as quickly—only harder to see.

Two of those problems are easy to overlook:

Zombie Ideas and Orphan Decisions.

A Zombie Idea is an idea that continues to influence decisions after the reasoning behind it is gone or no longer valid.

The term itself isn’t new. It’s been used in business contexts—such as in Forbes—to describe outdated ideas that persist beyond their usefulness.

What changes with AI is the speed and scale at which those ideas can be created and reused.

An Orphan Decision is a decision made with AI where the reasoning that led to it is no longer available.

Both exist without AI.

AI just accelerates them.

Here’s why.

Most AI systems operate through regeneration.

You ask a question.
It produces an answer.
Then the next interaction begins from a new starting point.

Even within a single session, earlier reasoning can fade from view.
Across sessions, it’s often gone entirely.
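
To make that concrete, here's a minimal sketch in Python. The `generate` function is hypothetical, standing in for any real model API; the point is simply that the model sees only what is explicitly passed in:

```python
# Hypothetical stand-in for a real model API. The model sees only
# the prompt and whatever context we choose to carry forward.
def generate(prompt: str, context: list[str]) -> str:
    # A real call would go here; we echo instead to keep this runnable.
    return f"answer to {prompt!r} given {len(context)} prior turns"

# Session 1: the reasoning lives only inside this list.
session_1: list[str] = []
session_1.append(generate("Should we cache results per user?", session_1))
session_1.append(generate("Given that, pick a TTL.", session_1))

# Session 2 starts from an empty context. The conclusion can be
# pasted forward, but the chain that produced it usually isn't.
session_2: list[str] = []
session_2.append(generate("We cache per user, 5-minute TTL. Extend this.", session_2))
```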

That creates a subtle dynamic.

Each step can look reasonable on its own.

But the connection between steps weakens over time.

Assumptions shift.
Definitions change.
Key decisions lose their original context.

This is how Zombie Ideas form.

An idea is generated. It seems useful. It gets reused.

But the conditions that made it valid may no longer apply—and there’s no easy way to check.

So it continues forward anyway.
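
The missing piece is that the validity conditions were never stored next to the idea. A minimal sketch of what storing them could look like, with hypothetical names:

```python
from dataclasses import dataclass

@dataclass
class Idea:
    statement: str
    assumptions: list[str]  # the conditions under which this idea was valid

def still_valid(idea: Idea, current_facts: set[str]) -> bool:
    """An idea is only as alive as its assumptions."""
    return all(a in current_facts for a in idea.assumptions)

idea = Idea(
    statement="Run the batch jobs nightly",
    assumptions=["traffic is low overnight", "all data arrives by 11pm"],
)

# The second assumption has quietly stopped holding.
facts = {"traffic is low overnight"}
print(still_valid(idea, facts))  # False -- reused anyway, it's a zombie
```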

This is also how Orphan Decisions form.

A decision gets made based on a chain of reasoning.

Later, the decision remains—but the reasoning doesn’t.

When someone asks “Why are we doing this?” the answer is unclear or reconstructed after the fact.
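
In data terms, an Orphan Decision is a record whose link to its reasoning dangles. A small hypothetical sketch:

```python
# The decision survives in a durable store.
decisions = {
    "d-42": {"what": "Migrate reporting to service X", "reasoning_id": "r-17"},
}

# The reasoning lived in a chat session, a scratchpad, someone's memory.
# It was never persisted, so the lookup comes back empty.
reasoning: dict[str, str] = {}

why = reasoning.get(decisions["d-42"]["reasoning_id"])
print(why)  # None -- the decision remains, the "why" does not
```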

In low-stakes situations, this is an inconvenience.

In higher-stakes environments—business, healthcare, policy—it becomes a real risk.

Not because the system is unintelligent.

But because the reasoning is not stable.

This is the key distinction.

The issue isn’t that AI produces bad outputs.

It’s that it doesn’t reliably preserve the structure behind those outputs.

Which means the faster we move, the more we generate—

the more likely we are to accumulate ideas and decisions that are disconnected from their origins.

These problems aren’t about intelligence.

They’re about continuity.

If we want to use AI in environments that require consistency, traceability, and accountability, we need to address that directly.

Not by slowing AI down.

But by ensuring that reasoning doesn’t disappear between steps.

Because without stable reasoning, speed becomes a liability.

With it, speed becomes an advantage.
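
As one illustration of what "reasoning that doesn't disappear" could mean, here is a minimal sketch: a decision log where the reasoning and assumptions are written down at the moment of decision. The field names are hypothetical, a sketch rather than a prescription:

```python
import json
import time

def record_decision(log_path: str, decision: str, reasoning: str,
                    assumptions: list[str]) -> None:
    """Append a decision together with its reasoning and assumptions,
    so later steps can read the 'why' instead of reconstructing it."""
    entry = {
        "timestamp": time.time(),
        "decision": decision,
        "reasoning": reasoning,
        "assumptions": assumptions,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")

record_decision(
    "decisions.jsonl",
    decision="Cache per user with a 5-minute TTL",
    reasoning="Profiling showed 80% of reads repeat within 5 minutes",
    assumptions=["workload stays read-heavy", "5 minutes of staleness is acceptable"],
)
```

Nothing here slows anything down. It only asks that each step leave its reasoning behind in a form the next step can read.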

If you’re interested in how this can be addressed in practice, I’m happy to share a short working example.