How Outcomes Change

Method

AI-2027 presents a detailed scenario about how advanced AI systems might develop and interact with human institutions. Many outcomes in the scenario depend on assumptions about how those systems are deployed and governed.

This project asks a narrow question: which outcomes change if those assumptions change?

In particular, it compares AI-2027's largely unconstrained deployment model with alternatives that are more bounded, more refusal-capable, and more externally auditable.

What "changes" means here

"Changes" refers to differences in:

  • what actions can be executed
  • what actions can be refused or paused
  • what can be independently verified after the fact

It does not mean a guarantee of safety, alignment, or global adoption.
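The three properties above can be illustrated with a minimal sketch of an action gate. Everything here is hypothetical: the action names, the allowlist policy, and the `AuditedExecutor` class are illustrative choices, not anything specified by AI-2027 or this project.

```python
from dataclasses import dataclass, field

# Hypothetical policy, for illustration only.
ALLOWED_ACTIONS = {"read_file", "summarize"}   # bounded execution: explicit allowlist
PAUSED_ACTIONS = {"send_email"}                # refusal semantics: held for review

@dataclass
class AuditedExecutor:
    """Minimal gate showing execution, refusal/pause, and after-the-fact audit."""
    audit_log: list = field(default_factory=list)  # record for independent verification

    def request(self, action: str) -> str:
        if action in ALLOWED_ACTIONS:
            outcome = "executed"
        elif action in PAUSED_ACTIONS:
            outcome = "paused"       # not silently executed; awaits review
        else:
            outcome = "refused"      # default-deny: anything off the allowlist
        self.audit_log.append((action, outcome))  # every decision is logged
        return outcome

ex = AuditedExecutor()
print(ex.request("read_file"))    # executed
print(ex.request("send_email"))   # paused
print(ex.request("modify_infra")) # refused
print(ex.audit_log)               # complete trail, auditable after the fact
```

The design choice the sketch encodes is default-deny: the interesting difference from an unconstrained model is not what is allowed, but that anything unlisted is refused and every decision leaves a verifiable record.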

Response Structure

Each response follows a consistent structure:

  1. What AI-2027 claims here — A neutral summary of the specific scenario element being examined.
  2. Assumptions underneath — The implicit or explicit assumptions that the scenario element depends on.
  3. What changes under different deployment assumptions — How the scenario dynamics change under bounded execution, refusal semantics, and external auditability.
  4. Outcome difference — A concise synthesis of what changes in the scenario's trajectory or decision points.
  5. What this does not solve — An explicit accounting of remaining risks and unresolved problems.
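The five-part structure above can be captured as a simple record type. The field names below are illustrative, not an official schema for the project.

```python
from dataclasses import dataclass

@dataclass
class Response:
    """One response, mirroring the five numbered sections above."""
    claim_summary: str            # 1. What AI-2027 claims here
    assumptions: list[str]        # 2. Assumptions underneath
    deployment_changes: str       # 3. What changes under different deployment assumptions
    outcome_difference: str       # 4. Outcome difference
    unresolved: list[str]         # 5. What this does not solve
```

Keeping sections 4 and 5 as separate fields enforces the method's discipline: every synthesized outcome difference must be paired with an explicit accounting of what remains unsolved.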