Resolves as YES if there is strong evidence that an intelligence explosion has taken place, or is taking place, before January 1st 2035. In the context of this question, an intelligence explosion is defined as a scenario where AI crosses the threshold of Artificial General Intelligence (AGI) and enters a rapid, recursive self-improvement cycle, leading it to vastly exceed human-level capabilities in most cognitive domains and to produce profound societal impact. The AI does not necessarily need to reach the threshold of Artificial Superintelligence (ASI) before January 1st 2035, but there should be clear indications that it is on that trajectory.
Examples of evidence that might confirm such an event include (but are not limited to):
- Major AI breakthroughs that are acknowledged by a broad consensus of experts as heralding near-superintelligence or superintelligence.
- Clear reports of self-improvement or iterative takeoff in AI systems. This may involve large language models (LLMs) continuously training themselves on synthetic data they generate, or augmenting their own architectures (e.g., via reinforcement learning)—whether entirely autonomously or with limited human assistance.
- Rapid reinforcement learning loops on top of pre-trained LLMs, where new models derived from previous ones exhibit consistently greater capabilities than the collective intelligence (AI or human) that produced them in the prior iteration. Crucially, fully autonomous self-improvement is not a strict requirement; the key is that each new “step” exceeds the intelligence level of the contributors to the previous step.
- Widespread, unprecedented societal impact directly attributable to an AI system operating at, or beyond, the level of AGI—potentially reflected in disruptive changes to the global economy, political systems, or technological landscape.
If, by January 1st 2035, there is no consensus or compelling evidence of such a rapid self-improvement cycle leading to or demonstrating AGI (with clear signs of heading toward ASI), the market will resolve as NO.
Important Note on Sustained Progress: For this question to resolve as YES, it is not sufficient for the AI’s capabilities to briefly surpass human-level intelligence and then plateau. There must be credible signs of sustained and accelerating returns on intelligence—an ongoing compounding of AI capabilities—rather than a momentary leap followed by stagnation.