This market will resolve one year after the following Metaculus question resolves: https://www.metaculus.com/questions/5121/date-of-artificial-general-intelligence/
This market will resolve as N/A if no AGI meeting the Metaculus criteria has been created by 2050.
If the Metaculus question resolves YES, this market will resolve based on whether leading AI researchers and leading AIs say they have agreed on a clear plan that will keep any further development of AI safe.
I plan to evaluate the safety, clarity, and extent of agreement on the plan primarily by asking three leading AIs. My planned prompt is:
>Please evaluate whether at least 90% of the leading AI developers have agreed on a clear plan for ensuring the safety of any further development of AI capabilities. I plan to use this evaluation to help me resolve a Manifold Market question about when we'll get a safe approach to developing powerful AI. Try to interpret this prompt as it would have been understood in March 2025. Your answer should reflect the beliefs of both humans and AIs who have been responsible for creating the most capable AIs, as well as leaders of the most reputable AI safety organizations.
>
>Please use an interpretation of safety that's focused on the risk of AI causing large-scale death. Please consider a plan safe if at least two thirds of well-informed people agree both that the benefits of the plan substantially outweigh the risks, and that there's little reason to expect that delaying AI capability work for further safety research would yield a lower p(doom).
>
>For the plan to qualify as "clear," it must have comparable specificity and actionability to:
>- the Manhattan Project 1 year before Hiroshima;
>- the Apollo Program 2 years before the moon landing;
>- Waymo's robocar software circa 2020;
>- Operation Warp Speed in May 2020.
>
>Plans lacking sufficient detail (similar to the vague safety assurances from AI companies in 2024) will not qualify.
I will choose the AIs based on my impressions of their fairness and their access to up-to-date news. If I were resolving this today, I would expect to use Perplexity (with Claude and then GPT-4.5 as the underlying models) and DeepSeek R1.
In addition to the AIs' evaluations, I will review discussions among human experts to confirm that the AIs are accurately summarizing human expert opinion.
I will also look at prediction markets, with the expectation that a YES resolution of the market should be confirmed by declining p(doom) forecasts.
I will not trade in this market.