Anthropic's Claude Opus 4.7 and Mythos preview have solidified trader consensus, with a 94.5% implied probability that Anthropic leads coding AI models at the end of May. The move follows record scores of 93.9% on SWE-bench Verified and 77.8% on SWE-bench Pro (surpassing OpenAI's GPT-5.4 by 20%), the April 16 release, and Code with Claude 2026 announcements of agentic features such as multi-agent orchestration and doubled usage limits via a SpaceX compute deal. These software engineering benchmark gains point to superior real-world coding ability amid surging enterprise adoption. Upsets could come from surprise launches such as OpenAI's GPT-5.5 or Google's Gemini 3.1 before resolution, though historical timelines suggest limited upside for rivals in the final weeks.
Experimental AI-generated summary referencing Polymarket data. This is not trading advice and plays no role in how this market resolves. · Updated

Anthropic 94.5%
Google 2.6%
OpenAI 1.9%
Moonshot 1.1%
$21,637 Vol.

Anthropic 95%
Google 3%
OpenAI 2%
Moonshot 1%
DeepSeek 1%
Z.ai 1%
xAI <1%
Baidu <1%
Amazon <1%
Meituan <1%
Alibaba <1%
ByteDance <1%
Meta <1%
Mistral <1%
Microsoft <1%
Results from the "Rank" column under the "Text Arena | Coding" Leaderboard tab at https://arena.ai/leaderboard/text/coding-no-style-control with style control off will be used to resolve this market.
Models will be ordered primarily by their leaderboard rank at the market’s check time. If two or more models are tied on rank, they will be ordered by their Arena score, including any underlying, unrounded, granular values reflected in the data below the leaderboard. If a tie still remains, alphabetical order of company names as listed in this market group will be used as a final tiebreaker (e.g., if the two models are tied by exact arena score, “Google” would be ranked ahead of “xAI”). This market will resolve based on the company that occupies first place under this ranking.
The resolution source for this market is the Chatbot Arena LLM Leaderboard found at https://lmarena.ai/. If this resolution source is unavailable at check time, this market will remain open until the leaderboard comes back online and will resolve based on the first check after it becomes available. If it becomes permanently unavailable, this market will resolve based on another resolution source.
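The ordering rules above (leaderboard rank first, then unrounded Arena score, then alphabetical company name as a final tiebreaker) can be sketched as a sort key. This is an illustrative sketch only; the field names are assumptions, not the leaderboard's actual schema.

```python
# Hypothetical sketch of the market's tiebreaker: order models by
# leaderboard rank, then by unrounded Arena score (descending), then
# alphabetically by company name. Field names are illustrative.
from dataclasses import dataclass

@dataclass
class Entry:
    company: str        # company name as listed in the market group
    rank: int           # "Rank" column on the leaderboard
    arena_score: float  # underlying, unrounded Arena score

def winning_company(entries: list[Entry]) -> str:
    """Return the company in first place under the market's ordering."""
    ordered = sorted(entries, key=lambda e: (e.rank, -e.arena_score, e.company))
    return ordered[0].company

# Example from the rules: two models tied on rank and exact Arena score
# resolve alphabetically, so "Google" ranks ahead of "xAI".
entries = [
    Entry("xAI", 1, 1301.4),
    Entry("Google", 1, 1301.4),
    Entry("OpenAI", 2, 1298.0),
]
print(winning_company(entries))  # Google
```

Python's `sorted` is stable and compares tuples element by element, which matches the rules' primary-then-secondary-then-final ordering.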
Market Opened: Apr 27, 2026, 5:49 PM ET
Resolver
0x69c47De9D...