Anthropic holds a commanding lead in market-implied odds for the strongest math-focused large language model by end of May, driven by Claude Opus 4.7 and Mythos Preview variants that have posted leading scores on FrontierMath and extended-reasoning benchmarks in recent weeks. Traders appear to weigh Anthropic’s consistent edge in multi-step mathematical problem-solving and chain-of-thought transparency over OpenAI’s GPT-5.5 Pro and Google’s Gemini 3.1 Pro, which trail on the same expert-level suites despite strong AIME and GPQA results. The narrow gap between OpenAI and Google reflects ongoing benchmark volatility and the absence of decisive new releases since early May. With resolution just two weeks away, any final model update or independent evaluation could still shift sentiment before the May 31 cutoff.
Experimental AI-generated summary based on Polymarket data. This is not trading advice and plays no role in resolving this market. · Updated

Which company has the best Math AI model end of May?
Anthropic 63%
OpenAI 18%
Google 15%
xAI 1%
ByteDance 1%
Z.ai 1%
DeepSeek 1%
Meta 1%
Baidu <1%
Alibaba <1%
Moonshot <1%
Amazon <1%
Mistral <1%
Meituan <1%
Microsoft <1%

$114,388 Vol.
Results from the "Rank" column under the "Text Arena | Math" Leaderboard tab at https://arena.ai/leaderboard/text/math-no-style-control with style control off will be used to resolve this market.
Models will be ordered primarily by their leaderboard rank at the market’s check time. If two or more models are tied on rank, they will be ordered by their Arena score, including any underlying, unrounded, granular values reflected in the data below the leaderboard. If a tie still remains, alphabetical order of company names as listed in this market group will be used as a final tiebreaker (e.g., if the two models are tied by exact arena score, “Google” would be ranked ahead of “xAI”). This market will resolve based on the company that occupies first place under this ranking.
The resolution source for this market is the Chatbot Arena LLM Leaderboard found at https://lmarena.ai/. If this resolution source is unavailable at check time, this market will remain open until the leaderboard comes back online and will resolve based on the first check after it becomes available. If it becomes permanently unavailable, this market will resolve based on another resolution source.
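The ordering rule above (leaderboard rank first, then unrounded Arena score, then alphabetical company name) can be sketched as a simple sort. This is a minimal illustration of the tiebreaker logic as described, not an official resolution tool; all model ranks, scores, and the `winning_company` helper below are hypothetical.

```python
def winning_company(models):
    """Return the company in first place under the market's ordering.

    `models` is a list of dicts with keys:
      rank    -- leaderboard rank (lower is better)
      score   -- unrounded Arena score (higher is better)
      company -- company name as listed in the market group
    """
    ordered = sorted(
        models,
        # Tuple key: ascending rank, descending score (negated),
        # then ascending company name as the final tiebreaker.
        key=lambda m: (m["rank"], -m["score"], m["company"]),
    )
    return ordered[0]["company"]


# Illustrative data: two models tied on rank, broken by Arena score.
leaderboard = [
    {"rank": 1, "score": 1401.7, "company": "Google"},
    {"rank": 1, "score": 1403.2, "company": "Anthropic"},
    {"rank": 3, "score": 1390.0, "company": "OpenAI"},
]
print(winning_company(leaderboard))  # -> Anthropic
```

Note that when both rank and exact score are tied, the tuple key falls through to the company name, reproducing the rule's example of "Google" ranking ahead of "xAI".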
Market Opened: Apr 27, 2026, 5:49 PM ET
Resolver
0x69c47De9D...