Anthropic's Claude Mythos Preview, released in early April 2026, carries a 68.5% implied probability of being the top math AI model by end of May. Per May 12 updates from BenchLM and LLM-Stats, it tops leaderboards such as MATH-500 (94.9%) and BRUMO, outpacing rivals through superior reasoning on competition-level problems like AIME and HMMT. OpenAI's GPT-5.4 series trails at 20% despite strong FrontierMath (38%+) and AIME (95%+) scores, reflecting trader caution amid Anthropic's momentum. Google's Gemini 3.1 Pro holds 10% on Deep Think gains in scientific math but lacks a decisive edge. With Google I/O looming and a potential OpenAI GPT-5.5 release, markets await catalysts that could erode Anthropic's lead before resolution.
Experimental summary generated by AI from Polymarket data. This is not trading advice and does not affect how this market resolves. · Updated
Which company has the best Math AI model end of May?
Anthropic 66%
OpenAI 21%
Google 11%
ByteDance 1%
DeepSeek 1%
Meta 1%
xAI 1%
Baidu <1%
Moonshot <1%
Z.ai <1%
Alibaba <1%
Amazon <1%
Mistral <1%
Meituan <1%
Microsoft <1%

Volume: $104,668
Results from the "Rank" column under the "Text Arena | Math" Leaderboard tab at https://arena.ai/leaderboard/text/math-no-style-control with style control off will be used to resolve this market.
Models will be ordered primarily by their leaderboard rank at the market’s check time. If two or more models are tied on rank, they will be ordered by their Arena score, including any underlying, unrounded, granular values reflected in the data below the leaderboard. If a tie still remains, alphabetical order of company names as listed in this market group will be used as a final tiebreaker (e.g., if the two models are tied by exact arena score, “Google” would be ranked ahead of “xAI”). This market will resolve based on the company that occupies first place under this ranking.
The resolution source for this market is the Chatbot Arena LLM Leaderboard found at https://lmarena.ai/. If this resolution source is unavailable at check time, this market will remain open until the leaderboard comes back online and will resolve based on the first check after it becomes available. If it becomes permanently unavailable, this market will resolve based on another resolution source.
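The ordering rules above (leaderboard rank first, then Arena score with higher scores winning, then company name alphabetically) can be sketched as a simple sort. This is an illustrative sketch only: the field names and sample data below are assumptions, not taken from the actual leaderboard.

```python
# Hypothetical sketch of the market's tiebreaker ordering.
# Field names ("rank", "score", "company") and the sample data
# are illustrative assumptions, not the real leaderboard schema.

def resolution_order(models):
    """Order models by leaderboard rank (ascending), then Arena
    score (descending), then company name (alphabetical)."""
    return sorted(
        models,
        key=lambda m: (m["rank"], -m["score"], m["company"]),
    )

# Example: two models tied on both rank and exact Arena score fall
# through to the alphabetical tiebreaker, as in the rules' example.
models = [
    {"company": "xAI", "rank": 1, "score": 1432.7},
    {"company": "Google", "rank": 1, "score": 1432.7},
    {"company": "OpenAI", "rank": 2, "score": 1431.9},
]

winner = resolution_order(models)[0]["company"]
# Google beats xAI alphabetically at equal rank and score.
```

Note that the score is negated in the sort key so that a single ascending sort handles both the ascending rank and the descending score.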
Market opened: Apr 27, 2026, 5:49 PM ET
Resolver
0x69c47De9D...
Do not trust external links.