Anthropic holds the market lead at 62.5% because its latest Claude Opus 4.7 and Mythos Preview models have posted the highest scores on May 2026 math leaderboards, including 61.7 on key reasoning tests and strong results on AIME and FrontierMath Tier 4 benchmarks. These gains stem from extended chain-of-thought training and agentic thinking modes that improve multi-step mathematical problem solving over prior releases. OpenAI’s GPT-5.5 series, released in late April, delivers competitive FrontierMath performance but trails in consistent math-specific rankings, supporting its 20.5% implied probability. Google’s Gemini 3.1 Pro leads some general reasoning metrics yet sits at 17.5% as traders await further math-focused updates before the end-of-May resolution.
Experimental AI-generated summary referencing Polymarket data. This is not trading advice and plays no role in this market's resolution. · Updated

Anthropic 63%
OpenAI 21%
Google 18%
xAI <1%
$114,284 volume
Anthropic 63%
OpenAI 21%
Google 18%
xAI 1%
ByteDance 1%
Z.ai 1%
DeepSeek 1%
Meta 1%
Baidu <1%
Alibaba <1%
Moonshot <1%
Amazon <1%
Mistral <1%
Meituan <1%
Microsoft <1%
Results from the "Rank" column under the "Text Arena | Math" Leaderboard tab at https://arena.ai/leaderboard/text/math-no-style-control with style control off will be used to resolve this market.
Models will be ordered primarily by their leaderboard rank at the market’s check time. If two or more models are tied on rank, they will be ordered by their Arena score, including any underlying, unrounded, granular values reflected in the data below the leaderboard. If a tie still remains, alphabetical order of company names as listed in this market group will be used as a final tiebreaker (e.g., if the two models are tied by exact arena score, “Google” would be ranked ahead of “xAI”). This market will resolve based on the company that occupies first place under this ranking.
The resolution source for this market is the Chatbot Arena LLM Leaderboard found at https://lmarena.ai/. If this resolution source is unavailable at check time, this market will remain open until the leaderboard comes back online and will resolve based on the first check after it becomes available. If it becomes permanently unavailable, this market will resolve based on another resolution source.
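The ordering rule above (leaderboard rank first, then unrounded Arena score, then company name alphabetically) can be sketched as a simple keyed sort. This is an illustrative reading of the stated rules, not code from the resolver; the `Entry` fields and sample scores are hypothetical.

```python
# Hypothetical sketch of the market's tiebreak ordering: primary key is
# leaderboard rank (lower wins), ties broken by higher Arena score
# (unrounded), then by company name alphabetically. All names and
# values here are illustrative, not taken from the actual leaderboard.
from dataclasses import dataclass


@dataclass
class Entry:
    company: str
    rank: int           # "Rank" column, style control off
    arena_score: float  # underlying unrounded Arena score


def resolution_order(entries):
    # Ascending rank, descending score (negated), ascending company name.
    return sorted(entries, key=lambda e: (e.rank, -e.arena_score, e.company))


entries = [
    Entry("xAI", 1, 1451.2),
    Entry("Google", 1, 1451.2),    # exact score tie -> alphabetical: Google first
    Entry("Anthropic", 1, 1455.8),
]
winner = resolution_order(entries)[0].company  # "Anthropic"
```

With this sketch, the exact-score tie between Google and xAI resolves alphabetically in Google's favor, matching the example in the rules.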
Market opened: Apr 27, 2026, 5:49 PM ET
Resolver
0x69c47De9D...