Traders see Anthropic as the clear leader for the best math AI model by end of May, with its 70.5% implied probability reflecting recent Claude Opus 4.6 and 4.7 releases that deliver strong results on reasoning-heavy benchmarks such as AIME and MATH variants. These models emphasize deliberate, multi-step thinking modes that improve performance on complex problems, giving Anthropic an edge over competitors in trader assessments of current capabilities. Google’s Gemini 3 series sits at 15.5% on the strength of consistent high-school and competition math scores, while OpenAI’s GPT-5 and o3 variants hold 12.0% amid ongoing refinements to their reasoning stack. With the resolution date approaching, any new benchmark releases or model updates in the coming weeks could still shift sentiment, though Anthropic’s recently demonstrated consistency on math tasks underpins the current market consensus.
Experimental AI summary based on Polymarket data. Not trading advice, and has no bearing on this market's resolution. · Updated

Anthropic 71%
Google 16%
OpenAI 12%
xAI <1%
$120,915 Volume

Anthropic 71%
Google 16%
OpenAI 12%
xAI 1%
ByteDance 1%
Z.ai 1%
DeepSeek 1%
Meta 1%
Baidu <1%
Alibaba <1%
Moonshot <1%
Amazon <1%
Mistral <1%
Meituan <1%
Microsoft <1%
Results from the "Rank" column under the "Text Arena | Math" Leaderboard tab at https://arena.ai/leaderboard/text/math-no-style-control with style control off will be used to resolve this market.
Models will be ordered primarily by their leaderboard rank at the market’s check time. If two or more models are tied on rank, they will be ordered by their Arena score, including any underlying, unrounded, granular values reflected in the data below the leaderboard. If a tie still remains, alphabetical order of company names as listed in this market group will be used as a final tiebreaker (e.g., if the two models are tied by exact arena score, “Google” would be ranked ahead of “xAI”). This market will resolve based on the company that occupies first place under this ranking.
The resolution source for this market is the Chatbot Arena LLM Leaderboard found at https://lmarena.ai/. If this resolution source is unavailable at check time, this market will remain open until the leaderboard comes back online and will resolve based on the first check after it becomes available. If it becomes permanently unavailable, this market will resolve based on another resolution source.
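The ranking rule above (order by leaderboard rank, break ties by Arena score, then by company name alphabetically) can be sketched in a few lines. This is an illustrative reading of the rules, not the resolver's actual implementation, and the field names (`company`, `rank`, `score`) are assumptions rather than the leaderboard's real schema:

```python
# Sketch of the market's ranking rule: primary key is leaderboard rank
# (lower is better), ties broken by Arena score (higher is better),
# then by company name alphabetically as a final tiebreaker.
def resolution_order(models):
    """models: list of dicts with 'company', 'rank', and 'score' keys."""
    return sorted(
        models,
        key=lambda m: (m["rank"], -m["score"], m["company"]),
    )

# Hypothetical example mirroring the rules text: Google and xAI tied
# on both rank and exact Arena score, so alphabetical order decides.
entries = [
    {"company": "xAI", "rank": 1, "score": 1501.2},
    {"company": "Google", "rank": 1, "score": 1501.2},
    {"company": "OpenAI", "rank": 2, "score": 1498.7},
]
print(resolution_order(entries)[0]["company"])  # Google
```

Under this reading, the market resolves to whichever company sorts first, here "Google", matching the example given in the rules.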
Market opened: Apr 27, 2026, 5:49 PM ET
Resolver
0x69c47De9D...