Which company has the best Math AI model end of May?

May 31

Anthropic 71%

Google 16%

OpenAI 13%

xAI <1%

Polymarket

$120,915 Vol.


Outcome      Probability   Volume
-----------  -----------   -------
Anthropic    71%           $19,762
Google       16%           $11,581
OpenAI       13%           $19,317
xAI          1%            $9,193
ByteDance    1%            $7,209
Z.ai         1%            $7,004
DeepSeek     1%            $6,880
Meta         1%            $6,837
Baidu        <1%           $7,254
Alibaba      <1%           $7,515
Moonshot     <1%           $8,279
Amazon       <1%           $2,116
Mistral      <1%           $4,546
Meituan      <1%           $1,909
Microsoft    <1%           $1,512

Traders see Anthropic as the clear leader for the best math AI model by the end of May, with its 70.5% implied probability reflecting recent Claude Opus 4.6 and 4.7 releases that deliver strong results on reasoning-heavy benchmarks such as AIME and MATH variants. These models emphasize deliberate, multi-step thinking modes that improve performance on complex problems, giving Anthropic an edge over competitors in trader assessments of current capabilities.

Google’s Gemini 3 series sits at 15.5% on the strength of consistent high-school and competition math scores, while OpenAI’s GPT-5 and o3 variants hold 12.0% amid ongoing refinements to their reasoning stack. With the resolution date approaching, any new benchmark releases or model updates in the coming weeks could still shift sentiment, though Anthropic’s recently demonstrated consistency on math tasks underpins the current market consensus.

Rules

This market will resolve according to the company that owns the model that has the highest arena rank based on the Chatbot Arena LLM Leaderboard (https://lmarena.ai/) when the table under the "Leaderboard" tab for "Math" is checked on May 31, 2026, 12:00 PM ET.

Results from the "Rank" column under the "Text Arena | Math" Leaderboard tab at https://arena.ai/leaderboard/text/math-no-style-control with style control off will be used to resolve this market.

Models will be ordered primarily by their leaderboard rank at the market’s check time. If two or more models are tied on rank, they will be ordered by their Arena score, including any underlying, unrounded, granular values reflected in the data below the leaderboard. If a tie still remains, alphabetical order of company names as listed in this market group will be used as a final tiebreaker (e.g., if the two models are tied by exact arena score, “Google” would be ranked ahead of “xAI”). This market will resolve based on the company that occupies first place under this ranking.
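The tie-breaking order above (leaderboard rank first, then Arena score, then alphabetical company name) amounts to a three-level sort key. A minimal sketch of that ordering follows; the field names and sample scores are hypothetical illustrations, not the leaderboard's actual schema or data:

```python
# Resolution ordering sketch: rank ascending, Arena score descending,
# company name ascending as the final tiebreaker.
# Field names and sample values are made up for illustration.
models = [
    {"company": "xAI", "rank": 1, "score": 1402.7},
    {"company": "Google", "rank": 1, "score": 1402.7},
    {"company": "Anthropic", "rank": 1, "score": 1405.3},
]

# Negating the score makes the default ascending sort put the
# higher Arena score first; ties fall through to the company name.
ordered = sorted(models, key=lambda m: (m["rank"], -m["score"], m["company"]))
winner = ordered[0]["company"]  # company in first place under this ranking
```

With these sample values, Anthropic wins on score, and Google is ordered ahead of xAI because they are tied on both rank and score, leaving the alphabetical tiebreaker to decide.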

The resolution source for this market is the Chatbot Arena LLM Leaderboard found at https://lmarena.ai/. If this resolution source is unavailable at check time, this market will remain open until the leaderboard comes back online and will resolve based on the first check after it becomes available. If it becomes permanently unavailable, this market will resolve based on another resolution source.
Volume: $120,915
End Date: May 31, 2026
Market Opened: Apr 27, 2026, 5:49 PM ET


Frequently Asked Questions

"Which company has the best Math AI model end of May?" is a prediction market on Polymarket with 15 possible outcomes where traders buy and sell shares based on what they believe will happen. The current leading outcome is "Anthropic" at 71%, followed by "Google" at 16%. Prices reflect real-time crowd-sourced probabilities. For example, a share priced at 71¢ implies that the market collectively assigns a 71% chance to that outcome. These odds shift continuously as traders react to new developments and information. Shares in the correct outcome are redeemable for $1 each upon market resolution.

As of today, "Which company has the best Math AI model end of May?" has generated $120.9K in total trading volume since the market launched on Apr 27, 2026. This level of trading activity reflects strong engagement from the Polymarket community and helps ensure that the current odds are informed by a deep pool of market participants. You can track live price movements and trade on any outcome directly on this page.

To trade on "Which company has the best Math AI model end of May?," browse the 15 available outcomes listed on this page. Each outcome displays a current price representing the market's implied probability. To take a position, select the outcome you believe is most likely, choose "Yes" to trade in favor of it or "No" to trade against it, enter your amount, and click "Trade." If your chosen outcome is correct when the market resolves, your "Yes" shares pay out $1 each. If it's incorrect, they pay out $0. You can also sell your shares at any time before resolution if you want to lock in a profit or cut a loss.
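The payout mechanics described above reduce to simple arithmetic. The sketch below is a hypothetical illustration of the profit-and-loss calculation for "Yes" shares; the price and share count are made up, not live market data:

```python
# A "Yes" share bought at the market price pays $1.00 if the outcome
# is correct and $0.00 otherwise. Values here are illustrative only.
price = 0.71   # 71 cents per share, implying a ~71% probability
shares = 100
cost = shares * price              # $71.00 total outlay

payout_if_correct = shares * 1.00            # $100.00 on resolution
profit_if_correct = payout_if_correct - cost # $29.00 gain
loss_if_incorrect = -cost                    # shares pay $0; the stake is lost
```

The spread between the purchase price and the $1 redemption value is what makes a cheap long-shot share pay off many times over, while a near-certain favorite returns only a few cents per share.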

The current frontrunner for "Which company has the best Math AI model end of May?" is "Anthropic" at 71%, meaning the market assigns a 71% chance to that outcome. The next closest outcome is "Google" at 16%. These odds update in real-time as traders buy and sell shares, so they reflect the latest collective view of what's most likely to happen. Check back frequently or bookmark this page to follow how the odds shift as new information emerges.

The resolution rules for "Which company has the best Math AI model end of May?" define exactly what needs to happen for each outcome to be declared a winner — including the official data sources used to determine the result. You can review the complete resolution criteria in the "Rules" section on this page above the comments. We recommend reading the rules carefully before trading, as they specify the precise conditions, edge cases, and sources that govern how this market is settled.