Will Anthropic make a deal with the Pentagon by...?

Polymarket · $131,857 Vol.

Outcomes:
May 31: $13,305 Vol. · 14%
June 30: $23,932 Vol. · 25%

The Pentagon’s May 1 agreements with OpenAI, Google, xAI, Microsoft, Amazon Web Services, Nvidia and others for classified AI use—explicitly bypassing Anthropic—remain the dominant factor shaping trader sentiment. The exclusion stems from Anthropic’s refusal to drop contractual limits on its Claude large language models for domestic surveillance or fully autonomous weapons, prompting the Department of Defense to designate the company a supply chain risk and terminate prior contracts worth up to $200 million.

While negotiations have periodically resumed since the February–March impasse, no new deal has materialized, and rival labs have already filled the gap with unrestricted “any lawful use” terms. Traders are watching for any shift in Anthropic’s safety stance or Pentagon policy changes ahead of near-term resolution deadlines, though recent reporting shows little movement toward compromise.

In February 2026, the Pentagon announced it would designate Anthropic as a national security supply chain risk after Anthropic refused to remove AI safety restrictions from its acceptable use policy. Donald Trump subsequently directed all federal agencies to cease using Anthropic's technologies, with a six-month phase-out period for agencies such as the Department of Defense which are actively using Anthropic's products.

This market will resolve to “Yes” if Anthropic and the United States Department of Defense (DOD/Department of War) reach any commercial agreement to allow for the use of Claude or other Anthropic artificial intelligence models by DOD employees by May 31, 2026, 11:59 PM ET. Otherwise, this market will resolve to “No”.

A commercial agreement between Anthropic and a broader set of the US government that grants usage of Anthropic AI models to DOD employees will count. However, agreements or designations which allow Anthropic to offer its services to the DOD, but do not constitute an effective agreement for Anthropic to do so, will not count (e.g. the inclusion of Anthropic on a Master Service Agreement or Indefinite Delivery Indefinite Quantity contract would not count).

An official announcement of a qualifying agreement, made within this market’s timeframe, will count, regardless of whether or when the agreement actually goes into effect.

Official announcements that the previously agreed contract between Anthropic and the DOD will be fully or partially reinstated, or otherwise will continue without impediment, will count, so long as this includes extended use of Anthropic AI models by DOD employees beyond any designated phase-out period.

Continued use of Anthropic technologies by DOD employees without a qualifying agreement (e.g. during a six-month phase-out period) will not count. A court ruling that the designation of Anthropic as a supply chain risk is unlawful will not qualify for a “Yes” resolution unless it is accompanied by a reinstatement of Anthropic's DOD contract or a new qualifying Anthropic-DOD agreement.

The primary resolution sources for this market will be official information from Anthropic and the United States federal government; however, a consensus of credible reporting will also be used.
Volume: $131,857
End date: Jun 30, 2026
Market opened: Apr 27, 2026, 11:41 AM ET

Be careful with external links.

Frequently asked questions

"Will Anthropic make a deal with the Pentagon by...?" is a prediction market on Polymarket with 3 possible outcomes, where traders buy and sell shares based on what they believe will happen. The currently leading outcome is "June 30" at 25%, followed by "May 31" at 14%. Prices reflect real-time aggregated probabilities. For example, a share priced at 25¢ implies the market collectively assigns a 25% probability to that outcome. These odds change continuously as traders react to new developments and information. Shares in the correct outcome can be redeemed for $1 each when the market resolves.
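The price-to-probability mapping and the $1 redemption described above can be sketched as follows (an illustrative sketch; the function names are our own, not part of any Polymarket API):

```python
def implied_probability(price_cents: int) -> float:
    """A share's price in cents maps directly to the market's implied probability."""
    return price_cents / 100.0

def redemption_value(shares: int, outcome_correct: bool) -> int:
    """Shares in the correct outcome redeem for $1 (100c) each; all others for $0."""
    return shares * (100 if outcome_correct else 0)

# A "June 30" share trading at 25c implies a 25% market-assigned probability.
print(implied_probability(25))        # 0.25
# 100 shares held in the winning outcome redeem for 10000c, i.e. $100.
print(redemption_value(100, True))    # 10000
```

Prices are kept in integer cents here to avoid floating-point rounding; only the probability is expressed as a float.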

To date, "Will Anthropic make a deal with the Pentagon by...?" has generated $131.9K in total trading volume since the market launched on Mar 6, 2026. This level of trading activity reflects strong engagement from the Polymarket community and helps ensure that current odds are informed by a broad pool of market participants. You can follow price movements in real time and trade on any outcome directly on this page.

To trade "Will Anthropic make a deal with the Pentagon by...?", browse the 3 available outcomes listed on this page. Each outcome shows a current price representing the market's implied probability. To take a position, select the outcome you consider most likely, choose "Yes" to trade in its favor or "No" to trade against it, enter your amount, and click "Trade". If your chosen outcome is correct when the market resolves, your "Yes" shares pay $1 each. If it is wrong, they pay $0. You can also sell your shares at any time before resolution if you want to lock in a profit or limit a loss.
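The payoff logic above, including selling before resolution to lock in a profit or limit a loss, works out to simple arithmetic. A minimal sketch, with prices in integer cents and hypothetical helper names:

```python
def pnl_hold_to_resolution(buy_price_cents: int, shares: int, won: bool) -> int:
    """P&L in cents if held to resolution: winning shares settle at 100c, losers at 0c."""
    settle_cents = 100 if won else 0
    return shares * (settle_cents - buy_price_cents)

def pnl_sell_early(buy_price_cents: int, sell_price_cents: int, shares: int) -> int:
    """P&L in cents when selling before resolution: the price difference per share."""
    return shares * (sell_price_cents - buy_price_cents)

# Buy 100 "Yes" shares at 14c and the outcome occurs: 100 * (100 - 14) = 8600c, i.e. $86.
print(pnl_hold_to_resolution(14, 100, True))   # 8600
# The outcome does not occur: the full 1400c stake is lost.
print(pnl_hold_to_resolution(14, 100, False))  # -1400
# Sell the same position early at 25c instead: 100 * (25 - 14) = 1100c, i.e. $11.
print(pnl_sell_early(14, 25, 100))             # 1100
```

Selling early caps both the upside and the downside at the current market price, which is why the same position can be worth $86 at resolution but only $11 if sold at 25¢.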

The current favorite for "Will Anthropic make a deal with the Pentagon by...?" is "June 30" at 25%, meaning the market assigns a 25% probability to that outcome. The next closest outcome is "May 31" at 14%. These odds update in real time as traders buy and sell shares, so they reflect the latest collective view of what is most likely to happen. Check back frequently or bookmark this page to follow how the odds shift as new information emerges.

The resolution rules for "Will Anthropic make a deal with the Pentagon by...?" define exactly what must happen for each outcome to be declared the winner, including the official data sources used to determine the result. You can review the full resolution criteria in the "Rules" section of this page above the comments. We recommend reading the rules carefully before trading, as they specify the precise conditions, edge cases, and sources that govern how this market resolves.