Does the US pass an AI safety bill before 2027?


Yes

28% chance
Polymarket

$98,173 Vol.



This market will resolve to "Yes" if a bill that includes at least one of the following provisions is signed into federal law in the United States by December 31, 2026, 11:59 PM ET.

- Prohibition on Creation or Release: Forbids the creation or release of specific AI systems or models.

- Training Restrictions: Sets limits on how AI systems can be trained, such as restricting access to previously available training data or imposing a maximum limit on the number of parameters used for training.

- Usage Restrictions: Prevents AI systems from being used in certain applications, such as interacting with customers, interfacing with other applications, or performing actions on the web.

- Human-in-the-Loop Requirements: Requires AI systems to include mechanisms ensuring human oversight or involvement in their operation.

Otherwise this market will resolve to "No".

The resolution source will be official U.S. federal government sources (e.g., Congress.gov); however, a consensus of credible reporting may also be used.
Volume
$98,173
End date
Dec 31, 2026
Market opened
Nov 12, 2025, 5:08 PM ET
Trader consensus puts the odds that no comprehensive U.S. AI safety bill, one mandating risk assessments or transparency for frontier models, is enacted before 2027 at 72.5%, reflecting stalled federal progress amid the 119th Congress's narrow focus on targeted measures. The White House's March 2026 National AI Legislative Framework prioritized innovation-friendly policies, child-safety guardrails, and state preemption over stringent safety mandates, signaling an executive preference for light-touch regulation. Recent bipartisan advances, such as the Senate Judiciary Committee's unanimous April 30 vote to advance the GUARD Act (banning AI companions for minors and requiring age verification) and House appropriations funding for AI security research on May 13, amount to incremental committee action without floor votes or passage. With the 2026 midterms looming, divided priorities and the historical difficulty of passing tech legislation underpin the low enactment odds.


Beware of external links.

Frequently asked questions

"Does the US pass an AI safety bill before 2027?" is a prediction market on Polymarket with 2 possible outcomes, where traders buy and sell shares based on what they think will happen. The current leading outcome is "Does the US pass an AI safety law before 2027?" at 28%. Prices reflect real-time probabilities from the community. For example, a share priced at 28¢ implies that the market collectively assigns a 28% probability to that outcome. These odds change continuously. Shares of the correct outcome are redeemable for $1 each when the market resolves.
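The price-to-probability mapping above can be sketched in a few lines. This is an illustrative sketch, not Polymarket's implementation; the function names are our own:

```python
def implied_probability(price_cents: float) -> float:
    """A Yes share priced at p cents implies a market probability of p%."""
    return price_cents / 100.0

def redemption_value(shares: int, outcome_correct: bool) -> float:
    """At resolution, shares of the correct outcome redeem for $1 each; others for $0."""
    return shares * (1.0 if outcome_correct else 0.0)

# A Yes share trading at 28 cents implies a 28% market probability.
print(implied_probability(28))       # 0.28
# 100 shares of the correct outcome redeem for $100; incorrect shares for $0.
print(redemption_value(100, True))   # 100.0
print(redemption_value(100, False))  # 0.0
```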

To date, "Does the US pass an AI safety bill before 2027?" has generated $98.2K in total trading volume since the market launched on Nov 12, 2025. This level of activity reflects strong engagement from the Polymarket community and ensures that the current odds are driven by a broad pool of participants. You can follow live price movements and trade any outcome directly on this page.

To trade on "Does the US pass an AI safety bill before 2027?", browse the 2 outcomes available on this page. Each outcome displays a current price representing the market's implied probability. To take a position, select the outcome you consider most likely, choose "Yes" to trade in its favor or "No" to trade against it, enter your amount, and click "Trade." If your chosen outcome is correct at resolution, your "Yes" shares pay $1 each. If it is incorrect, they pay $0. You can also sell your shares before resolution.
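The payoff described above implies a simple profit-and-loss calculation for any position. A hypothetical worked example at the current 28¢ price, using integer cents to keep the arithmetic exact (function name is illustrative):

```python
def position_pnl_cents(shares: int, entry_price_cents: int, resolved_yes: bool) -> int:
    """Profit in cents for a Yes position: correct shares pay 100 cents each, incorrect pay 0."""
    cost = shares * entry_price_cents
    payout = shares * (100 if resolved_yes else 0)
    return payout - cost

# Buying 100 Yes shares at 28 cents costs $28.
# If the market resolves Yes, the payout is $100, for a $72 profit.
print(position_pnl_cents(100, 28, True))   # 7200 (cents, i.e. $72 profit)
# If it resolves No, the shares expire worthless: a $28 loss.
print(position_pnl_cents(100, 28, False))  # -2800 (cents, i.e. $28 loss)
```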

The current favorite for "Does the US pass an AI safety bill before 2027?" is "Does the US pass an AI safety law before 2027?" at 28%, meaning the market assigns a 28% probability to that outcome. These odds update in real time as traders buy and sell shares. Check back frequently or bookmark this page.

The resolution rules for "Does the US pass an AI safety bill before 2027?" define exactly what must happen for each outcome to be declared the winner, including the official data sources used to determine the result. You can review the full resolution criteria in the "Rules" section on this page, above the comments. We recommend reading the rules carefully before trading, as they spell out the exact conditions, edge cases, and sources.