Artificial Intelligence PACs Aim for Invincibility and Zero Safeguards

Photo by Hansjörg Keller / Unsplash

"There's no question that what the big tech companies are doing is very bad," stated Trump. Then why have we signed an executive order removing safeguards for these companies on AI, Mr. President? In December of 2025, Donald J. Trump signed an executive order to implement minimally burdensome policies to regulate AI. The order bars any meaningful policy to be passed without constant rejection and scrutiny from AI leaders.

Compounding this aggression, AI companies are subpoenaing AI watchdog organizations: regulatory groups that monitor, audit, and evaluate AI systems to ensure they are safe, transparent, and legally compliant.

For example, The Midas Project tracks policy changes and violations across major companies and governments.


Tyler Johnston, the Executive Director at The Midas Project, was subpoenaed by OpenAI.

Three major PACs blocking AI regulation, and their respective corporate backers, are:

  • TechNet: Alphabet, Anthropic, and OpenAI
  • Chamber of Progress: OpenAI and a16z
  • American Innovators Network: a16z and Y Combinator

According to Pew Research Center, 48% of Americans have little to no trust that their country can regulate the use of AI effectively. An additional 12% are unsure how much they trust AI regulation at all.

More people trust their own country and the EU to regulate AI than trust the U.S. or China - Pew Research Center

The numbers show that a majority of individuals have concerns about AI. They have every right to be concerned when xAI, Anthropic, Google Gemini, Grok, and others violate their own corporate AI safety policies on a semi-monthly basis, as documented by The Midas Project.

In contrast, children, a vulnerable population, offer a different take on AI.

Teens are more positive than negative about how AI will impact them. - Pew Research Center

This positivity likely stems from the way the tool spares them the discomfort of growth and helps them cheat on their assignments.

About 6 in 10 teens say students at their school use AI chatbots to cheat at least sometimes - Pew Research Center

However, at least some children agree that the loss of critical thinking skills is worth noting.

Teens' perceptions of AI, and the reasons behind them - Pew Research Center

These environmental factors and negative outcomes in schools can set coming generations up for failure in fields where critical thinking and creativity are mandatory.

Furthermore, vulnerable populations confide in AI as if it were a therapist or counselor. Children turn to AI for emotional support and advice.

More than half of teens say they have used AI chatbots for finding information, doing schoolwork - Pew Research Center