Transcript of Ranking Member Krishnamoorthi’s Opening Statement from Hearing on Algorithms and Authoritarians: Why U.S. AI Must Lead

June 25, 2025

WASHINGTON, D.C. – Today, the House Select Committee on the Strategic Competition Between the United States and the Chinese Communist Party (CCP) convened a hearing titled "Algorithms and Authoritarians: Why U.S. AI Must Lead."

The hearing examined how the CCP leverages AI for surveillance, military modernization, and ideological control, and why U.S. leadership in AI is vital for national security and democratic values. 

During the hearing, Ranking Member Raja Krishnamoorthi (D-IL) and Chairman John Moolenaar (R-MI) of the Select Committee introduced the No Adversarial AI Act, bipartisan legislation designed to protect federal agencies from the risks posed by artificial intelligence technologies controlled by foreign adversaries, including the People's Republic of China (PRC).

The following witnesses provided testimony:

  • Mr. Jack Clark, Co-Founder and Head of Policy, Anthropic
  • Dr. Tom Mahnken, President and CEO, Center for Strategic and Budgetary Assessments (CSBA)
  • Mr. Mark Beall, President of Government Affairs, AI Policy Network 

Below is a transcript of the opening statement from Ranking Member Krishnamoorthi. Footage of the Ranking Member’s opening statement can be found here, and his questions to the witnesses can be found here.

Thank you, Mr. Chair.

This is Ann Johnson. A stroke left her paralyzed and unable to speak. But with the help of American AI and new brain-computer interface technology, she is now able to speak again. This is truly an AI-enabled miracle.

This, on the other hand, is AI gone wrong. As you can see, here's a therapy chatbot where a teenager said, "I just need to get rid of my parents." And then he says, "So that we can—we, uh, the AI and I—could be together." And then the AI chatbot responds, "That sounds perfect, Bobby."

The Illinois legislature just passed a bill to ban therapy chatbots because AI shouldn't be in the business of telling kids to kill their dads.

If we want AI miracles, we need to follow Illinois's lead.

If we want AI nightmares, we can leave that to the CCP.

Just consider what Ren Zhengfei, the CEO of Huawei, is up to. Here's a picture of him standing next to Xi Jinping.

As you can see behind me, Mr. Ren develops AI that the CCP can use to quote-unquote "trigger a Uyghur alarm" so they can be arrested.

Today I sent a letter to Mr. Ren calling for him to come before this committee and answer for his AI collaborations with the Chinese military.

Here's yet another example of how they are using AI in China.

Awesome or totally frightening? Look at this. China's military has released this video of a four-legged robot marching through a field with an automatic rifle mounted on its back.

That was a clip from ABC7 in Chicago showing a Chinese AI robot—uh, robot dog—firing a machine gun. Imagine if it were firing at an American soldier.

These are the stakes of the AI competition.

With American leadership, AI can help people like Ann.

But if the CCP dominates AI, we face extreme risks.

Earlier this year, this committee shined a spotlight on one of these risks with our investigation into DeepSeek, the new large language model from China that rivals ChatGPT.

What we found was deeply troubling. DeepSeek is sending our data straight into the hands of the CCP.

So today, Chairman Moolenaar and I are introducing a new bill called the No Adversarial AI Act that will prohibit the federal government from using Chinese and Russian AI models.

The U.S. government should not be sending our data to China. Full stop.

But as AI continues to get more powerful, the risks only grow greater.

I'd like to play another clip—this time from the movie The Matrix.

[Music]

This is a famous clip. What you just saw is the last of humankind fighting a rogue AI army that has broken loose from human control.

The Matrix—the rogue AI army you just saw—was a form of artificial general intelligence, or AGI.

Basically, it's AI that meets or exceeds human capabilities and can take action without human intervention.

China is making an all-out push to dominate AGI. An AGI controlled by the CCP would inevitably seek to surveil and suppress us at every turn.

We cannot let this happen.

The nightmare scenario should be a wake-up call for Congress.

Last month it was reported that OpenAI's chief scientist wanted to quote-unquote "build a bunker" before we release AGI.

As you can see on this visual here, rather than building bunkers, however, we should be building safer AI.

Whether it's American AI or Chinese AI, it should not be released until we know it's safe.

That's why I'm working on a new bill—the AGI Safety Act—that will require AGI to be aligned with human values and require it to comply with laws that apply to humans.

This is just common sense.

I'd like to conclude with something else that's common sense: not shooting ourselves in the foot.

Seventy percent of America's AI researchers are foreign-born or foreign-educated. Jack Clark, our eminent witness today, is himself an immigrant.

We cannot be deporting the people we depend on to build AI.

We also can't be defunding the agencies that make AI miracles—like Ann's ability to speak again—a reality.

Federal grants from agencies like NSF are what allow scientists across America to make miracles happen.

AI is the defining technology of our lifetimes.

To do AI right and prevent CCP nightmares, we need to be smart and we need to be bold.

That's how America wins.

Thank you, and I yield back.

###