
Trae Stephens has created artificial intelligence weapons and has worked for Donald Trump. In his opinion, Jesus would approve

Portrait of Trae Stephens

When I wrote about Anduril in 2018, the company explicitly said it would not build lethal weapons. Now you are building fighter jets, underwater drones, and other lethal weapons of war. Why did you make that decision?

We responded to what we saw, not just within our military but also around the world. We want to be aligned with delivering the best capabilities in the most ethical way possible. The alternative is that someone will build these things anyway, and we believe we can do it better.

Were there any introspective discussions before crossing that line?

There is a constant internal debate about what to build and whether there is an ethical alignment with our mission. I don’t think there is much use in trying to set our own line when the government is the one that actually sets it. They have given clear guidance on what the military is going to do. We are taking a cue from our democratically elected government to tell us what their problems are and how we can be helpful.

What is the proper role of autonomous AI in warfare?

Fortunately, the US Department of Defense has done more work on this topic than any other organization in the world, with the possible exception of the big companies building generative AI models. There are clear rules of engagement that keep humans in the loop. The goal is to get humans out of dull, dirty, and dangerous jobs and to make decision-making more efficient, while still keeping a human accountable at the end of the day. That is the aim of all the policies that have been put in place, regardless of how autonomy advances in the next five or ten years.

In a conflict, there can be a temptation not to wait for human sign-off when targets present themselves instantly, especially with weapons like autonomous fighter jets.

The autonomous program we're working on for the Fury aircraft (a fighter used by the US Navy and Marine Corps) is called CCA, Collaborative Combat Aircraft. There is a human in a plane controlling and commanding the robot fighter planes and deciding what they do.

What about the drones you’re building that stay in the air until they see a target and then launch themselves at it?

There is a classification of drones called loitering munitions, which are aircraft that seek out targets and then have the ability to attack them kinetically, like a kamikaze. Again, there is a human in the loop who is responsible.

War is chaos. Is there no genuine concern that those principles will be cast aside once hostilities begin?

Humans fight wars, and we are flawed. We make mistakes. Even when we were standing in lines shooting each other with muskets, there was a process for judging violations of the law of combat. I think that will persist. Do I think there will never be a case where an autonomous system is asked to do something that seems like a serious violation of ethical principles? Of course not, because it is still humans who are in charge. Do I think it is more ethical to prosecute a dangerous and messy conflict with robots that are more precise, more discriminating, and less likely to escalate? Yes. Deciding not to do this means continuing to put people in harm's way.

Photography: Peyton Fulford

I’m sure you’re aware of Eisenhower’s farewell address about the dangers of a self-serving military-industrial complex. Does that warning affect your approach?

That’s one of the greatest speeches of all time; I read it at least once a year. Eisenhower was describing a military-industrial complex in which the government is barely distinguishable from contractors like Lockheed Martin, Boeing, Northrop Grumman, and General Dynamics. There’s a revolving door at the top levels of these companies, and they become power centers because of that interconnectedness. Anduril has been pushing for a more commercial approach that doesn’t rely on that tightly coupled incentive structure. We say, “Let’s build things at the lowest cost, using commercially available technologies, and let’s do it in a way where we take on a lot of the risk.” That avoids some of the potential tension that Eisenhower identified.
