On Tuesday of this week, neuroscientist, founder and author Gary Marcus sat between OpenAI CEO Sam Altman and Christina Montgomery, IBM’s chief privacy and trust officer, as all three testified before the Senate Judiciary Committee for more than three hours. The senators were largely focused on Altman, both because he currently runs one of the most powerful companies in the world and because he has repeatedly asked them to help regulate his work. (Most CEOs beg Congress to leave their industries alone.)
Although Marcus has long been known in academic circles, his star has been on the rise lately thanks to his newsletter (“The Road to AI We Can Trust”), a podcast (“Humans vs. Machines”), and his well-known unease about the unchecked rise of AI. In addition to this week’s hearing, for example, he has appeared this month on Bloomberg television and in the New York Times Sunday Magazine and Wired, among other places.
Because this week’s hearing seemed truly historic – Senator Josh Hawley characterized AI as “one of the most significant technological innovations in human history,” while Senator John Kennedy was so enamored with Altman that he asked Altman to choose his own regulators – we also wanted to talk with Marcus to discuss the experience and see what he knows about what happens next.
Are you still in Washington?
I’m still in Washington. I’m meeting with legislators and their staff and various other interesting people, and trying to see if we can deliver on the kinds of things I talked about.
You taught at NYU. You have co-founded a number of AI companies, including one with famed roboticist Rodney Brooks. I interviewed Brooks onstage back in 2017, and he said at the time that he didn’t think Elon Musk really understood AI and that he thought Musk was wrong about AI being an existential threat.
I think Rod and I share skepticism about whether current AI is anything like artificial general intelligence. There are several things you need to take apart. One is whether we’re close to AGI, and the other is how dangerous the current AI we have is. I don’t think the current AI we have is an existential threat, but it is dangerous. In many ways, I think it’s a threat to democracy. That’s not a threat to humanity. It won’t destroy all people. But it’s a pretty serious risk.
Not so long ago you were feuding with Yann LeCun, Meta’s chief AI scientist. I’m not sure what that dust-up was about – the true meaning of deep learning neural networks?
So LeCun and I have debated a lot of things over many years. We had a public debate that David Chalmers, the philosopher, moderated in 2017. Since then I’ve tried to get (LeCun) to have another real debate, but he won’t. He prefers to subtweet me on Twitter and things like that, which I don’t think is the most mature way to have conversations, but because he’s an important figure, I do respond.
One thing we disagree about (at the moment) is that LeCun thinks it’s fine to use these (large language models) and that there’s no harm here. I think he is very wrong about that. There are potential threats to democracy, ranging from disinformation deliberately produced by bad actors, to accidental misinformation – such as the law professor who was accused of sexual harassment even though he didn’t commit it – (to the ability to) subtly shape people’s political beliefs based on training data that the public doesn’t even know about. It’s like social media, only more insidious. You can also use these tools to manipulate other people and probably trick them into believing anything you want. You can scale them enormously. There are definitely risks here.
You said something interesting about Sam Altman on Tuesday, telling the senators that he hadn’t told them what his biggest fear is, which you called “germane,” and redirecting them to him. What he still hasn’t said is anything to do with autonomous weapons, which I talked with him about a few years ago as one of his biggest concerns. I thought it was interesting that weapons didn’t come up.
We covered a lot of ground, but there are lots of things we didn’t get to, including enforcement, which is very important, and national security and autonomous weapons and things like that. There will be more of (these).
Did open source versus closed systems come up?
It barely came up. It’s obviously a very complicated and interesting question. It’s really not clear what the correct answer is. You want people to do independent science. You may want to have some kind of license for things that are going to be deployed on a very large scale, but come with certain risks, including security risks. It’s not clear that we want every bad actor to have access to arbitrarily powerful tools. So there are arguments for it and there are arguments against it, and probably the right answer is to allow a fair amount of open source, but also put some restrictions on what can be done and how it can be deployed.
Any specific thoughts on Meta’s strategy of letting its language model out into the world for people to tinker with?
I don’t love that (Meta’s AI technology) LLaMA is out there, to be honest. I think that was a bit careless. And, you know, that’s literally one of the genies that is out of the bottle. There was no legal infrastructure in place; they didn’t consult anyone about what they were doing, as far as I know. Maybe they did, but the decision-making process with that, or, say, Bing, is really just: a company decides we’re going to do this.
But some of the things companies decide can be harmful, whether in the near future or in the long run. So I think governments and scientists should increasingly have a role in deciding what goes out there (through a kind of FDA for AI), where, if you want to do a widespread deployment, you do a trial first. You talk about the costs and benefits. You do another trial. And finally, if we’re confident that the benefits outweigh the risks, (you do the) release at scale. But right now, any company can decide at any time to deploy something to 100 million customers and get it done without any kind of government or scientific oversight. You have to have some system where impartial authorities can go in.
Where would these impartial authorities come from? Doesn’t everyone who knows something about how these things work already work for a company?
I’m not. (Canadian computer scientist) Yoshua Bengio isn’t. There are lots of scientists who don’t work for these companies. It is a real concern, how we get enough of those auditors and how we incentivize them to do it. But there are 100,000 computer scientists with some expertise here. Not all of them work for Google or Microsoft on contract.
Would you like to play a role in this AI agency?
I am interested. I think whatever we build should be global and neutral, presumably nonprofit, and I think I have a good, neutral voice here that I would like to share and use to try to get us to a good place.
How did it feel to sit on the Senate Judiciary Committee? And do you think you will be invited back?
I wouldn’t be surprised if I were invited back, but I have no idea. I was really deeply moved by it, and I was really deeply moved to be in that room. It’s a little smaller than it looks on television, I suppose. But it felt like everyone was there to try to do the best they could for the US – for humanity. Everyone knew the weight of the moment, and by all accounts the senators brought their best game. We knew we were there for a reason, and we gave it our best shot.