Many of yesterday’s talks were littered with the acronyms you’d expect from this high-spirited group of panelists: YC, FTC, AI, LLMs. But the thread running through the conversations was support for open source AI.
It was a stark turn (or return, if you’re a Linux fan) from the app-obsessed 2010s, when developers seemed happy to containerize their technologies and hand them off to larger platforms for distribution.
The event also took place just two days after Meta CEO Mark Zuckerberg declared that “open source AI is the way forward” and released Llama 3.1, the latest version of Meta’s open source AI model. As Zuckerberg said in his announcement, some technologists no longer want to be “constrained by what Apple lets us build” or face arbitrary rules and app fees.
Open source is, notably, not the approach OpenAI takes with its most important GPTs, despite what the multi-billion-dollar startup’s name might suggest. That means at least some of the code is kept private, and OpenAI does not share the “weights,” or parameters, of its most powerful AI systems. It also charges for enterprise-level access to its technology.
“With the rise of composite AI systems and agent architectures, using small but fine-tuned open source models delivers significantly better results than a GPT-4 (OpenAI) or a Gemini (Google). This is especially true for enterprise tasks,” says Ali Golshan, co-founder and CEO of Gretel.ai, a synthetic data company. (Golshan was not at the YC event.)
“I don’t think it’s OpenAI versus the world or anything like that,” said Dave Yen, who runs a fund called Orange Collective for successful YC alumni to support promising YC founders. “I think it’s about creating fair competition and an environment where startups aren’t at risk of dying the next day if OpenAI changes its pricing models or policies.”
“That’s not to say we shouldn’t have safeguards,” Yen added, “but we also don’t want to unnecessarily rate-limit the industry.”
Open source AI models carry some inherent risks that more cautious technologists have warned about. The most obvious is that the technology is open and free: people with bad intentions are more likely to use these tools to cause harm than they are a private, expensive AI model. Researchers have pointed out that it is cheap and easy for bad actors to strip away any safety guardrails present in these models.
“Open source” is also something of a misnomer for some AI models, as WIRED’s Will Knight reported. The data used to train them may still be kept secret, their licenses may restrict developers from building certain things, and they may ultimately continue to benefit the creator of the original model more than anyone else.
And some politicians have opposed the unfettered development of large-scale AI systems, including California state Sen. Scott Wiener. Wiener’s AI safety and innovation bill, SB 1047, has been controversial in tech circles. It would set standards for developers of AI models that cost more than $100 million to train, require certain levels of pre-deployment safety testing and red-teaming, protect whistleblowers working in AI labs, and give the state attorney general legal recourse if an AI model causes extreme harm.
Wiener himself spoke at the YC event on Thursday, in a conversation moderated by Bloomberg reporter Shirin Ghaffary. He said he was “deeply grateful” to people in the open source community who have spoken out against the bill, and that the state has “made a number of amendments in direct response to some of that critical feedback.” One change, Wiener said, is that the bill now more clearly defines a reasonable path to shutting down an open source AI model that has gone off the rails.
The celebrity speaker at Thursday’s event, a last-minute addition to the program, was Andrew Ng, co-founder of Coursera, founder of Google Brain and former chief scientist at Baidu. Ng, like many other attendees, spoke in defense of open source models.
“This is one of those moments where we decide whether we’re going to let entrepreneurs continue to innovate,” Ng said, “or whether we should be spending the money that would otherwise go toward building software on hiring lawyers.”