
Will OpenAI’s $5 billion bet on chatbots pay off? Only if you use them

What happens if you build it and they don’t come?

It’s fair to say that the AI boom is losing its luster. Soaring valuations are starting to look shaky compared with the sky-high spending needed to sustain them. Over the weekend, a report from tech site The Information estimated that OpenAI was on track to spend a staggering $5 billion more than it generates in income this year alone:

If we are right, OpenAI, recently valued at $80bn, will need to raise more money over the next 12 months or so. We’ve based our analysis on our informed estimates of what OpenAI spends to run its ChatGPT chatbot and train future large language models, plus “ballpark estimates” of what OpenAI’s staffing would cost, based on its past projections and what we know about its hiring. Our conclusion points to why so many investors are concerned about the earnings prospects of conversational AI.

The most pessimistic version of the story is that AI (specifically, chatbots, the expensive and competitive segment of the industry that has captured the public imagination) simply isn’t as good as we’ve been told.

That argument suggests that as adoption has grown and iteration has slowed, most people have had a chance to properly use cutting-edge AI and have begun to realise that it is impressive, but perhaps not that useful. The first time you use ChatGPT it is a miracle; by the hundredth time, the flaws are all too apparent and the magic has faded into the background. ChatGPT, you decide, is bunk:

In this article, we argue against the view that when ChatGPT and the like produce false statements, they are lying or even hallucinating, and in favor of the position that the activity they are engaged in is bullshitting… Because these programs cannot possibly care about the truth, and because they are designed to produce text that looks truth-apt without any real concern for the truth, it seems appropriate to call their outputs bullshit.

Train them to come

It is estimated that AI is likely to eliminate only a handful of jobs completely. Photo: Bim/Getty Images/iStockphoto

I don’t think it’s quite that bad. Not because the systems are perfect, but because I think the pipeline to AI adoption hits a bottleneck much earlier on. For people to try chatbots, realise they’re nonsense and give up, they first need to actually try them. And that, judging by the tech industry’s response, is proving to be the biggest hurdle. Last Thursday, I reported on how Google is partnering with a network of small businesses and multi-academy trusts to bring AI into the workplace and help enhance workers’ capabilities rather than replace them. Debbie Weinstein, managing director of Google in the UK and Ireland, told me:

Part of what’s difficult about talking about it now is that we don’t really know exactly what’s going to happen. What we do know is that the first step will be to sit down (with partners) and really understand the use cases. If it’s school administrators versus people in the classroom, what are the specific tasks that we really want to accomplish for these people?

If you are a teacher, some of this might be a simple email with ideas on how to use Gemini to plan lessons, some might be formal classroom training, and some might be one-on-one coaching. There will be many different pilot projects with 1,200 people, with each group having about 100 people.

One way of looking at it is that this is simply another positive investment in the skills agenda by a big company. Google in particular has long run digital training programmes, once marketed as the company’s “digital garage”, which help to upskill Britons. More cynically, it is good business to make sure that people learning how to use new technology learn it on your tools. Britons of a certain age will vividly remember “computer science” or “ICT” classes that were thinly veiled courses in how to use Microsoft Office; those older or younger than me learned some basic computer programming. I learned how to use Microsoft Access.

In this case, there’s something deeper at play: Google doesn’t just need to train people to use AI, it also needs to run a test to determine exactly what they need to be trained on. “It’s much more about little everyday hacks to make your work life a little bit more productive and enjoyable than it is about fundamentally revising your understanding of technology,” Weinstein said. “There are tools out there today that can help you do your job a little bit easier. It’s the three minutes you save every time you write an email.

“Our goal is to make sure that everyone can benefit from technology, whether it’s from Google or from other people. And I think the generalizable idea that you would work with tools that can help you make your life more efficient seems like something that everyone can benefit from.”

Ever since ChatGPT came along, it’s been assumed that the technology speaks for itself, an assumption reinforced by the fact that, in a literal sense, it does. But chat interfaces are opaque. Even when you’re dealing with a real human being, getting the most out of them when you need their help is a skill, and it’s a much greater skill if your only way to communicate with them is a text chat.

AI chatbots aren’t people (far from it), so it’s proportionally harder even to work out how they might fit into a typical working pattern. The argument against the technology isn’t “what if there’s nothing there?” Of course there is, hallucinations and nonsense aside. Instead, it’s much simpler: what if most people just can’t be bothered to learn how to use it?

Mathsbot’s gold

Google DeepMind has successfully enabled new AI systems to tackle questions from the International Mathematical Olympiad. Photo: Pitinan Piyavatin/Alamy

Meanwhile, in another bit of Google:

Although computers were designed to do mathematical calculations faster than any human, the top level of formal mathematics remains a uniquely human domain. But a breakthrough by Google DeepMind researchers has brought AI systems closer than ever to beating the best human mathematicians at their own game.

A pair of new systems, called AlphaProof and AlphaGeometry 2, worked together to tackle questions from the International Mathematical Olympiad, a global maths competition for high school students that has been running since 1959. The Olympiad consists of six incredibly difficult questions each year, covering fields such as algebra, geometry and number theory. Winning a gold medal puts you among the best young mathematicians in the world.

The caveats: Google DeepMind’s systems “only” solved four of the six problems, and one of them was solved by a “neurosymbolic” system, which is a lot less like AI, as most people now think of it, than you might expect. All of the problems were manually translated into a programming language called Lean, which lets the systems read a formal description of each problem rather than having to parse human-readable text first. (Google DeepMind tried using an LLM to do this part too, but it wasn’t very good.)
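For a flavour of what that translation involves, here is a minimal sketch of a Lean formalisation. The theorem below is a hypothetical toy statement, far simpler than any real Olympiad problem, but it shows the shape of the input AlphaProof works from: hypotheses and a goal written as a machine-checkable statement instead of natural-language text.

    import Mathlib

    -- Hypothetical toy example (not an actual IMO problem): a statement a
    -- proof assistant can check mechanically. Real Olympiad formalisations
    -- have the same shape, just with far more intricate hypotheses and goals.
    theorem toy_sum_of_squares (a b : ℝ) : 0 ≤ a ^ 2 + b ^ 2 := by
      positivity

Once a problem is in this form, the system’s job is to search for a sequence of steps that Lean will accept as a proof; here the goal happens to be closed by a single standard tactic, “positivity”, from Lean’s Mathlib library.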

But still, it is a big step. The International Mathematical Olympiad is hard. And an AI got a medal? What happens when it gets a gold medal? Is there a radical difference between being able to solve challenges that only the best high-school mathematicians can tackle and being able to solve challenges that only the best undergraduates, then graduate students, then doctoral researchers can solve? What changes if a branch of science is automated?

The broader TechScape

Do you really need to save all your photos and texts? Composition: The Guardian/Getty Images
