As the AI world gathers in Seoul, can an accelerating industry balance progress with safety?

This week, artificial intelligence caught up with the future, or at least Hollywood’s idea of it a decade ago.

“It looks like AI from the movies,” wrote the CEO of OpenAI, Sam Altman, of his latest system, an impressive virtual assistant. To underscore his point, he posted a single word on X – “her” – in reference to the 2013 film of that name, starring Joaquin Phoenix as a man who falls in love with a futuristic version of Siri or Alexa, voiced by Scarlett Johansson.

For some experts, that new AI, GPT-4o, will be a troubling reminder of their concerns about the technology’s rapid advances; a key OpenAI safety researcher left the company this week in disagreement over its direction. For others, the launch of GPT-4o will be confirmation that innovation continues in a field that promises benefits for everyone. Next week’s global AI summit in Seoul, attended by ministers, experts and technology executives, will hear both perspectives, as underlined by a safety report released ahead of the meeting that set out potential benefits as well as numerous risks.

The inaugural AI Safety Summit, held last year at Bletchley Park in the UK, announced an international testing framework for AI models, after some experts and industry professionals had called for a six-month pause in the development of powerful AI systems.

There hasn’t been any pause. The Bletchley statement, signed by the UK, US, EU, China and others, praised the “huge global opportunities” of AI but also warned of its potential to cause “catastrophic” harm. The summit also secured commitments from big tech companies, including OpenAI, Google and Mark Zuckerberg’s Meta, to cooperate with governments on testing their models before they are released.

OpenAI launched GPT-4o free online. Photograph: Anadolu/Getty Images

While the UK and US have established national AI safety institutes, AI development in industry has continued apace. Big tech companies and others have recently announced new AI products: OpenAI released GPT-4o (the o stands for “omni”) free online; a day later, Google previewed a new AI assistant called Project Astra, as well as updates to its Gemini model. Last month, Meta released new versions of its own AI model, Llama, which it continues to offer “open source”, meaning the models are freely available to use and adapt; and in March, the AI startup Anthropic, formed by former OpenAI employees who disagreed with Altman’s approach, updated its Claude model and took the lead in capability.

Dan Ives, an analyst at the US stockbroker Wedbush Securities, estimates that the spending boom on generative AI (the general term for the latest method of building intelligent systems) will reach $100bn (£79bn) this year, part of a $1tn spend over the next decade.

More flagship developments are coming: OpenAI is working on its next model, GPT-5, as well as a search engine; Google is preparing to launch Astra and is rolling out AI-generated search summaries outside the US; Microsoft is reportedly working on its own AI model and has hired the British entrepreneur Mustafa Suleyman to oversee a new AI division; Apple is reportedly in talks with OpenAI to install ChatGPT on its smartphones; and billions of dollars are being invested in AI at technology companies of all sizes.

Hardware startups such as Humane and Rabbit are racing to build an AI-powered smartphone replacement, while others are experimenting with how much of a person’s life can be used to teach an AI. The US startup Rewind is marketing a product that records every action you take on your computer screen, training an artificial intelligence system to learn about your life in great detail. Coming soon is a microphone and camera worn on your lapel, so the system can learn even from what happens when you’re offline.

Meta released new versions of its AI model, Llama, last month. Photograph: SOPA Images/LightRocket/Getty Images

Niamh Burns, senior analyst at Enders Analysis, says there will be a stream of new products as companies, backed by multimillion-pound investments, try to win over consumers. “We’re going to keep seeing these flashy launches, because the technology is new and exciting, and because the real consumer use case has yet to land. New models and even new interfaces – in short, things built around the models – will keep being released until something sticks from a user perspective,” she says.

Rowan Curran, an analyst at research firm Forrester, says the six months since Bletchley have already seen significant changes, including the emergence of so-called “multimodal” models such as GPT-4 and Gemini, meaning they can handle a variety of formats such as text, image and audio. The GPT model that went public in 2022, for example, could only handle text.

“It’s really opened up possibilities for AI,” Curran says. “While we have already seen some of these models, I expect many more to emerge.”

Other recent developments cited by Curran include the emergence of video generation models such as OpenAI’s Sora, which has not been publicly released but whose demonstrations were enough to persuade the film and TV mogul Tyler Perry to halt an $800m studio expansion. Then there’s retrieval-augmented generation, or RAG, a technique for giving a generalist AI a specialism: turning a video generator like Sora into an anime specialist, for example, or teaching the Stable Diffusion image generator to paint like Picasso, or making a chatbot an expert in scientific papers.
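To make the idea concrete, here is a minimal sketch of the RAG recipe in Python. It is illustrative only: the word-overlap scorer and the prompt template are toy stand-ins for the embedding model and language model a real system would use.

```python
# A minimal, illustrative RAG pipeline: retrieve relevant documents,
# then stuff them into the prompt so the model answers from them.

def score(query: str, doc: str) -> int:
    # Toy relevance score: count the words the query and document share.
    # A production system would compare embedding vectors instead.
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Return the k documents most relevant to the query.
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    # Prepend the retrieved passages so a general-purpose model answers
    # from the specialist material rather than from memory alone.
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

papers = [
    "Self-attention lets transformers weigh every token against every other.",
    "Diffusion models generate images by iteratively reversing added noise.",
    "Saturday's league fixtures ended in a string of goalless draws.",
]

# The resulting prompt would be sent to a chatbot, which now "specialises"
# in whatever corpus it retrieved from.
print(build_prompt("How do diffusion models generate images", papers))
```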

Some already see a market that will be dominated by a handful of wealthy companies able to afford the enormous energy and data-processing costs of building and operating AI models. Potential competitors are also being brought under those companies’ wing, worrying competition authorities in the UK, US and EU. Microsoft, for example, backs OpenAI and France’s Mistral, while Amazon has invested heavily in Anthropic.


Still image from a film made with OpenAI’s Sora video generation model. Photograph: openai.com/sora

“The GenAI market is feverish,” says Andrew Rogoyski, director of the Institute for People-Centred AI at the University of Surrey. “It is so expensive to develop large language models that only the largest companies, or companies with extraordinarily generous investors, can do it.”

Meanwhile, some experts believe that in the rush, safety is not being given the priority it should. “Governments and safety institutes say they plan to regulate and companies say they are also concerned,” says Dame Wendy Hall, professor of computer science at the University of Southampton and a member of the UN’s advisory body on AI. “But progress is slow because companies have to react to market forces.”

Google and OpenAI published safety statements alongside this week’s announcements, with Google referring to making its models “more accurate, reliable and secure” and OpenAI detailing how GPT-4o has safety “built in by design”. However, on Friday Jan Leike, a key safety researcher at OpenAI who had resigned earlier in the week, warned that “safety culture and processes have taken a backseat to shiny products” at the company. In response, Altman wrote on X that OpenAI was “committed” to doing more on safety.

The UK government will not confirm which models are being tested by its newly created AI Safety Institute, but the Department for Science, Innovation and Technology said it would continue to “work closely with companies to deliver on the agreements reached in the Bletchley statement”.

‘Multimodal’ AI models like Gemini and GPT-4 can handle a variety of formats such as text, image and audio. Photograph: Michael M. Santiago/Getty Images

The biggest changes are yet to come. “The last 12 months of AI progress were the slowest they’ll be for the foreseeable future,” the economist Samuel Hammond wrote in early May. Until now, “frontier” AI systems – the most powerful on the market – have been largely limited to handling text. Microsoft and Google have built their offerings into their office products and given them the ability to perform simple administrative tasks on request. But the next step in development is “agent” AI: systems that can actually act on the world around them, from browsing the web to writing and running code.
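In rough terms, an agent wraps a model in a loop: the model picks an action, software executes it, and the result is fed back in until the goal is met. The sketch below illustrates that loop in Python under stated assumptions – model_decide and search_web are hypothetical stubs standing in for a real language-model call and a real browsing tool.

```python
# A minimal sketch of an "agent" loop: decide, act, observe, repeat.
# The stubs below are hypothetical stand-ins, not a real product's API.

def model_decide(history: list[str]) -> tuple[str, str]:
    # Stand-in for an LLM call that returns (tool_name, argument).
    # A real agent would parse this decision from the model's output.
    if not any(line.startswith("RESULT") for line in history):
        return ("search_web", "flights to Seoul")
    return ("finish", "Found candidate flights; ready to book.")

def search_web(query: str) -> str:
    # Toy tool: a real agent would drive a browser or call an API here.
    return f"RESULT: 3 flights found for '{query}'"

TOOLS = {"search_web": search_web}

def run_agent(goal: str, max_steps: int = 5) -> str:
    # The loop that turns a chatbot into an agent: each action's result
    # is appended to the history the model sees on the next step.
    history = [f"GOAL: {goal}"]
    for _ in range(max_steps):
        tool, argument = model_decide(history)
        if tool == "finish":
            return argument
        history.append(TOOLS[tool](argument))
    return "Stopped: step budget exhausted."

print(run_agent("Plan a trip to Seoul"))
```

The step budget and the restricted tool dictionary hint at why such systems worry safety researchers: the loop itself decides what to do next, so the harness, not the model, has to enforce the limits.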

Smaller AI labs have experimented with these approaches, with mixed success, putting commercial pressure on the larger companies to give the same powers to their own AI models. By the end of the year, expect the best AI systems not only to offer to plan a vacation, but also to book flights, hotels and restaurants, manage your visa application, and prepare and lead a walking tour of your destination.

But an AI that can do anything the internet offers is also an AI with a much greater capacity to cause harm than anything before it. The meeting in Seoul could be the last chance to discuss what that means for the world before it arrives.
