What better time to hold a conference on artificial intelligence and the myriad ways it is advancing science than in the brief window between the field’s first Nobel prizes being awarded and the winners heading to Stockholm for the lavish gala ceremony?
It was serendipitous timing for Google DeepMind and the Royal Society, which this week convened the AI for Science Forum in London. Last month, researchers at Google DeepMind won the Nobel prize in chemistry, a day after AI took the prize in physics. The atmosphere was celebratory.
Scientists have worked with AI for years, but the latest generation of algorithms has brought us to the brink of transformation, Demis Hassabis, CEO of Google DeepMind, said at the meeting. “If we get it right, it should be an incredible new era of discovery and a new golden age, maybe even a kind of new renaissance,” he said.
Still, there is plenty that could spoil the mood. AI “is not a magic bullet,” Hassabis said. To achieve a breakthrough, researchers must identify the right problems, collect the right data, create the right algorithms, and apply them in the right way.
Then there are the dangers. What if AI provokes a backlash, worsens inequality, creates a financial crisis, triggers a catastrophic data breach, or pushes ecosystems to the brink with its extraordinary energy demands? What happens if it falls into the wrong hands and unleashes AI-designed bioweapons?
Siddhartha Mukherjee, a cancer researcher at Columbia University in New York and author of the Pulitzer prize-winning book The Emperor of All Maladies, suspects these issues will be difficult to navigate. “I think it’s almost inevitable that, at least in my lifetime, there will be some version of an AI Fukushima,” he said, referring to the nuclear accident caused by the 2011 Japanese tsunami.
Many AI researchers are optimistic. In Nairobi, nurses are trialling AI-assisted ultrasounds for pregnant women, bypassing the need for years of training. Materiom, a London company, uses AI to formulate 100% bio-based materials, avoiding petrochemicals. AI has transformed medical imaging, climate models and weather forecasts and is learning how to contain plasmas for nuclear fusion. A virtual cell is on the horizon, a unit of life in silicon.
Hassabis and his colleague John Jumper won the Nobel for AlphaFold, a program that predicts protein structures and interactions. It is used throughout biomedical science, particularly for drug design. Now, researchers at Isomorphic Labs, a subsidiary of Google DeepMind, are strengthening the algorithm and combining it with others to speed up drug development. “We hope that one day in the near future we will reduce the time from years, maybe even decades, to design a drug to months, or maybe even weeks, and that would revolutionize the drug discovery process,” Hassabis said.
The Swiss pharmaceutical company Novartis has gone further. Beyond designing new drugs, AI accelerates recruitment for clinical trials, cutting a process that can take years down to months. Fiona Marshall, president of biomedical research at the company, said another tool helps with inquiries from regulators. “You can find out [whether they’ve asked those questions before] and then predict what’s the best answer that’s likely to give you a positive approval for your drug,” she said.
Jennifer Doudna, who shared a Nobel prize for the gene-editing tool Crispr, said AI would play “an important role” in making therapies more affordable. Regulators approved the first Crispr treatment last year, but at $2m (£1.6m) per patient, many who could benefit will miss out. Doudna, who founded the Innovative Genomics Institute in Berkeley, California, said AI-guided work in her lab also aims to create a methane-free cow by editing the microbes in the animal’s gut.
A big challenge for researchers is the black box problem: many AIs can make decisions but not explain them, making it difficult to trust the systems. But that may be about to change, Hassabis said, through the equivalent of brain scans for AI. “I think in the next five years we will emerge from this era of black boxes that we are currently in.”
The climate crisis could be AI’s biggest challenge. While Google touts AI-powered advances in forecasting floods, wildfires and heatwaves, the company, like many large technology firms, uses more energy than many countries. Today’s large models are one of the main culprits. It can take 10 gigawatt-hours of energy to train a single large language model like OpenAI’s ChatGPT, enough to power 1,000 US homes for a year.
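The arithmetic behind that comparison can be checked directly. A rough sketch, using the article’s own 10 GWh figure and an assumed average US household consumption of about 10,000 kWh per year (an approximation, not a figure from the article):

```python
# Sanity check on the training-energy comparison.
# Both figures are estimates: 10 GWh is the article's number for training
# one large language model; 10,000 kWh/year is an assumed average for a US home.
TRAINING_ENERGY_GWH = 10
US_HOME_ANNUAL_KWH = 10_000

training_energy_kwh = TRAINING_ENERGY_GWH * 1_000_000  # 1 GWh = 1,000,000 kWh
homes_powered_for_a_year = training_energy_kwh / US_HOME_ANNUAL_KWH
print(homes_powered_for_a_year)  # 1000.0
```

Under those assumptions the claim holds: 10 GWh divided across typical annual household usage comes out to roughly a thousand homes.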
“My view is that the benefits of those systems will far outweigh the energy use,” Hassabis said at the meeting, citing hopes that AI will help create new batteries, room-temperature superconductors and possibly even nuclear fusion. “I think one of these things will probably come to fruition in the next decade, and that will completely and materially change the climate situation.”
He also sees positives in Google’s energy demand. The company is committed to green energy, he said, so demand should drive investment in renewable energy and reduce costs.
Not everyone was convinced. Asmeret Asefaw Berhe, former director of the US Department of Energy’s Office of Science, said advances in AI could come at a cost, adding that nothing concerns her more than their energy demands. She called for ambitious sustainability goals. “AI companies involved in this space are investing heavily in renewable energy, and hopefully that will spur a faster transition away from fossil fuels. But is that enough?” she asked. “It actually has to lead to transformative change.”