Tuesday, September 26, 2023

Investors should beware of deepfake market manipulation


Last month, something happened online that should make any investor shudder. A deepfake image of an alleged explosion near the Pentagon went viral after being shared by outlets such as Russia Today, sending US stock markets reeling.

Fortunately, US authorities quickly flooded social media with statements declaring the video to be fake – and RT issued a sheepish statement admitting that “it’s just an AI-generated image”. After that, the markets recovered.

However, the episode has created a sobering backdrop for this week’s visit by Rishi Sunak, the British Prime Minister, to Washington – and his bid for a joint US-UK initiative to address the risks of AI.

There has been rising alarm recently, both within and outside the tech sector, about the dangers of hyper-intelligent, self-directed AI. Last week, more than 350 scientists released a joint letter warning that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.

These long-term threats of extinction grab the headlines. But experts such as Geoffrey Hinton – an academic and former Google employee who is considered one of the “godfathers of AI” – think the most immediate danger we should be worried about is not machines going rogue on their own, but people abusing them.

In particular, as Hinton recently told a meeting at Cambridge University, the proliferation of AI tools could dramatically exacerbate existing cyber problems such as crime, hacking and misinformation.

There is already widespread concern in Washington that deepfakes will poison the 2024 election race. This spring it emerged that they had already influenced Venezuelan politics. And this week Ukrainian hackers broadcast a deepfake video of Vladimir Putin on some Russian television channels.

But the financial sphere is now emerging as another concern. Last month the security firm Kaspersky published an ethnographic study of the dark web, which noted “significant demand for deepfakes”, with “per-minute prices of deepfake video (ranging) from $300 to $20,000”. So far they have mostly been used for cryptocurrency scams, it says. But the deepfake Pentagon image shows how they could also move mainstream asset markets. “We may see criminals using this for deliberate (market) manipulation,” a US security official tells me.

So is there anything Sunak and US President Joe Biden can do? Not easily. The White House recently held formal discussions on transatlantic AI policy with the EU (from which Britain, as a non-EU member, was excluded). But this initiative has not yet produced a tangible pact. Both sides recognize the desperate need for cross-border AI policies, but EU authorities are keener on top-down regulatory scrutiny than Washington – and determined to keep US tech groups at bay.

So some US officials suspect it might be easier to kick off international coordination with a bilateral AI initiative with the UK, given that the British government recently released a more business-friendly policy paper. Close intelligence ties already exist through the so-called Five Eyes security pact, and the two countries control much of the Western AI ecosystem (as well as financial markets).

Various ideas have been put forward. One, pushed by Sunak, is to create a government-funded international AI research institute similar to Cern, the particle physics center. The hope is that this can develop AI safely, as well as create AI-enabled tools to combat abuse, such as misinformation.

There is also a proposal to create a global oversight body for AI similar to the International Atomic Energy Agency; Sunak would like this to be located in London. A third idea is to create a global licensing framework for the development and deployment of AI tools. This may include measures to establish “watermarks” that prove the origin of online content and identify deepfakes.

These are all very sensible ideas that can – and should – be pursued. But that probably won’t happen quickly or easily. Creating an AI-style Cern would be very expensive, and it will be difficult to win rapid international support for an IAEA-style oversight body.

And the big problem that haunts any licensing system is how to bring the broader ecosystem into the net. The technology groups that dominate advanced AI research in the west, such as Microsoft, Google and OpenAI, have indicated to the White House that they would collaborate on licensing ideas. Their business users would almost certainly fall in line as well.

However, it would be much harder to lure corporate tiddlers – and criminal groups – into a licensing net. And there is already plenty of open-source AI material that can be abused. The Pentagon deepfake, for example, appears to have been made with rudimentary tools.

So the indigestible truth is that, in the short term, the only realistic way to fight back against the risk of market manipulation is for financiers (and journalists) to do more due diligence – and for government sleuths to pursue cybercriminals. If Sunak and Biden’s rhetoric this week helps raise public awareness of this, it would be a good thing. But no one should be fooled into thinking that awareness alone will eliminate the threat. Caveat emptor.
