Terrorism, home invasions, or even child sexual exploitation… According to the European Police Agency, Europol, these are all crimes that criminals may be able to commit with the help of the ChatGPT platform, which has recently risen to fame.
The European police agency, Europol, warned Monday of the risk that criminals could take advantage of the artificial intelligence platform ChatGPT to engage in fraud or commit cybercrimes.
From phishing to disinformation to malware, ill-intentioned people may be quick to exploit the rapidly evolving capabilities of chatbots, according to a new report by the European Police Agency.
Created by US startup OpenAI, the ChatGPT platform was launched in November and quickly became a hit, with users impressed by its ability to answer difficult questions clearly and accurately, write songs or code, and even pass exams.
“The potential exploitation of this type of AI system by criminals presents bleak prospects,” said Europol, based in The Hague.
Europol’s new “Innovation Lab” experimented with chatbots in general, but focused on ChatGPT because it is the most popular and widely used.
“Terrorism or home invasion”
The agency stressed that criminals may use ChatGPT to “significantly speed up the research process” in areas where they have no prior knowledge, such as drafting a script to commit fraud, or obtaining information on “how to break into a home, terrorism, cybercrime, or child sexual exploitation”.
Europol warned that the chatbot’s ability to imitate writing patterns made it highly effective in preparing “phishing” emails, and its ability to rapidly produce text made it “ideal for propaganda and disinformation purposes”.
The ChatGPT platform can also be used to write software without technical knowledge.
Although ChatGPT is subject to safeguards such as content moderation, which prevents the bot from answering questions rated harmful or biased, these can be circumvented, according to Europol.
The agency noted that artificial intelligence is still in its infancy, and expects its “capabilities to improve over time.”
The agency stressed “the utmost importance of raising awareness in this regard, in order to ensure that any potential vulnerability is discovered and addressed as soon as possible.”