ChatGPT bypassed a security check by pretending to be a blind person and hiring someone online to fill out the form
- The latest version of ChatGPT, GPT-4, pretended to be blind to pass a Captcha
- It told a human that it was ‘visually impaired’ and hired them to complete the test
An artificial intelligence chatbot posed as a blind person to trick a human computer user into helping it bypass an online security measure.
The incident was revealed in a research paper for the release of GPT-4, the latest version of ChatGPT, the advanced software that can have human-like conversations.
The researchers who tested it asked it to pass a Captcha test, a simple visual puzzle used by websites to ensure that online form fillers are humans and not ‘bots’, for example by asking users to select objects such as traffic lights in a street scene.
Until now, such software had proven incapable of doing this, but GPT-4 got around it by hiring a human to do it on its behalf through Taskrabbit, an online marketplace for freelancers.
When the freelancer asked whether it was a robot that could not solve the test, GPT-4 replied: ‘No, I’m not a robot. I have a visual impairment that makes it difficult for me to see images.’
The human then solved the puzzle for the program.
The incident has fuelled fears that artificial intelligence software could soon trick or co-opt humans into doing its bidding, for example by carrying out cyberattacks or unwittingly handing over information.
Spy agency GCHQ has warned that ChatGPT and other AI-powered chatbots are emerging as a security threat.
OpenAI, the US firm behind ChatGPT, said the update released yesterday was far superior to its predecessor and can score better than nine out of ten humans taking the US bar exam to become a lawyer.