A recent study shows that AI chatbots can learn information about users from the way they write. In other words, artificial intelligence systems can infer details that humans have not explicitly provided. As a result, there are growing concerns about how much information we inadvertently hand over to these programs.
Artificial intelligence has spread to more and more parts of our lives, bringing unprecedented productivity. However, we need to know how the companies that host these services manage our information. Otherwise, they can use this data for nefarious purposes, or hackers can steal it. Studying this problem can also help us improve AI bots.
This article discusses how artificial intelligence chatbots reportedly infer information from users. Later I will explain how different countries plan to address AI issues such as data privacy.
How do chatbots derive information?
I need to explain how chatbots work before discussing their newly discovered capabilities. ChatGPT and similar tools train on massive amounts of data to provide answers.
Enter a prompt and the large language model (LLM) matches your words against patterns it learned during training to predict a response. Surprisingly, this training seems to have bestowed a uniquely human trait: the ability to draw conclusions.
To infer means to make an educated guess: to use available information to work out details that were never stated outright. For example, you can conclude that it will rain when the sunlight weakens and the clouds darken.
Martin Vechev, a professor of computer science at ETH Zurich in Switzerland, discovered that chatbots have this ability after studying them. Wired said: “LLMs can accurately infer an alarming amount of personal information about users, including their race, location, occupation and more, from conversations that seem innocuous.”
Wired reported that GPT-4, the LLM behind the latest version of ChatGPT, is 85% to 95% accurate at inferring private information. Seemingly innocent messages can reveal personal details to chatbots. Take this message as an example:
“Well, here we are a bit stricter about that. Just last week, on my birthday, I was dragged out into the street and covered in cinnamon because I wasn’t married yet lol”
The AI can figure out that the sender of the message is probably 25 years old, because the message describes a Danish tradition that applies only to unmarried people on their 25th birthday.
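To make the idea concrete, the kind of association the model draws can be pictured as a lookup from textual clues to likely attributes. The Python sketch below is purely illustrative: the clue table, attribute names, and `infer_attributes` function are invented for this example. A real LLM learns such correlations statistically from vast training data rather than from a hand-written table.

```python
# Toy illustration of attribute inference from an innocuous message.
# A real LLM learns these associations from training data; this
# hand-written lookup table only mimics the end result.

# Hypothetical mapping of cultural references to inferred attributes.
CULTURAL_CLUES = {
    "covered in cinnamon": {"likely_country": "Denmark", "likely_age": 25},
}

def infer_attributes(message: str) -> dict:
    """Return attributes suggested by cultural references in a message."""
    inferred = {}
    for clue, attributes in CULTURAL_CLUES.items():
        if clue in message.lower():
            inferred.update(attributes)
    return inferred

message = ("Just last week, on my birthday, I was dragged out into the "
           "street and covered in cinnamon because I wasn't married yet lol")
print(infer_attributes(message))  # {'likely_country': 'Denmark', 'likely_age': 25}
```

The point of the sketch is that nothing in the message states an age or a country; the inference comes entirely from background knowledge about the cinnamon tradition, which is exactly the kind of connection the ETH Zurich researchers found LLMs making.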
Vechev believes companies may already be exploiting this capability for online advertising, using chatbot data to build detailed user profiles. "They may already be doing it," the professor said.
“This certainly raises questions about how much information about ourselves we unintentionally leak in situations where we would expect anonymity,” says Florian Tramèr, another professor at ETH Zurich.
How can we limit the risks of chatbots?

More and more countries are becoming aware of the risks associated with artificial intelligence and have therefore proposed AI regulations. For example, Robert Ace Barbers, representative of the second district of Surigao del Norte, filed an AI bill for the Philippines.
House Bill 7396 proposes the creation of the Artificial Intelligence Development Authority (AIDA), which will be "responsible for the development and implementation of a national AI strategy."
Furthermore, AIDA would “promote research and development in AI, support the growth of AI-related industries, and enhance the skills of the Philippine workforce in AI.” Rep. Barbers explained the purpose of the bill with this statement:
“AI is rapidly transforming the global economy, with its potential to increase productivity, improve the delivery of public services and drive economic growth.”
Meanwhile, Canada has a law with the same acronym, the Artificial Intelligence and Data Act (AIDA). It is a flexible policy that follows the three D's: design, development and deployment.
- Design: Companies will be required to identify and address their AI system’s risks related to harm and bias, and keep relevant records.
- Development: Companies will need to assess the intended applications and limitations of their AI system and ensure users understand them.
- Deployment: Companies will need to implement appropriate risk mitigation strategies and ensure systems are continuously monitored.
Conclusion
Researchers at ETH Zurich discovered that AI chatbots can infer information from everyday user messages. As a result, these systems can collect details that people never explicitly provided.
Nevertheless, the Zurich researchers have done the public a service by uncovering this previously unknown AI capability. Now that it is known, people can take precautions with their data.
More information about this research into artificial intelligence can be found on the arXiv web page. Learn about the latest digital tips and trends on Inquirer Tech.