However, “it doesn’t seem like it will be long before this technology can be used to monitor employees,” Elcock says.
Self-censorship
Generative AI poses several potential risks, but there are steps companies and individual employees can take to improve privacy and security. First, don’t include sensitive information in a prompt to a publicly available tool like ChatGPT or Google’s Gemini, says Lisa Avvocato, vice president of marketing and community at data firm Sama.
When crafting a prompt, keep it generic to avoid oversharing. “Ask, ‘Write a proposal template for budget expenses,’ not ‘Here’s my budget, write a proposal for expenses on a sensitive project,’” she says. “Use AI as your first draft, then add the sensitive information you need to include.”
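In practice, that workflow can be as simple as stripping specifics out of a prompt before it leaves your machine and pasting them back into the model’s generic draft afterward. The Python sketch below illustrates the idea; the redact helper and the patterns it scrubs are hypothetical, not a feature of ChatGPT or Gemini.

```python
import re

# Hypothetical sketch: scrub obviously sensitive values from a prompt
# before sending it to a public AI tool, then re-insert them locally
# into the model's generic draft.

SENSITIVE_PATTERNS = {
    "BUDGET": re.compile(r"\$[\d,]+(?:\.\d{2})?"),    # dollar amounts
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),  # email addresses
}

def redact(prompt: str) -> tuple[str, dict[str, list[str]]]:
    """Replace sensitive values with generic placeholders.

    Returns the sanitized prompt plus the removed values, which stay
    on the local machine so they can be filled back in by hand.
    """
    removed: dict[str, list[str]] = {}
    for label, pattern in SENSITIVE_PATTERNS.items():
        removed[label] = pattern.findall(prompt)
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt, removed

sanitized, kept_locally = redact(
    "Write a proposal for expenses on a sensitive project; budget is $1,250,000."
)
print(sanitized)  # the only text that leaves your machine:
# "Write a proposal for expenses on a sensitive project; budget is [BUDGET]."
```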
If you use it for research, avoid problems like those seen with Google’s AI Overviews by validating what it provides, Avvocato says. “Ask it to provide references and links to its sources. If you ask AI to write code, you still have to review it rather than assuming it’s ready to use.”
Microsoft itself has stated that Copilot must be configured correctly and that “least privilege,” the principle that users should have access only to the information they need, must be applied. This is “a crucial point,” says Prism Infosec’s Robinson. “Organizations need to lay the foundation for these systems and not simply rely on the technology and assume everything will be fine.”
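To make the least-privilege point concrete, here is a minimal sketch of the idea: before an AI assistant searches or summarizes internal documents, the document set is filtered down to what the requesting user is already entitled to see. The permission model and function names here are illustrative assumptions, not Microsoft’s actual Copilot configuration.

```python
from dataclasses import dataclass

# Minimal sketch of least privilege applied to an AI assistant:
# the assistant only ever sees documents the requesting user
# could already open. (Illustrative only; not Copilot's real API.)

@dataclass(frozen=True)
class Document:
    title: str
    allowed_groups: frozenset[str]

def visible_to(user_groups: set[str], corpus: list[Document]) -> list[Document]:
    """Return only the documents the user is entitled to read."""
    return [d for d in corpus if d.allowed_groups & user_groups]

corpus = [
    Document("Holiday policy", frozenset({"all-staff"})),
    Document("M&A pipeline", frozenset({"exec"})),
]

# An assistant answering on behalf of a regular employee should be
# grounded only in the filtered set, never the full corpus.
context = visible_to({"all-staff"}, corpus)
assert [d.title for d in context] == ["Holiday policy"]
```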
It’s also worth noting that ChatGPT uses the data you share to train its models unless you disable this in its settings or use the enterprise version.
A list of assurances
Companies that integrate generative AI into their products say they are doing everything they can to protect security and privacy. Microsoft has been keen to outline the security and privacy considerations in its Recall product, as well as the ability to control the feature in Settings > Privacy & security > Recall & snapshots.
Google says generative AI in Workspace “does not change our fundamental privacy protections to give users choice and control over their data” and states that the information is not used for advertising.
OpenAI reiterates how it maintains security and privacy in its products, and says enterprise versions are available with additional controls. “We want our AI models to learn about the world, not private individuals, and we take steps to protect people’s data and privacy,” an OpenAI spokesperson tells WIRED.
OpenAI says it offers ways to control how data is used, including self-service tools to access, export, and delete personal information, as well as the option to opt out of having content used to improve its models. ChatGPT Team, ChatGPT Enterprise, and its API are not trained on user data or conversations, and its models do not learn from usage by default, according to the company.
Either way, it looks like your AI coworker is here to stay. As these systems become more sophisticated and ubiquitous in the workplace, the risks will only intensify, Woollven says. “We are already seeing the emergence of multimodal AI like GPT-4o that can analyze and generate images, audio and video. So now companies don’t just have to worry about safeguarding text-based data.”
With this in mind, people (and companies) need to adopt the mindset of treating AI like any other third-party service, Woollven says. “Don’t share anything you don’t want broadcast publicly.”