Should I set up a personal AI agent to help me with my daily tasks?
—Looking for help
As a general rule, I believe that relying on any type of automation in daily life is dangerous when taken to the extreme and potentially alienating even when used in moderation, especially when it comes to personal interactions. An AI agent that organizes my to-do list and collects online links to read more? Fabulous. An AI agent that automatically messages my parents every week with a quick update on my life? Horrific.
However, the strongest argument against folding more generative AI tools into your daily routine remains the environmental impact these models have, both during training and at inference time. With all that in mind, I dug into WIRED's archive of stories published during the glorious dawn of this disaster we call the Internet to find more historical context for your question. After some searching, I came away convinced that you're probably already using AI agents every day.
"AI agents," or, God forbid, "agentic AI," is the buzzword of the moment for tech leaders trying to hype up their recent investments. But the concept of an automated assistant dedicated to completing software tasks is far from new. Much of the discourse around "software agents" in the 1990s mirrors the current conversation in Silicon Valley, where tech executives now promise an incoming flood of generative AI agents trained to perform online tasks on our behalf.
“One problem I see is that people will wonder who is responsible for an agent’s actions,” reads a WIRED interview with MIT professor Pattie Maes, originally published in 1995. “Especially things like agents taking up too much time on a machine or buying something you don’t want on your behalf. Agents will raise many interesting questions, but I am convinced that we will not be able to live without them.”
I called Maes in early January to hear how her perspective on AI agents had changed over the years. She’s as optimistic as ever about the potential of personal automation, but she’s convinced that “extremely naive” engineers don’t spend enough time addressing the complexities of human-computer interaction. In fact, she says, their recklessness could spark another AI winter.
“The way these systems are built, right now, is optimized from a technical standpoint, from an engineering standpoint,” she says. “But they are not optimized at all for human design issues.” She points to how AI agents are still easily fooled or fall back on biased assumptions, despite improvements in the underlying models. And misplaced confidence leads users to trust responses generated by AI tools when they shouldn’t.
To better understand the potential dangers of personal AI agents, let’s divide this fuzzy term into two distinct categories: agents that feed you and agents that represent you.
Agents that feed you are algorithms armed with data about your habits and tastes, sifting through vast amounts of information to surface what’s relevant to you. Sounds familiar, right? Any social media recommendation engine that fills a timeline with personalized posts, or the relentless ad tracker that shows me those mushroom gummies for the thousandth time on Instagram, could be considered a personal AI agent. In the 1990s interview, Maes mentioned another example: a news-gathering agent trained to bring her the articles she wanted. Sounds like my Google News home page.