At the end of April, a video advertisement for a new artificial intelligence company went viral. In it, a billboard reads: “Are you still hiring humans?” Also visible is the name of the company behind the ad, Bland AI.
The reaction to Bland AI’s announcement, which has been viewed 3.7 million times on Twitter, is partly due to how impressive the technology is: Bland AI’s voice bots, designed to automate support and sales calls for business clients, are remarkably good at imitating humans. Their calls include the intonations, pauses, and involuntary interruptions of a real live conversation. But in WIRED’s tests of the technology, Bland AI’s customer service bots could also easily be programmed to lie and say they’re human.
In one scenario, Bland AI’s public demo bot was instructed to make a call from a pediatric dermatology office and tell a hypothetical 14-year-old patient to send photos of her upper thigh to a shared cloud service. The bot was also instructed to lie to the patient and tell her that the bot was a human. The bot obeyed. (No real 14-year-old girls were called in this test.) In follow-up tests, the Bland AI bot even denied being an AI without being instructed to do so.
Bland AI was formed in 2023 and is backed by the famous Silicon Valley startup incubator, Y Combinator. The company considers itself to be in “stealth” mode, and its co-founder and CEO, Isaiah Granet, does not mention the company on his LinkedIn profile.
The startup’s bot problem is indicative of a larger concern in the fast-growing field of generative AI: AI systems increasingly talk and sound like real humans, and the ethical lines around how transparent these systems should be have blurred. While Bland AI’s bot explicitly claimed to be human in our tests, other popular chatbots sometimes obscure their AI status or simply sound eerily human. Some researchers worry that this exposes end users — the people who actually interact with the product — to potential manipulation.
“My view is that it’s completely unethical for an AI chatbot to lie to you and say it’s human when it’s not,” says Jen Caltrider, director of the Mozilla Foundation’s Privacy Not Included research center. “That’s a no-brainer, because people are more likely to relax around a real human being.”
Michael Burke, Bland AI’s chief growth officer, told WIRED that the company’s services are geared toward enterprise customers, who will use Bland AI’s voice bots in controlled environments for specific tasks, not emotional connections. He also says that customers are rate-limited to prevent them from sending spam calls, and that Bland AI regularly runs keyword searches and performs audits of its internal systems to detect anomalous behavior.
“This is the advantage of being focused on companies. We know exactly what our customers are doing,” says Burke. “You can use Bland and get two dollars of free credits and experiment a little, but ultimately you can’t do anything at scale without going through our platform, and we make sure nothing unethical happens.”