The mother of the AI whistleblower found dead in an apparent suicide just weeks after speaking out revealed that her son “felt that AI is a harm to humanity.”
Suchir Balaji, 26, was found dead in his San Francisco home on November 26, three months after accusing his former employer OpenAI of violating copyright laws in the development of ChatGPT.
Now, Balaji’s mother is demanding that police reopen the investigation into his death, saying it “does not seem like a normal situation.”
“We want to leave the issue open,” his mother, Poornima Ramarao, told Business Insider in an interview about the troubled engineer’s final months.
“He felt that AI is harmful to humanity,” Ramarao said.
The young technology genius joined OpenAI believing in its potential to benefit society, particularly attracted by its open source philosophy.
But his mother revealed that his perspective changed dramatically as the company became more commercially focused following the launch of ChatGPT.
She described the horrible moment she realized her son was dead when she saw a stretcher arrive at his San Francisco apartment.

“I was waiting to see medical help, nurses or someone get out of the van,” she said. “But a stretcher came. A simple stretcher. I ran and asked the person. He said, ‘We have a body in that apartment.’”
His death came just months after he resigned from OpenAI over ethical concerns and weeks after being named in the New York Times’ copyright lawsuit against the company.
In August, he left OpenAI because he “no longer wanted to contribute to technologies that he believed would bring more harm than good to society,” The Times reported.
“If you believe what I believe, you just have to leave,” Balaji had told The Times in one of his last interviews.
Police initially ruled the death a suicide and told Ramarao that CCTV footage showed Balaji had been alone.
However, his parents arranged a private autopsy, completed in early December, which they say yielded worrying results.
“We want to leave the issue open,” Ramarao said. “It doesn’t seem like a normal situation.”
The family is working with an attorney to urge San Francisco police to reopen the case for a “proper investigation.”
Over the past two years, companies like OpenAI have been sued by several individuals and organizations over the use of their copyrighted material.
Balaji’s role and knowledge were considered “crucial” in the legal proceedings against the company.
The New York Times filed its own lawsuit against OpenAI and its main partner, Microsoft, claiming they had used millions of its published articles to train their artificial intelligence, which as a result began competing with the outlet.
On Nov. 18, the outlet filed a letter in federal court naming Balaji as a person with “unique and relevant documents” that would be used in its litigation against OpenAI.
The lawsuit said: “Microsoft and OpenAI simply take the work product of reporters, journalists, editorial writers, editors and others who contribute to the work of local newspapers, all without regard to the efforts, much less the legal rights, of those who create and publish the news that local communities depend on.”
While other researchers have warned about possible future risks from the technology, Balaji told the Times that he believes the risk is much more “immediate” than feared.
“I thought AI could be used to solve intractable problems, like curing diseases and stopping aging,” he said.
“I thought we could invent some kind of scientist who could help solve them.”
Balaji said he believed chatbots like ChatGPT were destroying the commercial viability of the individuals, companies and internet services that created the digital data used to train such systems.
“This is not a sustainable model for the Internet ecosystem as a whole,” he said.
While OpenAI, Microsoft and other companies have claimed that their use of internet data to train the technology is considered “fair use,” Balaji did not believe the criteria had been met.