The OpenAI whistleblower who died was being considered a witness against the company
Suchir Balaji, a former OpenAI engineer and whistleblower who helped train the artificial intelligence systems behind ChatGPT and later said he believed those practices violated copyright law, has died, according to his parents and San Francisco officials. He was 26 years old.

Balaji worked at OpenAI for almost four years before resigning in August. He had been well regarded by his colleagues at the San Francisco company, where a co-founder this week called him one of OpenAI’s strongest contributors and said he was essential in developing some of its products.

“We are devastated to learn this incredibly sad news and our hearts go out to Suchir’s loved ones during this difficult time,” a statement from OpenAI said.

Balaji was found dead in his San Francisco apartment on November 26 in what police said “appeared to be a suicide,” with no evidence of a crime found during the initial investigation. The city’s chief medical examiner’s office confirmed the manner of death as suicide.

His parents, Poornima Ramarao and Balaji Ramamurthy, said they are still searching for answers and described their son as a “happy, smart and brave young man” who loved to walk and had recently returned from a trip with friends.

Balaji grew up in the San Francisco Bay Area and first came to the fledgling AI research lab for a summer internship in 2018 while studying computer science at the University of California, Berkeley. He returned a few years later to work at OpenAI, where one of his first projects, called WebGPT, helped pave the way for ChatGPT.

“Suchir’s contributions to this project were essential and it would not have been successful without him,” OpenAI co-founder John Schulman said in a social media post in Balaji’s memory. Schulman, who recruited Balaji to his team, said that what had made him such an exceptional engineer and scientist was his attention to detail and his ability to notice subtle or logical errors.

“He had a knack for finding simple solutions and writing elegant code that worked,” Schulman wrote. “He thought about the details of things carefully and rigorously.”

Balaji later went on to organize the huge data sets of online writings and other media used to train GPT-4, the fourth generation of OpenAI’s flagship large language model and a basis for the company’s famous chatbot. It was that work that ultimately made Balaji question the technology he helped build, especially after newspapers, novelists and others began suing OpenAI and other AI companies for copyright infringement.

He first expressed his concerns to the New York Times, which reported them in a profile of Balaji in October.

He later told the Associated Press that he would “try to testify” in the most serious cases of copyright infringement, calling a lawsuit filed by the New York Times last year the “most serious.” Lawyers for the Times named him in a Nov. 18 court filing as someone who may have “unique and relevant documents” supporting allegations of willful copyright infringement by OpenAI.

His records were also sought by lawyers in a separate case brought by book authors, including comedian Sarah Silverman, according to a court filing.

“It doesn’t seem right to train yourself on people’s data and then compete with them in the market,” Balaji told the AP in late October. “I don’t think you should be able to do that. I don’t think you can do that legally.”

He told the AP that he had gradually become more disillusioned with OpenAI, especially after internal turmoil that led its board to fire and then rehire CEO Sam Altman last year. Balaji said he was very concerned about how the company’s commercial products were being deployed, including their propensity to produce false information, known as hallucinations.

But of the “set of issues” that concerned him, he said he focused on copyright as the one “it was really possible to do something about.”

He acknowledged that it was an unpopular opinion within the AI research community, which is accustomed to mining data from the Internet, but said “they will have to change and it is a matter of time.”

He had not been deposed and it is unclear to what extent his revelations will be admitted as evidence in any legal case after his death. He also published an entry on his personal blog with his opinions on the subject.

Schulman, who resigned from OpenAI in August, said he and Balaji coincidentally left on the same day and celebrated with colleagues that night with dinner and drinks at a San Francisco bar. Another of Balaji’s mentors, co-founder and chief scientist Ilya Sutskever, had left OpenAI several months earlier, which Balaji saw as another impetus to leave.

Schulman said Balaji had told him earlier this year of his plans to leave OpenAI and that Balaji did not believe that better-than-human AI known as artificial general intelligence “was around the corner, as the rest of the company seemed to believe.” The young engineer expressed interest in pursuing a doctorate and exploring “some more out-of-the-box ideas about how to develop intelligence,” Schulman said.

Balaji’s family said a memorial is being planned for later this month at the Indian Community Center in Milpitas, California, not far from his hometown of Cupertino.

In the US, you can call or text the National Suicide Prevention Lifeline at 988, chat at 988lifeline.org, or text HOME to 741741 to connect with a crisis counselor. In the United Kingdom and Ireland, Samaritans can be contacted by calling freephone 116 123 or emailing jo@samaritans.org or jo@samaritans.ie. In Australia, the crisis support service Lifeline is 13 11 14. Other international helplines can be found at befrienders.org.

The Associated Press and OpenAI have a license and technology agreement that allows OpenAI to access some of AP’s text files.
