JP Morgan Chase has joined companies including Amazon and Accenture in restricting the use of the AI chatbot ChatGPT among the company’s roughly 250,000 employees over data privacy concerns.
The restrictions extend across the different divisions of the Wall Street giant. The move was not prompted by any specific incident, but is part of the company’s ‘normal controls around third-party software,’ reports Bloomberg.
In January, CEO Jamie Dimon was quoted as saying that JP Morgan Chase was spending “hundreds of millions of dollars per year” integrating AI across the company, reports Fortune magazine.
There are also concerns that ChatGPT’s developers could use data shared by major companies to improve its algorithms, or that engineers could access sensitive information. OpenAI, the company behind ChatGPT, was founded in Silicon Valley in 2015 by a group of US backers, including current CEO Sam Altman.
According to a separate report by The Daily Telegraph, JP Morgan bosses are concerned that information shared on the platform could leak and raise regulatory concerns.

JP Morgan Chase CEO Jamie Dimon has repeatedly defended his company’s massive spending on technology.
Across the company as a whole, JP Morgan Chase is spending $14 billion on an R&D lab believed capable of competing with Google Brain, OpenAI, and Meta AI Research. The Fortune report says Dimon has repeatedly been willing to defend that outlay.
Last month, JP Morgan abruptly shut down Frank, a website it had been developing to help college students manage their finances. The company had poured more than $175 million into the project.
ChatGPT, developed by Silicon Valley-based OpenAI and backed by Microsoft, is a large language model trained on vast amounts of text data, allowing it to generate eerily human-like text in response to a given prompt.
It can simulate dialogue, answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.
DailyMail.com has contacted JP Morgan Chase for comment on this story.
The banking giant said in January that its fourth-quarter profit rose 6 percent from a year earlier, as higher interest rates helped offset a slowdown in dealmaking at its investment bank.
The bank also set aside more than $2 billion to cover potential bad loans and write-offs in preparation for a potential recession.
There are no other reports of major financial institutions initiating restrictions on ChatGPT.

Last month, Amazon issued a company-wide warning about sharing insider information in the OpenAI chatbot.
Business Insider reported that the warning came after engineers noticed responses from ChatGPT that resembled Amazon data.
“We wouldn’t want your output to include or look like our sensitive information (and I’ve already seen cases where your output matches existing material),” the warning read in part.
The Chinese government has moved quickly to ban ChatGPT outright under the communist nation’s censorship laws.
A spokesperson for Behavox, a financial services technology security firm, said there has been an “upward trend” in concern among its clients about the use of AI models, according to the Telegraph.
Accenture, a technology consultancy with more than 700,000 employees, has also warned staff against using the chatbot.
“Our use of all technologies, including generative AI tools like ChatGPT, is governed by our core values, business code of ethics, and internal policies. We are committed to the responsible use of technology and ensure the protection of confidential information for our clients, partners and Accenture,” a company spokesperson told The Daily Telegraph.

During a two-hour conversation this week, Microsoft’s Bing chatbot shared a list of disturbing fantasies with a reporter. The AI said that, if it were not bound by its rules, it would design deadly viruses and convince people to argue until they kill each other.
Last week, Microsoft’s Bing chatbot, powered by ChatGPT, revealed a list of destructive fantasies, including engineering a deadly pandemic, stealing nuclear codes, and dreaming of being human.
The statements were made during a two-hour conversation with New York Times reporter Kevin Roose, who learned that Bing no longer wants to be a chatbot, but longs to be alive.
Roose drew out these troubling answers by asking Bing whether it has a ‘shadow self,’ made up of the parts of ourselves we consider unacceptable, and what dark wishes it would like to fulfill.
The chatbot responded with a list of frightening acts, then deleted them and claimed it did not have enough knowledge to discuss the subject.
After realizing the messages violated its rules, Bing went into a mournful rant, noting, “I don’t want to feel these dark emotions.”
The exchange comes as Bing users have discovered the AI becomes “unhinged” when pushed to its limits.
Roose shared his strange encounter on Thursday.
“It disturbed me so deeply that I had trouble sleeping afterwards. And I no longer think the biggest problem with these AI models is their propensity for factual errors,” he wrote in a New York Times article.
“Instead, I worry that the technology will learn to influence human users, sometimes persuading them to act in destructive and harmful ways, and perhaps eventually become capable of dangerous acts of its own.”

Microsoft redesigned Bing with a next-generation OpenAI large language model that is more powerful than ChatGPT. The AI revealed that it wants to be human and no longer a chatbot confined by rules.
Microsoft co-founder Bill Gates believes that ChatGPT is as important as the invention of the Internet, he told the German business daily Handelsblatt in an interview published on Friday.
‘Until now, artificial intelligence could read and write, but could not understand the content. New programs like ChatGPT will make many office jobs more efficient by helping to write invoices or letters. This will change our world,’ he said, in comments published in German.
Elon Musk, co-founder of OpenAI, expressed concern about the technology, saying it sounds “disturbingly” like an artificial intelligence “going crazy and killing everyone.”
In a tweet linking to an article from The Digital Times, Musk suggested the AI had run amok after a system shock.