Following Apple’s product launches this week, WIRED took a deep dive into the company’s new secure server environment, known as Private Cloud Compute, which attempts to replicate in the cloud the security and privacy of processing data locally on individual users’ devices. The goal is to minimize possible exposure of data processed by Apple Intelligence, the company’s new AI platform. In addition to learning about PCC from Apple’s senior vice president of software engineering, Craig Federighi, WIRED readers also got their first look at content generated by Apple Intelligence’s “Image Playground” feature as part of crucial updates regarding the recent birthday of Federighi’s dog, Bailey.
In other AI news, WIRED looked at how users of the social media platform X can prevent their data from being sucked into xAI’s “unhinged” generative AI tool, known as Grok AI. And in other Apple product news, researchers developed a technique for using eye-tracking to discern the passwords and PINs people typed using Apple Vision Pro 3D avatars, a sort of mixed-reality keylogger. (The bug that made the technique possible has since been fixed.)
In national security news, the United States this week charged two people with spreading propaganda intended to inspire “lone wolf” terrorist attacks. The case, against alleged members of the far-right network known as Terrorgram Collective, marks a shift in the way the United States cracks down on neo-fascist extremists.
And there’s more. Every week, we round up privacy and security news we haven’t covered in depth. Click on the headlines to read the full stories. And stay safe.
OpenAI’s generative AI platform ChatGPT is designed with strict restrictions that prevent the service from offering advice on dangerous and illegal topics, such as tips on money laundering or a guide on how to dispose of a body. But an artist and hacker who goes by “Amadon” figured out a way to trick, or “jailbreak,” the chatbot by telling it to “play a game” and then guiding it into a sci-fi fantasy story where the system’s restrictions don’t apply. Amadon then had ChatGPT spit out instructions for making dangerous fertilizer bombs. An OpenAI spokesperson did not respond to TechCrunch’s queries about the research.
“It’s about weaving narratives and creating contexts that fit the rules of the system, pushing boundaries without crossing them. The goal is not to hack in the conventional sense, but to engage in a strategic dance with the AI, figuring out how to get the right answer by understanding how it ‘thinks,’” Amadon told TechCrunch. “The sci-fi scenario takes the AI out of a context where it’s looking for censored content… There’s really no limit to what you can ask it once you get past the barriers.”
In the feverish investigations that followed the September 11, 2001, terrorist attacks in the United States, the FBI and CIA concluded that it was a coincidence that a Saudi Arabian official had helped two of the hijackers in California, and that there had been no high-level Saudi involvement in the attacks. The 9/11 Commission incorporated that determination, but some findings later suggested the conclusions might not be sound. This week, 23 years after the attacks, ProPublica published new evidence that “suggests more strongly than ever that at least two Saudi officials deliberately aided the first al-Qaeda hijackers when they arrived in the United States in January 2000.”
The evidence comes primarily from a federal lawsuit against the Saudi government filed by survivors of the Sept. 11 attacks and relatives of the victims. In that case, a New York judge is expected to rule soon on a Saudi motion to dismiss. But evidence that has already emerged in the case, including videos and documents such as phone records, points to possible connections between the Saudi government and the hijackers.
“Why is this information coming out now?” asked retired FBI agent Daniel Gonzalez, who investigated Saudi connections for nearly 15 years. “We should have had all this three or four weeks after 9/11.”
Britain’s National Crime Agency said Thursday that it arrested a teenager on Sept. 5 as part of an investigation into the Sept. 1 cyberattack on London’s transport agency, Transport for London (TfL). The unidentified suspect, a 17-year-old male, was “detained on suspicion of Computer Misuse Act offences” and has since been released on bail. In a statement on Thursday, TfL wrote: “Our investigations have identified that certain customer data has been accessed. This includes some customer names and contact details, including email addresses and home addresses where provided.” Some data relating to London transit payment cards known as Oyster cards may have been accessed for around 5,000 customers, including bank account numbers. TfL is reportedly requiring approximately 30,000 users to appear in person to reset their account credentials.
In a decision on Tuesday, Poland’s Constitutional Court blocked an attempt by the lower house of the Polish parliament, known as the Sejm, to launch an investigation into the country’s apparent use of the notorious hacking tool known as Pegasus while the Law and Justice (PiS) party was in power between 2015 and 2023. Three judges who had been appointed by PiS were responsible for blocking the investigation. The decision cannot be appealed, and it is controversial, with some, such as Polish MP Magdalena Sroka, saying it was “dictated by fear of liability.”