Many artists fear that generative artificial intelligence will take away their passions and livelihoods. In response, a team of computer scientists from the University of Chicago created a program to protect them. Glaze “masks” images so that AI tools “incorrectly learn the unique characteristics that define an artist’s style, thereby thwarting subsequent efforts to generate artificial plagiarism.”
Artificial intelligence is here to stay, but we must make sure it benefits humanity. In particular, artists and other creatives need to keep their jobs as AI adoption grows. Fortunately, the technology itself holds the key to making that possible: soon, Glaze and other research projects could offer AI protection to all artists.
This article explains how the University of Chicago’s AI artwork protection works. Then, I’ll discuss a similar Google tool called SynthID.
How does the Glaze anti-AI program work?
On February 14, 2023, UChicago News reported on the Glaze project. The report says Neubauer Professors of Computer Science Ben Zhao and Heather Zheng created the program to stand up for artists against generative art platforms.
“Artists really need this tool; the emotional and financial impact of this technology on them is really quite real,” Zhao said. “We spoke to teachers who were seeing students drop out of classes because they thought there was no hope for the industry, and to professional artists who were seeing their style ripped off left and right.”
In 2020, their SAND (Security, Algorithms, Networking, and Data) Lab developed similar software called Fawkes. It cloaked personal photos so facial recognition models couldn’t identify the people in them.
Fawkes became popular, receiving coverage from The New York Times and other international media outlets. As a result, artists reached out to the SAND Lab, hoping it could do the same against generative AI.
However, UChicago News said Fawkes was not enough to protect them. Fawkes slightly distorts facial features to deter recognition programs, but an artist’s style is defined by different characteristics.
For example, you might recognize a painting as an artist’s work from its color choices and brushstrokes. So the researchers built an AI tool to beat these platforms (a rough code sketch follows the list):
- The SAND Lab created style transfer algorithms similar to those behind generative AI art models.
- Then, the researchers integrated them into Glaze.
- An artist masks an image with this software.
- The program uses the style transfer algorithms to recreate the image in a different style, such as cubism or watercolor, without changing its content.
- Next, Glaze identifies which features changed from the original photo.
- It subtly distorts those features in the original before the artist shares it online.
- As a result, an AI model trained on the masked image learns the wrong style features, while the artwork looks virtually unchanged to human eyes.
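To make the idea concrete, here is a minimal sketch of that kind of “style cloaking,” assuming a PyTorch setup with a pretrained VGG16 standing in for the features an AI art model might learn. It is not the actual Glaze implementation; the model choice, perturbation budget, and step count are illustrative assumptions.

```python
# Illustrative sketch only: NOT the actual Glaze implementation.
# Idea: nudge an image's feature representation toward a style-transferred
# target while keeping the visible pixel change small.
import torch
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen feature extractor standing in for the "style" features a model might learn.
features = vgg16(weights=VGG16_Weights.DEFAULT).features[:16].to(device).eval()
for p in features.parameters():
    p.requires_grad_(False)

def cloak(original, style_target, budget=0.03, steps=200, lr=0.01):
    """Return a perturbed copy of `original` whose features resemble those of
    `style_target`, with per-pixel changes capped at `budget`."""
    delta = torch.zeros_like(original, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    target_feat = features(style_target).detach()
    for _ in range(steps):
        opt.zero_grad()
        perturbed = (original + delta).clamp(0, 1)
        loss = F.mse_loss(features(perturbed), target_feat)
        loss.backward()
        opt.step()
        # Keep the perturbation visually subtle.
        with torch.no_grad():
            delta.clamp_(-budget, budget)
    return (original + delta).detach().clamp(0, 1)

# Usage: `original` is the artist's image; `style_target` is the same image
# re-rendered in another style (e.g., by an off-the-shelf style-transfer model).
original = torch.rand(1, 3, 224, 224, device=device)      # placeholder image
style_target = torch.rand(1, 3, 224, 224, device=device)  # placeholder target
cloaked = cloak(original, style_target)
```

The design choice here mirrors the list above: the perturbation is optimized against a feature extractor rather than raw pixels, so the change that confuses a model stays nearly invisible to a person.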
How does Google SynthID work?
Several companies have developed ways to protect intellectual property. For example, Google announced SynthID, an invisible watermark for digital images.
Artists could watermark their existing works to ensure their signature remains despite AI adjustments. Additionally, the company claims the watermark is “detectable even when edited by common techniques such as cropping and applying filters.”
Pushmeet Kohli, head of research at DeepMind, told the BBC that the new system changes images so subtly “that for you and me, for a human, it doesn’t change.” He added: “You can change the color, you can change the contrast, you can even resize it (and DeepMind) will still be able to see that it’s AI generated.”
“With SynthID, users can add a watermark to their image, which is imperceptible to the human eye,” says a Google DeepMind demo video. Additionally, Google Cloud claims to be “the first cloud provider to offer a tool to responsibly create and confidently identify AI-generated images.”
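Google has not published how SynthID embeds or detects its watermark, so the sketch below only illustrates the general principle behind an invisible, edit-resistant watermark: hide a low-amplitude keyed pattern in an image’s frequency domain and later check for it by correlation. The key, strength, and DCT band used here are assumptions for demonstration, not SynthID’s actual parameters.

```python
# Conceptual sketch only: NOT how SynthID works internally.
# Embed a keyed pseudo-random pattern in mid-frequency DCT coefficients,
# then detect it by correlating with the same keyed pattern.
import numpy as np
from scipy.fft import dctn, idctn

def embed_watermark(image, key=42, strength=2.0):
    """Add a keyed pattern to a 2-D grayscale image (values in [0, 255])."""
    rng = np.random.default_rng(key)
    coeffs = dctn(image.astype(float), norm="ortho")
    pattern = rng.choice([-1.0, 1.0], size=coeffs.shape)
    mask = np.zeros_like(coeffs)
    mask[8:64, 8:64] = 1.0  # mid-frequency band only, to stay imperceptible
    watermarked = idctn(coeffs + strength * pattern * mask, norm="ortho")
    return np.clip(watermarked, 0, 255)

def detect_watermark(image, key=42):
    """Return a correlation score; clearly positive suggests the mark is present."""
    rng = np.random.default_rng(key)
    coeffs = dctn(image.astype(float), norm="ortho")
    pattern = rng.choice([-1.0, 1.0], size=coeffs.shape)
    return float((coeffs[8:64, 8:64] * pattern[8:64, 8:64]).mean())

img = np.random.default_rng(0).uniform(0, 255, (256, 256))  # placeholder image
marked = embed_watermark(img)
# The marked image scores well above the unmarked one.
print(detect_watermark(marked), detect_watermark(img))
```

Because the pattern is spread across many frequency coefficients, mild edits such as contrast changes or light filtering tend to weaken the score rather than erase it, which is the property the article describes.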
Google plans to expand SynthID to other AI models. Wider adoption will make the service more effective and, more importantly, help the company find bugs and improve the system faster.
Conclusion
University of Chicago professors Ben Zhao and Heather Zheng created an AI tool that could protect artists from generative AI. Glaze feeds AI models incorrect style information so they cannot convincingly mimic an artist’s work.
Google has also created a similar tool called SynthID, which watermarks content so that the artist’s signature survives common editing techniques. However, both projects are still under development.
You can learn more about Glaze anti-AI software by reading this arXiv research paper. Plus, discover more digital tips and trends at Inquirer Tech.