Categories: Secondary Research unit 3

The Glaze project [University of Chicago]

The Glaze Project is a research effort at the University of Chicago, led by Shawn Shan under Prof. Ben Zhao, that has produced tools including Glaze, Nightshade, WebGlaze and others. These tools were developed with the specific intent of preventing digital art from being appropriated, without the artists' consent, as training data for generative AI models. They use adversarial perturbations: small changes to specific pixels of an input image that alter how machine learning models perceive the image and confuse any model trained on it. The tools are currently free for artists to use.
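
To get a feel for how adversarial perturbations work in general, below is a minimal sketch in Python using the classic fast gradient sign method (FGSM). This is not the Glaze Project's actual algorithm, which has not been released; it is the textbook technique this family of tools builds on, and it assumes PyTorch and torchvision are installed.

import torch
import torchvision.models as models

# A stand-in "victim" classifier; Glaze and Nightshade target the encoders
# of generative models instead, but the principle is the same.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm_perturb(image, label, epsilon=4 / 255):
    """Nudge each pixel by at most epsilon in the direction that most
    increases the model's loss, leaving the image visually unchanged."""
    image = image.clone().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()  # one signed-gradient step
    return perturbed.clamp(0, 1).detach()

# Usage: x is a batch of images in [0, 1], y an integer class label.
x = torch.rand(1, 3, 224, 224)
y = torch.tensor([207])
x_adv = fgsm_perturb(x, y)
print((x_adv - x).abs().max())  # per-pixel change never exceeds epsilon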

While the image is slightly altered, the changes are, as demonstrated by the research team, negligible to an average human viewer but vastly change how AI models perceive the image. The two most popular tools currently available are Glaze and Nightshade; each has been named to communicate its purpose.
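
To put a rough number on "negligible to an average human viewer", a common check is the peak signal-to-noise ratio (PSNR) between the original and perturbed image. The research team uses proper perceptual metrics, so the NumPy sketch below, with an assumed per-pixel budget of 4/255, is purely illustrative.

import numpy as np

def psnr(original, perturbed):
    """PSNR in decibels between two images in [0, 1]; values above
    roughly 40 dB are generally imperceptible to a human viewer."""
    mse = np.mean((original - perturbed) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(1.0 / mse)

rng = np.random.default_rng(0)
img = rng.random((224, 224, 3))
cloaked = np.clip(img + rng.uniform(-4 / 255, 4 / 255, img.shape), 0, 1)
print(f"{psnr(img, cloaked):.1f} dB")  # a tiny budget yields a high PSNR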

Glaze, named after the glaze that protects finished pottery, acts as something of a shield. It tricks some diffusion models into assuming that the style of the image is different from what it actually is; some models might recognise glazed images as noise or as rendered in an altered style. Since humans cannot see this difference, a model trained on glazed images learns to treat these altered styles as correct and produces increasingly chaotic outputs over time.
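
The published description of Glaze frames this as a feature-matching optimisation: find a small perturbation that pulls the image's features, as seen by a model's image encoder, toward a decoy style, while keeping the pixel changes bounded. The sketch below captures only that general idea; feature_extractor and decoy are stand-ins I have assumed, not the project's real components.

import torch

def style_cloak(image, decoy, feature_extractor, epsilon=0.05, steps=100, lr=0.01):
    """Return image + delta whose features approximate the decoy's,
    with each pixel changed by at most epsilon."""
    delta = torch.zeros_like(image, requires_grad=True)
    target = feature_extractor(decoy).detach()  # decoy-style feature target
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Pull the cloaked image's features toward the decoy style.
        loss = torch.nn.functional.mse_loss(feature_extractor(image + delta), target)
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)  # keep the change invisible to humans
    return (image + delta).clamp(0, 1).detach()

# Toy usage with a random conv net standing in for a diffusion model's encoder:
encoder = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.Flatten())
art = torch.rand(1, 3, 64, 64)
decoy = torch.rand(1, 3, 64, 64)  # imagine: the artwork restyled as cubism
cloaked = style_cloak(art, decoy, encoder)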

Nightshade, as the name suggests, is advertised as a form of poison for model training. It changes a model's perception so that an image containing concept A appears to contain a different concept B. Again, since the humans training the models cannot see these differences, the poisoned data corrupts training and causes the model to mix up and confuse different concepts. As demonstrated by the researchers, as more Nightshade-poisoned images are fed to a large model, its output may change the image of a dog into that of a cat, or that of a car into a kettle.
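
Nightshade can be sketched the same way: the poisoned image keeps its honest caption for concept A, but its pixels are optimised so an encoder's features resemble an anchor image of concept B. The function below mirrors the cloaking sketch above; every name and parameter is an illustrative assumption, not the project's actual code.

import torch

def poison(dog_image, cat_anchor, encoder, epsilon=0.05, steps=100, lr=0.01):
    """Build an (image, caption) training pair that looks like a dog and is
    captioned as one, but whose features resemble the cat anchor's."""
    delta = torch.zeros_like(dog_image, requires_grad=True)
    target = encoder(cat_anchor).detach()  # "cat" feature target
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        torch.nn.functional.mse_loss(encoder(dog_image + delta), target).backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)  # still looks like the original dog
    # A model trained on enough such pairs starts drawing cats for "dog".
    return (dog_image + delta).clamp(0, 1).detach(), "a photo of a dog"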

Since the development and popularisation of these tools, companies developing generative AI models have argued that they are a form of vandalism. The researchers of the Glaze Project counter that these images can only make it into training systems if artists have used the tools to protect their work and that work has then been used without consent.

For more details, and to use these tools yourself, please check out the official website:

https://glaze.cs.uchicago.edu/index.html

A useful explanation that helped me understand the subject better came from a Reddit post, linked below:

https://www.reddit.com/r/aiwars/s/gj6PPGlKbU
