Expert reflection Secondary Research unit 3

Further correspondence | Experts

As I share my intervention outcomes, my collaboration with Marta Abba led to an introduction to Italian AI artist Francesco D’Isa, whose work explores AI data, errors and kitsch. Below is an excerpt from my response to his email on the nature of misinformation and its association with vested interests and power:

As I discuss some of my findings on past images and AI, I’d like to put forward some examples I found on this subject, to build on the discussion:

Radio: 

The infamous 1938 Halloween special, an adaptation of the book ‘The War of the Worlds’ performed by Orson Welles, created a mass panic in America. It was a radio drama, but the public took the fictional broadcast to be an actual news bulletin; listeners had not expected radio, the main source of information at the time, to be airing a fictional show. Today, it is unlikely anyone would take it so seriously. Despite many similar shows being produced and broadcast since, e.g. Dragons: A Fantasy Made Real (2004) on the Discovery network and Doomsday 2012 (2007) on the History channel, none has caused a comparable reaction or panic.

Article documenting the 1938 incident:

https://www.smithsonianmag.com/history/infamous-war-worlds-radio-broadcast-was-magnificent-fluke-180955180/

VFX and CGI:

Visual effects in films have seen a steady increase in both usage and technical expertise, but so has the discernment of the audience. Visual effects considered exceptional a few years ago now look dated or clearly unbelievable. This constant exposure to VFX has produced a more discerning audience that readily distinguishes between good and bad visual effects.

On the flip side, we have CGI, or computer-generated imagery. At one point, CGI struggled to generate fully believable worlds, and one term described this challenge for a long time: the uncanny valley. As recently as 2019 the debate resurfaced, with The Lion King remake garnering much criticism for its photorealistic animals juxtaposed with human speech and mouth movements. But there are examples of pushing past this, such as the protagonist of Alita: Battle Angel and Gollum from Peter Jackson’s The Lord of the Rings, fully CGI characters created through motion capture to great acclaim.

Photoshop:

I found many articles and disclaimers dating back to 2011 that take a tone towards photoshopped images very similar to the one used for AI images today. Allow me to attach two such pieces below: one from The Guardian on their policy, and the second a student project (by Stephanie Coffaney) at California Polytechnic State University. This can be taken as evidence that this was a relevant and serious discussion in the late 2000s and early 2010s.

https://www.theguardian.com/commentisfree/2011/sep/04/picture-manipulation-news-imagery-photoshop

https://core.ac.uk/download/pdf/19153916.pdf

Specificity and Historical images:

Currently, AI struggles to generate believable outputs for specific individuals. My work on recreating artists’ past memories (as a form of curating identity) points to this shortcoming. Outside of famous celebrities and world leaders, it is very hard for AI to produce a particular person. The training data is also very limited for spaces and concepts that are nuanced or regional to the individual. What is easy for AI to produce are generic images of well-documented concepts; the details are what it really struggles with.

Despite this, generative AI is producing photorealistic images of the past, and this is made easier by the nature of old photographs. Being black and white, blurred in areas and carrying damage accumulated over time can make them incredibly difficult for a lay viewer to judge. Here, what the viewer expects of such images works in favour of AI’s limitations. It would be much harder for AI to generate a believable image of today’s era, in colour and with realistic detail; but asked to generate a picture in the old, damaged style of a past time, the task plays to the strength of AI’s randomisation.

The problem with historical images is that they can also be difficult to fact-check. Many stories and their related images are lost to time, or buried so deep in the archives that resurrecting them would be a difficult and time-consuming task. Many images have never been published at all, put away in boxes and corners, yet to be discovered, and authenticating such images would be hard if only a digital copy is available. This leaves the authentication of historical images in a grey area: some AI-generated images may be falsely flagged as real due to close similarities with other archived images, while some genuine images may be flagged as fake if there is no other evidence to corroborate their authenticity.

Reflections of digital colourist Marina Amaral on AI images:

https://marinaamaral.substack.com/p/ai-is-creating-fake-historical-photos
