Recently, reading the two-part article by Leo Chau on ArtsHub, I was struck by the extent to which artificial intelligence (AI) tools, such as image generators, audio generators and ChatGPT-style systems, are being utilised by fascist and far-right movements globally. For the past two years, I have been undertaking the Master of Research program at the Institute for Sustainable Industries and Liveable Cities at Victoria University, in Melbourne.
My main research question investigates how AI tools influence human behaviour.
During my study, I found that synthetically generated images of human groups and individuals, particularly those outside the hetero-patriarchal, able-bodied realm, as well as those of non-European heritage, emerge full of stereotypes. In summary, women, people with disability, queer and non-white folk don’t fare well in generative image systems.
Alarmingly, I found that the more biased, prejudiced and stereotypical the image, the more my research participants responded with preconceptions of their own. The AI images influenced their viewers: the more bias the AI displayed towards a group of people, the more the participants were inclined to display those biases themselves.
Historically, images have had the power to influence human behaviour: they have ended conflicts, changed political opinions and united people across the globe. Advertisers know this well and have cleverly used it on our televisions, in our cinemas and, more recently, on our handheld devices.
When it comes to AI, most inquiries focus on the future. Where will AI take us? What will be the future of humanity alongside AI? These are some of the themes often investigated by academics and explored in pop culture.
I have taken a different path, instead considering history and historical facts. How could AI shape history or, worse, could AI reshape our past?
When Carl L Becker wrote, in 1955, that “History is a venerable branch of knowledge, and the writing of history is the art of long standing,” he could not have imagined how artificial intelligence would fit into this equation. But in one thing he was correct: history is the art of long standing.
But the question is: who will be long standing? Who will tell the history? As we increasingly rely on digital devices and services in all areas of human endeavour, what will happen if we leave history to AI machines?
How subscription services changed everything
There was a time when you paid for something, and it was yours. Simple. You walked into a store, handed over your money and left with something real and tangible. Something you could hold, stack and keep. Records, books, encyclopaedias, cassette tapes, CDs. VHS tapes lined up like trophies under the TV.
Now we don’t own anything. Music isn’t ours; it’s a playlist floating in some cloud. Movies don’t live on shelves; they stream from somewhere, ready to vanish if the payment bounces.
The earliest form of subscription goes back to the 19th century, when milk delivery services began charging a weekly fee. Subscription as we know it today, however, didn’t begin until 1997, when Netflix started shipping DVDs to its users.
What went away without anyone noticing was… ownership. We used to pay money and then the thing we purchased actually belonged to us.
Today, if a movie or song gets removed from all platforms there could literally be no official way to access it. It could just be erased from history.
An interesting case was in early 2024, during the Super Bowl, when Alicia Keys missed a note while performing ‘If I Ain’t Got You’.
By the next day, the official footage had apparently been fixed and, unless you had recorded it on your own hard drive, you will never be able to access the missed-note version of the performance. Thus, in this particular case, the official historical archive of the event is a lie: an intentional misrepresentation of what actually happened.
This may not seem a big deal, but what would happen if the records being changed were not merely entertainment? What if this were about history, politics, culture?
What happens when someone, somewhere, for some reason, chooses to decide what’s real and what isn’t? Who controls the truth? Who owns reality?
We are not far away from this post-realism existence. We’re already seeing cracks. A version of events here, a rewrite there. And, suddenly, what you remember, what you lived through, doesn’t match what the world says transpired.
What will happen when the public record starts to feel more real than your own memories? What will happen if the past is no longer fixed, if it can be shaped, edited, updated, even deleted?
Artificial intelligence erasure: the future of epistemology building and mass media records in the age of AI
As my research shows, if a synthetic image generated by AI can influence people, imagine what biased videos, songs, audio and misinformation could do.
‘AI erasure’ is a concept I coined during my research, developed through my observations and readings while working on my study. It highlights the dual risks posed by AI technologies in shaping and erasing human records.
What is, and will be, the impact of AI alterations on cultural memory, the historical preservation of facts and collective human knowledge? The solution is not as simple as democratising information. Wikipedia, which began as a democratic exercise in the digitisation of records, is now full of biases and misinformation. We need to digitise records, but with a diversity of sources, ethics, regulation and rigour.
Several concerns can be raised regarding AI erasure. The Anglophonisation of AI tools, for example.
The vast majority of large language models are biased towards English, the dominant language used online today. This means, effectively, that AI tools may produce results that do not reflect the diverse lived experiences of individuals from different linguistic and cultural backgrounds. This is a form of AI erasure I call ‘linguistic AI erasure’.*
There are other challenges; for example, the fact that AI enables virtually any individual to seamlessly alter, remove or manipulate images, voices, speeches, interviews and songs. Coupled with increasingly fractured and segmented societal structures, this capability has the potential to distort history, misrepresent identities, and silence marginalised voices or politically dissenting thinking.
Another problem is that AI systems rely heavily on digitised data. Knowledge that remains undigitised and outside any database can disappear: traditional recipes, plant-based wisdom, local geographical knowledge, oral storytelling, embodied understandings of pain, non-Western cures and traditional medicine, and weather patterns, as well as the lived experiences of Indigenous peoples, quilombolas, small religious sects, the disadvantaged and the voiceless, and disabled communities.
Another possibility is that such knowledge, stories and wisdom will become difficult to find online. Consequently, they won’t be able to influence newer generations until, eventually, such knowledge systems become extinct. AI erasure demonstrates why digital tools should be designed with care and respect, to uphold the values, history and culture, as well as the autonomy, of Indigenous and marginalised communities.
Finally, another form of AI erasure is speculative and could only be achieved if institutional, governmental or corporate-level systems functioned in cooperation. For this to happen, they would need to operate at an industrial scale, as in the Alicia Keys Super Bowl example.
This dimension highlights the risk of AI technologies being utilised by autocrats, unscrupulous elites, corporations and large businesses for the systematic alteration, omission or disappearance of records, narratives or voices within mass media archives.
AI erasure can include deliberate manipulation by authoritarian regimes to rewrite history or suppress voices. Such actors can also deliberately embed societal biases, lies and mistruths within datasets, or develop flawed algorithms that favour dominant narratives. We may also see automated tools whose editorial processes prioritise profit or ideology over truth and representation.
In this way, AI erasure becomes not just an unintended consequence of technological advancement, but a tool that can be weaponised to reshape historical and cultural memory.
This can lead to homogenised, sterilised representations of humanity that erase its complexity, richness, diversity and beauty.
In 2023, The New York Times reported on Stephanie Dinkins, an artist who creates images of Black women with AI. Debating why the word ‘slave’ could not be used by a generative tool, she posed a question similar to my concept of AI erasure: “What is this technology doing to history… You can see that someone is trying to correct for bias, yet at the same time that erases a piece of history.”
There is a real possibility that, as AI is integrated into societal frameworks, it will lead to significant changes in how history is recorded and understood.
There is a line by Katherine Anne Porter in Ship of Fools: “The past is never where you think you left it.” It has never made more sense than it does now.
We ought to raise urgent questions about what we choose to remember, who is represented in digital archives, and how AI may reshape our understanding of history and identity.
* To the best of my knowledge, the terms ‘AI erasure’ and ‘linguistic AI erasure’ were coined by me, Guido Oliveira Andrade de Melo.