In this article:
How has AI far-right music spread across the world?
In addition to being savvy producers of AI-generated music, those on the far-right have been highly tactical in how they promote it – from exploiting social media platforms to using AI itself, manipulating algorithms and deploying bots.
Today, the internet allows far-right music to be spread and shared worldwide. Imogen Richards, an expert in criminology and researcher for AVERT, explains that it is common for Australian Neo-Nazi activists to engage in international far-right campaigns and with Neo-Nazi-affiliated artists, especially from Europe and the US.
“They’re really platformed – this is one of the ways in which the exposure to this type of material is enhanced,” Richards says. An example of the far-right’s worldwide proliferation is Mr Bond, an Austrian Neo-Nazi rapper known for his parodies of hits that celebrate Hitlerite ideas and call for “violence against minoritised groups,” she says.
“His music became particularly famous when it was set as a soundtrack to a livestreamed attack on a synagogue in Germany in October 2019 … [and] referred to [the far-right extremist] Australian who perpetrated the mass murder of 51 Muslims at Christchurch in March 2019 as a saint. He was [found] in several police investigations to have explicitly incited followers to acts of political violence,” Richards explains.
She further points to how social media platforms aid in this ‘trans-nationalisation’, or worldwide spread, of far-right music, noting a trend of “anything generated more easily digitally, spreading on sites like Tumblr, Reddit, YouTube, BitChute”.
Richards adds, “There was also a famous leak of a whole lot of white nationalist associated chat boards or discussion boards on [messaging sites like] Discord.
“This sort of proliferation of the different types of platforms means that we are likely to see an uptick in that – the ways in which they’re marketed and promoted virally through the algorithmic functioning of sites like Twitter/X”.
Fellow AVERT researcher, Helen Young, who specialises in examining far-right popular culture and white supremacy, agrees. “The far-right generally are very good at making the most of digital communication. So, they were early adopters of the internet, early adopters of pretty much every form of social media, email lists, servers, Twitter, Facebook, and all of those things,” says Young.
Another example is Telegram, where “groups like the National Socialist Network engage a lot with Mr Bond,” she adds. Posts on mainstream platforms will often link white supremacist audiences to more obscure sites hosted on both the surface and dark webs. For example, when Elon Musk took over X, he replatformed banned accounts such as that of Andrew Anglin, who linked to dark web sites such as the Daily Stormer (a violent Neo-Nazi website filled with extreme and graphic content, which Anglin founded and edits).
In the AI context, these links have included ones directing people to Gab AI, a service of the far-right social media platform Gab, which is openly available on the surface web.
According to Richards, “The AI that the Gab administrators have produced also has a facility for creating music… In Gab AI, you can produce vaporwave, which is basically [the far-right music genre] fashwave in this context, and via that AI tool you can also generate a rap song.”
Another AI music distribution trend that Richards has noticed is the “merging together of … different types of popular cultural trends and counter-cultural trends,” which she terms bricolage.
An example is the rise of video game media designed for far-right indoctrination and recruitment, with music integrated into them. A popular trend in this space is ‘Let’s Play’ videos, in which a player records themselves playing.
“It may be of a far-right game, or it may be a mainstream game, but the far-right producers will often overlay their gameplay … with commentary that is racist, reactionary and trolling. The nature of their engagement with their audience or co-players will reflect that as well.
“The increased use of, for example, AI-generated music in video games or particularly in ‘Let’s Play’ videos … helps to amplify that [music] and the platforms with which these types of media, ideologies and people are associated get exposure.
“We’ve now got videos on Odysee and Rumble, including fashwave videos, so I would expect that those same alt-tech platforms are probably likely to remain relevant,” Richards continues.
As such, she concludes that we are likely to see not just growth in discrete types of media, but combinations of different media forms, as a strategy to proliferate far-right music and appeal to wider audiences.

Liam Gillespie, an expert in criminology who has researched the far-right’s use of sound, adds that the proliferation also takes place in the physical world: “Artificial intelligence has enabled far-right and Neo-Nazi actors to rapidly produce and disseminate content… Where it was previously difficult to play a song ‘at’ someone, it is now much easier [to] target others via short memes/sound clips via online social media, or even in person, such as during rallies and marches, using portable announcement systems and other technologies.”
Using AI itself as an amplification tool
Another way in which those on the far-right have spread their ideology and hate music is through AI itself, including via bots. According to Richards, a significant example of bot use could be seen in the disinformation and far-right riots that spread in reaction to the Southport stabbings – falsely blaming Muslim immigrants for the attack.
“[There was] associated AI-generated music and AI-generated images… I think Suno may have been used to generate the title called Southport Saga,” Richards says. As reported by The Guardian, this song featured an AI-generated female voice singing lyrics like “hunt them down somehow”.
“That, of course, is important paired with the sort of AI assisted mass dissemination of disinformation, and of course the orchestrated coordinated bot campaign, which was promoting that material … using these tools to basically run large influence operations.”
Richards adds that it’s important to consider the wider context of the “tabloid or legacy news media” and the “anti-immigration, exclusionary rhetoric within the political establishment… which was providing the ideological context [in] which this AI-generated material and bot-driven misinformation and disinformation can thrive”.
Gillespie agrees that, besides generating the music itself, AI has also helped facilitate the dissemination and amplification of this content on social media platforms, including “4chan, 8chan, Discord, Telegram, Gab, Truth Social, X/Twitter, Reddit”, among many others.
As with social media algorithms, Young adds that the far-right is “good at using sales algorithms of big online platforms”, manipulating them to extend the reach of their content. Even when users view content unrelated to far-right materials, white supremacists have been able to direct the “algorithms [so they] can take somebody from reading something that’s … a bit fringe to the white power movement,” she says.
“The recommendations algorithms can then lead readers or audiences down a pipeline towards quite extreme stuff within three or four clicks … That’s true, whether it’s a sales platform or whether it’s something like YouTube, which certainly has a lot of problems with the white power movement.”
Young continues, “Even if they’re not selling music through iTunes and it’s just on YouTube, the recommendations algorithms there would work in functionally the same way – ‘Oh, you’ve watched one white power music video, here are 20 more’ – and so it creates that kind of echo chamber effect very quickly. It’s not just an echo chamber resonating the same things, it’s a bit more like a black hole pulling people in and presenting this stuff as normal and, ‘Hey look, it’s everywhere. Look, it’s all over YouTube’.”
Such recommendation algorithms are also present on Spotify – curating playlists and recommendations that promote certain songs, including far-right-related material.
The future of AI-generated far-right songs
With the far-right having so successfully created and distributed potent AI hate music, the need for solutions to prevent such music from proliferating is growing.
Young says, “Far-right ideology in general is certainly becoming more widespread and visible in mainstream culture and society probably in the last five to 10 years. Partly, it’s kind of easy to point to Donald Trump and his first presidential campaign and all the things he’s done since then.”
However, “that’s more a symptom of a wider movement … taking up a set of beliefs that blame other people for problems. They blame people who they understand to be not like them for those problems, whether that’s Muslim people, whether that’s queer people, whether that’s Jewish people. All of those hate-filled beliefs and positions and the actions that go with them, they take up very deep social prejudices that have been around for a long time,” Young continues.
Gillespie agrees. “Undoubtedly, we will see an increased focus on racialisation. While this is always already part of reactionary thinking, it will become more prominent,” he says. Indeed, Trump has already supported the deregulation of AI, including AI-generated music.
For prevention, Young suggests, “There’s a role that tech companies potentially have in knowing how violent extremists are using their sales algorithms and their platforms. They’re not obliged to carry the political messages of people who were trying to spread hatred.”
In the same vein, AI also has the potential to help remove racist, far-right music from social media platforms, with various detection tools being developed.
Richards says that often the architecture of social media platforms aids far-right activists, “as they’re open to the regulatory discretion of the people who own them”.
“Those people are not accountable to any kind of public oversight, they’re only accountable to shareholders,” she says.
“For example, with Twitter/X, there is deliberate deregulation and consequent amplification of certain types of violent material.”
This is because mainstream social media monetises material that is likely to go viral, which often includes speech that incites violence.
“Particularly, it’s edgy, psychologically and emotionally addictive content for susceptible people, that can actually be what gets the most views and engagements,” Richards says.
She adds that preventing far-right music in this way could prove difficult because of what she calls the “peripatetic nature of online content” – materials constantly shift from one place to another.
“Material would jump to places where there are less stringent regulations around hate speech,” Richards says. “[This means] the US largely, because of its [First] Amendment protections for free speech, which means that more types of discriminatory material targeting minoritised groups and communities have been allowed to proliferate there.”
She further notes that various codes and conventions, such as the Budapest Convention on Cybercrime, particularly within the European context, have tried to hold social media platforms accountable. “Because of the global nature of the information, it’s quite difficult, as the sites and the service can move from one place to another,” explains Richards.
As such, she suggests, “There should be large-scale, public information campaigns. Rather than just taking the approach of cracking down on the platforms and the administrators, it’s important to take a holistic approach as well.
“[We need to] educate people about the ways in which these platforms work, what they do and the ways in which they raise revenue through promoting this psychologically addictive material,” Richards continues.
She adds that, while AI is “a global vehicle for spreading mis- and disinformation and conspiracy theories”, the public should also learn that it cannot be separated from the historical ways that the far-right has operated.
“There is certainly a role for education and for people educating themselves, so that they actually recognise what they’re coming across when they find it and know how to counter those arguments,” agrees Young.
She further suggests, “Speaking out and saying that something is actually really completely unacceptable and that we’re not going to be part of it, we’re going to refuse to accept that, is one form of action, particularly in music and the arts”. As an example, she highlights how a band at Knotfest refused to play because some Neo-Nazi-affiliated bands at the festival were spreading white power messages and hate towards minorities.
With technology continuing to advance and far-right ideas becoming increasingly popular, AI-generated hate music seems likely only to grow – making education around the subject all the more necessary.
This is the second and final part of this article. Read part one.