Fascism and Ai

Fascism sucks

I would have guessed that was a pretty obvious, self-evident opinion to hold. Not so. Not anymore.


I normally don’t write about things like this online, and I have been thinking a lot about my place in the changing world we are living in today, because, you know, House music is my thing. It is music that is supposed to be danced to, either as a form of escapism or celebration, or both. And it is sad to me because I want the world to be better. And I also want the world to dance!


I’ve been watching the rise of fascism for the past decade or so, not because that’s when it started to rise. No. But because that’s when its rise became increasingly clear to anyone paying attention. Since then, I’ve been keeping an eye on its developments, while learning about its evolution in the decades prior.

I’ve seen it infiltrate various internet cultures and trends, mainly aimed at young men. I’ve seen it weaponise the internet as a tool to spread its tentacles worldwide. From video-game culture, to dating and self-help advice, to wellness… The list goes on.

Even on the musician forums I frequent you’ll find them, always under false pretenses, abiding by the site’s rules, knowing full well that they can exist comfortably under plausible deniability. They have become experts at wearing their “normal people” cloaks, because they know that as long as you don’t say loudly and clearly what you actually mean, it flies under the radar of the moderation teams. Nobody can accuse you of anything this way, you’re “just asking questions”, right? “Freedom of speech”, right? Fascists love abusing the rules without breaking them. At least until they get into power.


Speaking of things fascists love: Ai!

Ai (stupid name) is everywhere today. It felt like nobody would shut the fuck up about it for a while? And you would be forgiven for buying into the constant hype, with non-stop propaganda drilling into our brains.

First things first: artificial general intelligence (AGi). People like Sam Altman believe they’re working towards AGi, and that somehow, it’ll become aware and gain consciousness. Like Pinocchio, right? Except with a disgustingly large amount of stolen data, no body (wooden or otherwise), and it lies way more than the beloved character ever did.

I’m not going to go deep into that, but their lore is pretty interesting if you like learning about cults. Look up the rationalist subculture (not to be confused with rationalism, the philosophy), which came out of a website (lesswrong.com) where a bunch of kids who consumed way too much science fiction gathered. These people are today at the helm of the Ai world :facepalm: Here’s a short, funny episode that gives an overview of the cult. Here’s a long, also funny episode series that touches tangentially on their cult, but goes deeper on a sub-cult that branches off of it.

If you still believe lying chatbots are going to suddenly become aware and turn into a superintelligence… (if you have heard and believed the heads of Anthropic or of Open Ai, or any number of fear mongers in the media, you’ve heard the stories of Ai takeover etc.) I am sorry. I believe intelligence, agency and awareness can only exist in living organisms. I believe they are a result of evolution by natural selection. I don’t know which school of philosophical thought that fits in, but yeah. I don’t think a bunch of nerds are ever going to be able to create life. :shrugs:

For the rest of us, we’ve had access to the LLMs for a few years now. I personally don’t really get any use out of them. I have tried them; most of the time they have been completely useless to me, and only a few times have they been of any use, like finding out how to do a specific thing in a piece of software I am not well versed in. It’s hit or miss… I don’t think it’s worth the cost, though, and I would rather spend a bit longer, put in a little effort and google things myself instead. It’s way more reliable. Also, I see no point in generating images. I have tried it; it’s lame. That doesn’t mean there are no legitimate uses for LLMs, but those uses are so limited and, at the end of the day, not even worth the true cost. And let’s not forget they trained those models on stolen work.


A few years back, when they first came out with the chatbots, the song you heard everywhere was “they’re going to get so much better”. They have not. Not in any significant way. They used to fail at logic problems, or counting letters and basic shit like that, and they subsequently trained their models to be better at solving those specific problems. In other words, just bug fixes. But the underlying issue with the way LLMs function is that they are probabilistic in nature: every word of an answer is picked by rolling weighted dice, whether the result happens to be true or not. The “hallucinations” are not a bug that can be fixed, because hallucinating is how they work.
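If you want to see what I mean, here is a tiny toy sketch of that dice roll. The words and the probabilities are completely made up for illustration; they don’t come from any real model, and a real LLM rolls over a vocabulary of tens of thousands of tokens at every single step.

    import random

    # Toy next-token "model": every continuation is just a weighted dice roll.
    # Real LLMs do the same thing, only over ~100,000 possible tokens per step.
    # The words and numbers below are made up, not taken from any actual model.
    vocab_probs = {
        "Paris": 0.62,   # likely continuation
        "Lyon": 0.25,    # plausible continuation
        "Narnia": 0.13,  # confidently wrong continuation (a "hallucination")
    }

    def next_token(probs):
        tokens = list(probs)
        weights = list(probs.values())
        return random.choices(tokens, weights=weights, k=1)[0]

    prompt = "The capital of France is"
    for _ in range(5):
        # Same prompt, same "model", possibly different answers. By design.
        print(prompt, next_token(vocab_probs))

Run it a few times and you get different answers to the same prompt, including the confidently wrong one. Scale that dice roll up to every word of every answer and that’s the “intelligence” we are being sold.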

Hey, that’s another thing they lied about. It’s almost as if the whole thing is based on lies 😉

Ai and music

Years ago, when Machine Learning (ML) based models that could generate audio data hit the scene, experimental musicians were the first to jump on them, in what must have felt like “an exciting new frontier of technology in music”. Fully automated artist projects existed long before Suno was a thing. Today, anyone can prompt a song into existence. Much like with LLMs, though, I wonder: what is even the point?

Not long ago someone on a forum posted something that has been lingering in my mind since. They spoke about something called “attribution drift” in regards to a thing that had happened a while back (before Suno was a thing we all knew about), when it came to our attention that someone was taking the jams created by forum members, posting them on their socials and pretending they were their own creation. As the forum user explained it, attribution drift is a psychological phenomenon that describes how people come to believe they made something they did not make themselves. They start to truly feel like it was actually made by them. Fascinating! This was within a larger conversation about Ai generated music. Some users were arguing that if they trained their own models on their own music, that would somehow be more artistic than using a model trained by other people. Someone then asked something along the lines of: what is stopping anyone from feeding an algorithm the entire catalogues of their favourite artists and then releasing the output, claiming it as their own?

It makes you wonder if the prompt creators actually believe they’ve made the music!

Needless to say, I find the whole Ai music thing to be preposterous. What is the point of being a musician if not the act of creating the music yourself? Playing it or programming it, deciding what plays, when it plays and when it does not…


For the uninitiated, there are a few ways in which generative Ai (genAi) is used in the music creation process.

The first, and most disgusting, is something like what Suno or Udio does: you write a prompt and a full piece comes out, lyrics and all, much like entering a prompt into an LLM to write your school homework. To make matters worse, Suno’s models (and those of similar tools) have been illegally trained, stealing the work and talent of humans to create this product, and I am not even sure why it exists. Who wants that? People who generate music with it cannot even copyright what it spits out. There is just no point to any of it, right?

The second layer (no pun intended) is what tools like Layers by LANDR offer: you upload a song you are working on to their app, it analyses the song, and you can then select a portion of the timeline and specify what kind of element you would like generated for that section. Say you want a horn section for the chorus; a horn section that is supposed to fit your song comes out, and you can then drop that “layer” into your song. I have seen the demos on their site and on YouTube, and it does not sound good; the resulting layers clash with the song, it is a mess. Though kudos for being ethically trained. Still, it probably costs them a lot of money to generate a layer, and given the results are not good, you will probably have to spend a lot of tokens to get a single thing that is passable. Not worth it, in my opinion.

A third way it can be used is with a tool like ACE Studio, which synthesises vocals based on input from the musician (like playing keys on a piano connected to the computer): the program turns the note information into a voice, and the user enters their lyrics and is able to edit the performance. What results is a vocal take containing the melody and the lyrics the artist entered themselves, and the properties of the vocals, the melody and the rhythm can all be manually modified. Additionally, their models have been ethically trained; there’s no stolen labour. Of the three genAi methods discussed here, this is the one I see as potentially useful and least offensive. As of the latest version it can run locally (if your device is powerful enough for it), and it requires a lot of work to get things right because, unlike the others, it is closer to a synth than to an LLM where you enter prompts. (Though in their latest version they are also offering something similar to LANDR’s Layers, which kind of sucks, but you can use the program without using that feature.)

There are also tools that use ML in the mixing and mastering process, like those by iZotope, which are already a staple in the industry; I use RX daily in post-production. Most people (me included) use stem separation (also locally) in one way or another.

So we can see there are various degrees of Ai involvement depending on the kind of tool used. And some of them are more costly in terms of processing power and compute: the tools that run locally on your computer use a lot less than those hosted on servers in some data center somewhere, burning oil and water with every prompt.


On the bright side, Bandcamp announced it will not allow Ai generated music to be sold on its site. I think this is a good thing. Today RA PRO put out a piece highlighting the arguments of some avant-garde artists who have trained their own models on their own music and are understandably worried their music will no longer be sellable on Bandcamp. We still do not know how Bandcamp plans to enforce the rule, and while I am generally for the ban, I do worry about how it will impact artists like them, who are real artists and not slop generators trying to make a quick buck. (But I am sure that they, having reached the point they have in their careers, with a significant body of work, enough to train an ML model that can output consistently usable material, will certainly have enough pull to get their followers to purchase the material directly from them or on any other platform. Not every musician has the luxury of building something as impressive as they have, so they can do without Bandcamp, I guess? No loss there for any of the parties involved 🙂 )

With the exception of ACE Studio’s vocal synth engine, genAi offerings are more of a gamble than a tool. It’s like a slot machine: you pull the lever hoping something useful will come out, but it seldom does. And that’s before we get to the fact that, by their very nature, these things can never create anything new or groundbreaking. There is no flexibility to mould things to your liking. It sucks.

Ai sucks.


Which brings us back to fascism.

Fascism is a nebulous ideology. It does not like being pinpointed. It likes being hard to discern. Hiding behind other ideologies, waiting for its moment to attack. And, like with polar bears, by the time you notice its presence, it’s too late.

It is here now. It is in power. And it no longer bothers to hide its presence.

The killings of Renee Good and Alex Pretti, and of the many others killed by the American Gestapo, ICE, this year alone, are a prime example of this. The administration fully supports this and is trying to bend the narrative to fit its justifications. They are gaslighting the entire world with their blatant lying, trying to portray the victims as terrorists or agitators who deserved to be killed.

Why do they like Ai? The lying fascists love the lying machine because it helps them lie more effectively. The fascist is concerned with appearances. After the ICE agent murdered Renee, the modified Ai image of the car, in which it falsely appears to be driving towards the ICE agent, made the rounds. “She wanted to run him over, you see”, “She is a terrorist”. When they arrested lawyer Nekima Levy Armstrong for filming, as a member of the press, a protest taking place at a church whose pastor is involved with ICE, the White House published an Ai-altered image of her arrest in which she is crying her guts out, when in reality Nekima was totally dignified, as the video of her arrest shows.

Fascists love portraying themselves as strong, special, composed, worthy; and their enemies as emotional, weak, worthless. Somehow in their minds they believe that one group of people has more value than the other(s), and by dehumanising the other, whatever violence they inflict upon them is, in their minds, justified. “but they are weak, you see, us killing them is actually bAseD“. Perhaps it is a way for them to rationalise their horrendous worldview and be able to sleep at night. Who knows. But it is no wonder Nazi losers like Musk love Ai. Hey, it tells them they are awesome, and on top of that it even generates their preferred type of porn! Chef’s kiss.

Alright, but that’s only for the US, right? That does not affect us in other countries, right?

The fascists have alliances all over the world, and, you know, they are looking at the US and taking notes. Have you heard the phrase that goes something like “when the US sneezes, the rest of the world catches a cold”?

Yes, fascism has been surging all over the world. There has been a very broad network of people on the ground and in the media spreading hate towards immigrants in Europe for decades. Even in nicher communities they are working on spreading their message of hate, radicalising young people everywhere. Far-right parties affiliated with Trump are among the biggest political forces, or are currently governing, in various countries: Israel, the UK, Hungary, Italy, Argentina… The youth in most western countries support their local far-right parties. Social media feeds and the chatbots are in the control of fascists or fascist-adjacent pieces of shit. They are in your eyes, in your ears, shaping your opinions, shaping the opinions of the youth and how they see the world…

Fascism is exploiting the failures of decades of neo-classical economics. Privatisation, deindustrialisation and alienation have left the west in shambles, and the youth see no future ahead of them. It is a recipe for disaster that fascists use to their advantage. Since the fascists are not bound by truth or ethics, they just lie about the reasons life has gotten worse and worse, blaming it on trans people, the foreigners, the non-whites, the Muslims, the homosexuals, women. And there is no real counter-messaging to that coming from liberals anywhere (not that that is in the least shocking; don’t forget that, when push came to shove, the liberals sided with the Nazis against the socialists, the communists and the trade unionists). And with the lying machines the fascists are at a considerable advantage. They are running with it!

Here is to the Ai companies going bankrupt sooner rather than later 🍷