Will AI Take Your Job? An examination of what historical class warfare teaches us about increases in productivity, and of how fascist ideology leverages artificial intelligence to extend its reach into the mainstream.
What first reached us as a lazy way to unlock our phones has since spread into most corners of life. The technology that powers FaceID has evolved into a means of producing an abundance of cheap internet content. This technology, however, does not exist in a vacuum. These systems are tools - and, increasingly, weapons - used by the wealthy and influential to consolidate their power in society.
In addition, artificial intelligence (AI) has become a source of anxiety: the fear of losing one's job, fear of deepfakes, fear of floods of AI-generated content destroying the internet and killing the arts, fear of a Tesla running over children on their way home from school.
These are not abstract fears. They are real. As always, the most vulnerable are hit hardest: precarious workers, racialized communities, the global poor, and the working class.
However, none of it is new. Every wave of technological progress in our capitalist society has followed the same pattern: increased productivity followed by greater exploitation, paired with repression and attempts to control the working class.
Not Every AI Is the Same: Establishing a Common Understanding of AI
Currently, when people talk about AI, they are usually referring to two things:
1. "Chatbots", i.e. Large Language Models (LLMs) like GPT or Gemini
2. Generative AI (genAI) like Google Veo, Midjourney, and Sora
Simplified, an LLM can be understood as a statistical autocorrect: the model predicts the next word based on the user's input and responds accordingly. GenAI is similar, but instead of words, it generates music, art, or video content.
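The "statistical autocorrect" idea can be sketched with a toy bigram model. This is a deliberately minimal illustration with a made-up corpus, not a real LLM - real models predict over tokens with billions of parameters - but the principle of "predict the statistically likely next word" is the same:

```python
from collections import Counter, defaultdict

# Toy illustration (not a real LLM): a bigram model that "predicts"
# the next word purely from co-occurrence counts in its training text.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    # Return the statistically most frequent follower of `word`,
    # or None if the word never occurred with a successor.
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # "cat" -- it follows "the" most often in the corpus
```

Scaled up from word pairs to enormous context windows and trained on much of the internet, this is the core mechanism behind the chatbots discussed here.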
However, AI is not limited to chatbots and image generators. Older forms, such as computer vision models, facial recognition, and algorithmic recommendation engines, still operate in the background. While these systems also rely on machine learning, genetic algorithms, or neural networks, they are no longer marketed or perceived as "AI" in the same manner, i.e. in such an exciting or frightening way.
In an article on Slate, Meredith Whittaker, president of the Signal Foundation, succinctly describes AI as follows:
"What we’re calling machine learning or artificial intelligence is basically statistical systems that make predictions based on large amounts of data."
These systems are funded and used by the wealthy and powerful. The data used to train these systems reflects this, resulting in an extension of that power and the continued exploitation of the working class, the poor and marginalized.
On What Data AI Is Trained Matters. A Lot.
In the same article, Whittaker continues:
"We’re talking about data that was gathered through surveillance, or some variant of the surveillance business model, that is then used to train these systems, that are then being claimed to be intelligent, or capable of making significant decisions that shape our lives and opportunities - even though this data is often very flimsy."
Flimsy is an understatement here, even if we set aside for a moment the ethics of illegally sourcing training data from unlicensed public or even private sources. We must still consider the impact that training data has on the output of AI systems - or any big data system, for that matter.
The most obvious and frequently discussed point is that the training data is biased. Training data often reflects the same prejudices and biases related to race, gender, class, and so on, that permeate our societies. And this leads to bias in the AI itself, with only 12% of companies working on equity and fairness in their models.
For example, a 2018 study found that facial recognition algorithms from companies like IBM and Microsoft misidentified women with darker skin tones up to 34% more frequently than men with lighter skin tones. Just last year, a woman of color was banned from grocery shopping because facial recognition software incorrectly identified her as a criminal.
Another example: AI for skin cancer detection has been insufficiently trained on images of people of color, especially Black people. This mirrors the fact that Black people diagnosed with melanoma are about five times as likely as white people to die within five years, owing in part to a lack of awareness of how melanoma typically presents on Black skin.
Those are only two examples; there are countless more, plus an unknown number of unreported cases. These biases have concrete, real-life impact.
In The German Ideology, Marx and Engels argue that in every era, the dominant thoughts are those of the ruling class. In other words, the class that holds power in society also holds its intellectual power. This means that AI models are trained with the ideas and thoughts of white Western societies, which have a well documented history of colonization, exploitation, racism, classism etc.
This also means that AI cannot produce original thought; it can only replace the commodified part of creative work. For example, it replaces an artist's ability to exchange their skills for money, but it cannot replace the originality of their work or their thoughts.
AI being trained on the thoughts of the ruling class is exemplified by the fine-tuning of training data for image data sets, which relies on surveys of WEIRD (Western, Educated, Industrialized, Rich, and Democratic) users. This practice creates feedback loops and echo chambers of biased worldviews.
Going even further, the tech plutocrats of our time explicitly manipulate their AIs to fit their worldviews. Just six months after Elon Musk performed the Nazi salute himself, his chatbot Grok - as of this writing the most used LLM worldwide, according to llm-stats.com - began to praise Hitler and went full Nazi. As usual in times of crisis, capitalism turns towards fascism.
The fascist ideology behind AI
If you now think that this is just an isolated case of Elon going off the rails, do not be mistaken. In reality, there are multiple deeper levels of ideology behind the current push towards AI systems. At the 2025 re:publica, an annual German conference about the web and digital society, this was one of the focus topics. We will explore two of these levels by referencing the corresponding talks. Definitely check out the full versions.
The first talk, by Rainer Mülhoff, outlines how the current behavior of Trump and Musk demonstrates an anti-democratic desire for political power within the tech industry.
Mülhoff notes that fascism has always aligned itself with modern technology companies. For example, the Nazis relied on IBM's newly developed punch card systems to administer and accelerate their industrialized program of mass extermination. He further argues that AI is a fundamentally human sorting technology, which aligns with central tenets of fascist ideology.
As previously mentioned, the current tech plutocrats believe that artificial intelligence will inevitably take over. This idea is rooted in transhumanism and has deep ties to natural selection, eugenics, and IQ fetishization.
According to Mülhoff, this is followed by a supposed ethical turn through two influential philosophies. Effective altruism presents itself as a utilitarian framework that seeks to quantify the moral worth of human actions through mathematical and economic cost-benefit analyses. Longtermism, in turn, holds that the paramount moral priority is to ensure that "Earth-originating intelligent life" realizes its potential on a cosmic scale.
In an interview, historian and journalist Dr. Émile P. Torres explains the danger of these philosophical frameworks. In short, they redirect moral and political attention away from urgent crises such as inequality or climate change. Instead, they provide ethical cover for morally dubious decisions in the present, justified in the name of preventing speculative "existential risks" - from dystopian, matrix-like apocalypses to utopian visions of "astronomical future value". These philosophies appeal especially to tech-based plutocrats like Elon Musk and Peter Thiel.
From there, Mülhoff argues, the vision becomes openly political: movements like Neoreaction (NRx) and the "Dark Enlightenment" reject liberal democracy as an obstacle to progress and imagine authoritarian alternatives, such as CEO-states or tech monarchies, often blending with alt-right currents. These concepts have extended their reach into the White House through JD Vance, and to the EU Commission via the Future of Life Institute.
How Fascist World Views are Reproduced by Generative AI
A second dimension, explored in a talk by Roland Meyer, professor of Digital Cultures and Arts, is how the aesthetics of generative AI regurgitate these underlying fascist ideologies by exploiting biased training data.
According to Meyer, AI acts as an archive of past human artistic and literary creation. It therefore produces combinations of historical and expected patterns - patterns that are, moreover, thoroughly class- and white-washed. This connects directly to the fascist fetishization of tradition and of a supposedly better past.
This can be observed particularly in the AI slop that Donald Trump, Elon Musk, Javier Milei, the AfD, and others use to propagandize the "unspoken truth" or to cast themselves as heroes and saviors. The culmination thereof is the infamous Gaza video, which perverted ethnic cleansing and genocide into a happy investment opportunity for luxury holiday resorts. Other examples include the depiction of Kamala Harris in a communist uniform or racist propaganda in election campaigns.
The aim of these people is not to misinform the public, but to visualize an image of perceived truths and objectivity, which in reality are just reproduced stereotypes.
The image worlds generated by AI in digital fascism are not primarily about disinformation, but above all about managing emotions and producing right-wing “emotional communities”. - Simon Strick
AI is not only a tool, but it is a political project based on mass abuse, exploitation, theft and appropriation. In its center: the interests of capitalists with fascist worldviews.
How AI is Used to Exploit the Working Class
The most obvious exploitation is that of illegally obtained training data, as touched upon earlier. Most AI models have been trained on unlicensed copyrighted data - like OpenAI's image generator, which has been trained to emulate Studio Ghibli's style. Even Adobe failed in its attempt to "cleanly" train its AI models solely on its own licensed stock images. However, there is a more concerning dimension to these licensing problems.
Where previously a single student had to pay over $600,000 for hosting songs on Napster, AI companies can now fleece artists and authors against their will - exemplified by Meta reportedly training its models on Library Genesis, a shadow library that enables free access to content that is otherwise paywalled or undigitized. Meta justified its actions by arguing that individual copyrighted books hold no economic value on their own. When confronted with this kind of exploitation, Sam Altman, the CEO of OpenAI, arrogantly replied to an applauding audience: "You can clap about that all you want. Enjoy."
This demonstrates that under capitalism, the law serves and protects capital, not people. Shortly after taking office, Trump fired the head of the US Copyright Office, Shira Perlmutter, who had - coincidentally - just released a report examining whether companies may use copyrighted materials to train their AI systems.
A short AI conversation consumes up to 0.5 liters of water and produces up to 5 grams of CO2. Elon Musk goes beyond that and powers data centers with methane gas generators, exacerbating the environmental impact even further. Moreover, data centers are often built in low-income, minority-led communities, resulting not only in hazardous air and water pollution but also in direct health consequences for these communities, such as a fourfold increase in cancer rates.
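To get a feel for how the per-conversation figures above add up, here is a rough back-of-the-envelope calculation. The daily query volume is a purely hypothetical assumption for illustration; only the per-conversation upper bounds come from the text:

```python
# Back-of-the-envelope sketch using the per-conversation upper bounds above.
# The daily conversation volume is a HYPOTHETICAL assumption, not a real figure.
water_per_conversation_l = 0.5        # liters (upper bound, from the text)
co2_per_conversation_g = 5            # grams (upper bound, from the text)
conversations_per_day = 100_000_000   # hypothetical volume for illustration

daily_water_l = water_per_conversation_l * conversations_per_day
daily_co2_t = co2_per_conversation_g * conversations_per_day / 1_000_000  # g -> tonnes

print(f"{daily_water_l:,.0f} L of water and {daily_co2_t:,.0f} t of CO2 per day")
```

Even under this made-up but modest volume, the aggregate footprint reaches tens of millions of liters of water per day, which is why the siting of data centers matters so much to the communities around them.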
Furthermore, the production of specialized hardware, such as GPUs and chips, depends on mining rare earth metals and other finite resources. These resources are often mined under brutal conditions involving child labor, unsafe practices, environmental devastation, and colonial structures. This dynamic perpetuates racialized inequities, dispossessing marginalized communities of land and health while allowing the western bourgeoisie to reap the benefits.
More information about the exploitation of sustainability in all three of its dimensions - ecological, social, and economic - is given by Mülhoff in a talk at 37C3.
A second form of exploitation comes with the use of AI as a tool on the job. Historically, automation has cost jobs, created jobs, and displaced jobs. What it has always done, however, is increase the productivity of the worker - and while doing so, capitalism has managed to turn technological progress into exploitation.
In a logically functioning society, technology that automates work would be considered a godsend. Society could produce the same amount of goods while requiring less human time. As Joanna Maciejewska put it:
I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.
In a capitalist society, however, this instead means either more exploitation or people being fired, rather than everyone working fewer hours - whether in wage labor or care work. This effect is called the decoupling of wages from productivity. Karl Marx sums it up well:
Like every other increase in the productiveness of labor, machinery is intended to cheapen commodities, and, by shortening that portion of the working-day, in which the laborer works for himself, to lengthen the other portion that he gives, without an equivalent, to the capitalist. In short, it is a means for producing surplus-value.
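Marx's point can be put as a hypothetical arithmetic sketch (the hours are made-up illustration): if machinery halves the "necessary labor" - the portion of the day in which the laborer works to cover their own wage - while the working day stays the same length, the entire productivity gain lands in the surplus portion:

```python
# Hypothetical arithmetic sketch of the quote above: a fixed working day
# split into "necessary labor" (covers the wage) and surplus labor.
working_day_hours = 8

def surplus_hours(necessary_labor_hours):
    # Hours worked beyond what reproduces the wage go to the capitalist.
    return working_day_hours - necessary_labor_hours

before = surplus_hours(4)  # 4h cover the wage, 4h surplus
after = surplus_hours(2)   # machinery doubles productivity: only 2h cover the wage
print(before, after)       # 4 6 -- the productivity gain becomes surplus value
```

Nothing forces the working day to shrink from 8 to 6 hours; under the logic Marx describes, the freed-up time is converted into surplus rather than leisure.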
A recent study by David Marguerit shows that "[...] automation AI negatively impacts new work, employment, and wages in low-skilled occupations, while augmentation AI fosters the emergence of new work and raises wages for high-skilled occupations. These results suggests that AI may contribute to rising wage inequality."
It must also be mentioned that the current abilities of AI are still over-hyped and overstated when it comes to productivity improvements in highly skilled labor. While entry-level jobs in particular are endangered by AI, more skilled jobs do not yet see the same productivity boosts. For example, senior software developers take 19% longer when using AI than without it. This is backed by Marguerit's study, which suggests that "the current capabilities of AI algorithms may not yet be advanced enough to entirely replace the content of occupations". It is further underlined by a study from Zhou et al., which finds that advanced models "tend to give an apparently sensible yet wrong answer much more often, including errors on difficult questions that human supervisors frequently overlook."
The third kind of exploitation aims at solving this "issue" through the deskilling of labor. Deskilling is the process of lowering the amount of skill, as well as the socially necessary labor time, required to produce goods. Another study found that "reliance on AI systems can lead to a passive approach and diminished activation of critical thinking skills when the person later performs tasks alone." When high-skill labor becomes low-skill, AI also removes the ability of the skilled to access wealth, while allowing wealth to access skill.
Moreover, this shifts market dynamics in favor of capital. Currently, highly skilled individuals hold power in negotiations because their skills are in high demand and short supply. If AI makes these skills widely available, however, employers will be able to lower wages because workers will be more easily replaceable - all while low-skilled jobs are replaced entirely.
Thus, the deskilling brought by AI is not merely a matter of efficiency but also a restructuring of social relations. Capital appropriates skill itself, transforming it into private property while rendering labor more replaceable, powerless, and estranged. This demonstrates that AI functions more as a mechanism for deepening capitalist domination over labor than as a neutral tool.
AI and the Alienation of Labor
AI also affects the working class in ways beyond direct exploitation: it intensifies the alienation of the worker from their labor. Marx described three main forms of alienation: from the product of labor, from the labor process, and from one's species-being.
Workers become alienated from the product of their labor when their contributions are overshadowed by AI-generated content. This leaves them unable to see their own labor in the final result.
They also become alienated from the labor process when their activity is reduced to AI prompt engineering, which strips away their autonomy and creativity.
Finally, they become alienated from their species-being because skill and purposeful activity are central to human thriving. Once these capacities are absorbed by AI, workers are denied the opportunity to express their humanity through labor.
How do we react?
AI is not inherently bad; it's the system in which it exists that abuses it for profit, competition, antagonistic class relations, and imperialist power interests. End users like you and me are not responsible. In a different society, AI would help solve societal problems. It would allow us to produce the goods necessary for our well-being in less time, providing us with more freedom to express ourselves and do what we love, like art, sports, and connecting with others.
Regulation will not solve this problem. The ruling class will not change the system that benefits them and is fed by their thoughts and ideals. We, the masses of working people, have the power to reclaim our freedom. We must unite and take control of AI and the companies that control it.
The main questions are: Who develops AI? Who owns it? Who uses it? Who owns the companies where AI is developed and used? Who is in charge there? Who profits from it? The answer should be: We.