Artificial Intelligence and Extremism: “Cybertruck Bomber”

January 13, 2025
Alexandra Kuzmina  —  CEP Intern

The case of Matthew Livelsberger, a former U.S. Army Green Beret who used a Tesla Cybertruck to set off an explosion outside the Trump International Hotel in Las Vegas, has brought broader public attention to the intersection of artificial intelligence (AI) and extremist activities. Livelsberger reportedly used generative AI tools, including ChatGPT, to aid in the planning of his actions. While this incident marks the first known criminal use of ChatGPT of such magnitude on U.S. soil, the phenomenon of leveraging AI for malicious purposes is neither new nor entirely unexpected.

The intersection of AI and extremism has been a growing concern for academics and policymakers for years, and has drawn even more attention since ChatGPT hit the market roughly two years ago. Researchers and practitioners have long predicted that AI would be weaponized to enhance extremist activities, from recruitment to propaganda generation to attack planning. Yet these predictions offer little reassurance, because policing and governing bodies have evidently not prepared sufficiently to address the threat.

During the 2023 Dublin riots, far-right groups employed AI-generated imagery to glorify violence and amplify their narratives on social media. In another case, deepfake technology was weaponized to disseminate fabricated audio of President Joe Biden discouraging voter participation. Meanwhile, in Gaza, Hamas and its supporters leveraged generative AI to produce propaganda and disinformation, deceive credulous audiences, and fabricate engagement online. Furthermore, rogue AI tools like WormGPT, FraudGPT, and DarkBART have emerged in the criminal underworld, enabling everything from phishing attacks to the creation of malware. These tools lower the barriers to entry for individuals with malicious intent, making sophisticated cybercrimes accessible to those with minimal technical expertise.

Up until this AI-driven optimization, we could rather plausibly assume that at least some violent extremists were put off by the sheer effort it would take to prepare and execute an attack. In other words, a degree of laziness. This is certainly not to say there are too few violent incidents, but rather to observe that the methodologies used, and their aspirational scale, have not changed much; a lack of imagination, so to speak. To illustrate the point, even the most influential terrorist attack, 9/11, was not original: hijacking planes had been employed as a tactic in multiple instances before, most notably by the Popular Front for the Liberation of Palestine (PFLP) during the 1970s. To this day, however, plane hijacking remains the most logistically complex of extremist methodologies. The coordination, planning, and execution required for such an operation, as demonstrated in the 9/11 attacks, marked a peak in operational complexity rather than a breakthrough in originality, and this is exactly why we do not see many more such examples.

AI, in this sense, is a game changer. Not only does ChatGPT (or its aforementioned darker counterparts) ultimately have “knowledge” of all possible methods and tactics, it is also capable of adapting them to real-life contexts, suggesting improvements, assessing plausibility, and giving step-by-step guidance more coherent than anything found on a dark web forum. It also makes it possible to gather the necessary information without raising red flags: in the case of OpenAI/ChatGPT, law enforcement is granted neither real-time monitoring nor unrestricted access to user interactions. Admittedly, authorities do not have direct access to browsing data either, but chatbots like ChatGPT are subject to a different legal framework than browsers, and that framework has not been extensively tested in practice.

The “Cybertruck bomber” case is interesting for a slightly different reason, however. It stands as evidence of AI being used to assist extremist activities that cross into the physical realm, and it certainly has (or should have) broader implications. Yes, it also demonstrates the efficient use of AI for research that would otherwise have taken Livelsberger much longer and raised red flags along the way. But it is this speed, from idea to execution, that should be most concerning.

Historically, the logistical complexity of planning violent attacks acted as a natural barrier. The manual effort required to research methods, source materials, and coordinate logistics meant that the process took time, introducing critical moments where either the perpetrator could reconsider their actions (perhaps a naïve hope) or authorities could detect and intervene in the preparatory stages. The use of AI, however, has optimized these preparatory stages to such an extent that the window for both self-reflection and external intervention is rapidly closing. AI tools can now assist with everything from generating instructional materials and simulating attack scenarios to automating parts of communication and operational planning, leaving minimal time for internal hesitation or external disruption.

And so, what was once a slow, deliberate chain of actions requiring sustained intent and commitment can now be compressed into a near-seamless flow of automated decision-making, reducing the psychological distance between planning and violence. The danger thus lies not just in the sophistication of AI tools but in how they erode the organic friction that previously delayed, or even prevented, attacks from materializing. That is perhaps the key conclusion to draw here.

Daily Dose

Extremists: Their Words. Their Actions.

Fact:

On October 7, 2023, Hamas invaded southern Israel where, in the space of eight hours, hundreds of armed terrorists perpetrated mass crimes of brutality, rape, and torture against men, women, and children. In the biggest attack on Jewish life in a single day since the Holocaust, 1,200 people were killed and 251 were taken hostage into Gaza, where 101 remain. One year on, antisemitic incidents have increased in record numbers.
