Artificial intelligence commonly referred to as AI has become a buzz word in everyday life nowadays. Loosely defined, AI is a branch of computer science that studies the theory and development of computers and machines capable of carrying out tasks that typically require human intelligence, such as decision-making and problem-solving.
AI has brought significant advancements across various sectors by increasing efficiency, reducing operational costs, and augmenting human capabilities. However, just as industries and professionals have embraced these technologies, malicious actors have also been quick to exploit them. Today, violent extremist groups are leveraging AI to automate and scale their activities, particularly in content creation and online recruitment.
This is evidenced by reports on the Qimam Electronic Foundation (QEF), an Islamic State (ISIS) media organization that published an English and Arabic language guide for using AI. The goal is to teach its recruits on how to spread propaganda and provide technical guidance to carry out violent attacks on their behalf. Generative AI tools models that can create original content in response to a prompt or request are exploited by terrorists to create multimedia content laced with radical narratives that are shared on social media platforms. In early 2024, Al-Qaida announced its intention to begin hosting workshops to train its members in using AI for propaganda creation, demonstrating how such groups are quick to adopt emerging technologies.
Outside of content creation, algorithmic content recommendation models on social media platforms present an especially difficult challenge to combating the spread of violent extremist ideologies. Many social media algorithms use several AI models, including Collaborative Filtering (CF) and predictive modelling, to present users with content based not only on their preferences but also on those of other users. Consequently, if extremist content is uploaded to social media, regardless of whether it was created with AI tools or not those who have previously interacted with such content are more likely to engage with it.
what to do.
In the wrong hands, AI technology is a potential threat to physical, political and cyber security. However, all is not lost. Governments working with AI technology companies can institute laws and policies that not only restrict access to powerful AI tools but also but also how they are used. This measure is already in practice with examples of ChatGPT moderated to decline prompts on topics such as bomb making. Continuous training and updates of AI models to program them to adhere to human values mitigates the risks of misuse. Robust testing of AI tools before market debuts will ensure developers seal loopholes or ‘jailbreaks’; a term referring to manipulative tactics used by some users to circumvent content restrictions.
Government legislation on its part provides legal guardrails and an avenue to hold offenders accountable for their actions. Legal requirements such as mandatory risk audits and minimum safety requirements for AI developers prioritizes public safety and accountability.
Notably, traditional indicators of extremist activity, such as affiliation with known terrorist entities, may no longer be sufficient when increasingly younger assailants can access ideological justification and propaganda with minimal limitations. Nevertheless, proactive safety engineering and continuous testing against terrorist and extremist prompts can help with real-time detection of suspicious or harmful outputs. As AI tools grow increasingly powerful and universal, their use by violent extremists poses increased risks and presents unique challenges to parents, governments, and tech platforms alike.
Lastly, to address these challenges, a multi-pronged approach is essential. Governments, tech companies, and civil society must work together to develop ethical frameworks for AI use, invest in AI-powered counter-radicalization tools, and increase digital literacy among the public. As AI continues to evolve, so too must our strategies for preventing its misuse.