Attackers are using large language models such as ChatGPT to make their existing attacks even more sophisticated.
Recent research from Microsoft and OpenAI reveals that cybercrime groups, well-known state-sponsored hacker groups and other malicious actors are using large language models to develop AI-powered attacks. According to the report, groups backed by Russia, North Korea, Iran and China are using LLMs such as ChatGPT to refine their existing attack techniques.
According to Microsoft’s blog post, Strontium, a group tied to Russian military intelligence, uses LLMs to research technical topics such as satellite communication protocols and radar imaging technologies. The group, which has previously been linked to operations during the Russia-Ukraine war and the 2016 US presidential election, also uses LLMs to automate or optimize key tasks.
Thallium, a North Korean hacker group, uses LLMs to support phishing campaigns. Similarly, Curium, an Iranian group, uses LLMs to craft scam emails and to evade antivirus applications. Chinese state-backed hackers are likewise using LLMs for research, scripting, translation and improving their existing tools.
Experts warn that AI-powered attacks are likely to become even more prevalent in the future. For this reason, companies such as Microsoft are focusing on AI-based security solutions to prevent and detect them. Still, security experts stress that individuals and organizations alike should stay alert and take their own measures to protect against cyber threats.