
North Korean Hackers Exploit ChatGPT to Boost Phishing Attacks

According to IT security researchers, a presumably state-controlled hacker group from North Korea has misused the AI application ChatGPT to significantly increase the success rate of its phishing attacks.

Phishing with fake documents

The attackers are said to have used the service to create a fake South Korean military document. The aim was to make phishing attacks appear more credible, the news agency Bloomberg reported, citing the South Korean security company Genians. With the help of the AI, the attackers created a deceptively realistic copy of a military ID.

The picture was used in a phishing email designed to get the recipient to click a malicious link. Instead of the promised graphic, however, the target website delivered malware capable of stealing data from victims' devices. Among the targets were South Korean journalists, researchers, and human rights activists who focus on North Korea.

The Kimsuky group, which western and South Korean authorities have linked to North Korean cyber espionage for years, is believed to be behind the campaign. According to US authorities, Kimsuky carries out global intelligence-gathering missions on behalf of the regime in Pyongyang.

The new findings illustrate how North Korea is increasingly integrating AI systems into its espionage activities. Only in August, the US company Anthropic reported that North Korean hackers had used the AI tool Claude Code to build fake identities and infiltrate large US technology companies.

OpenAI reacts

OpenAI, the operator of ChatGPT, had already announced at the beginning of the year that it had blocked accounts with suspected North Korean ties. These accounts had previously drawn attention through fake CVs, cover letters, and social media posts intended to recruit accomplices.

According to Genians director Mun Chong-Hyun, the case shows that modern AI applications can be used not only for classic tasks such as programming or text production, but also to plan attacks, develop malware, or impersonate others, for example by posing as recruiters.
