PANews reported on November 7th that, according to a recent report from the Google Threat Intelligence Group (GTIG), the North Korean hacking group UNC1069 is using AI models (such as Gemini) to develop and deploy malware targeting employees of cryptocurrency wallet providers and exchanges. The report indicates that this malware uses large language models (LLMs) to dynamically generate or conceal malicious code at runtime, an "on-the-fly code generation" technique intended to evade detection and enhance its attack capabilities.
In particular, malware families such as PROMPTFLUX and PROMPTSTEAL show a trend of integrating AI directly into operations: PROMPTFLUX calls the Gemini API every hour to rewrite its own code, while PROMPTSTEAL uses the Qwen model to generate Windows commands.
UNC1069's activities included targeting wallet application data, accessing encrypted storage, and generating multilingual phishing scripts to steal digital assets. Google stated that it has disabled the accounts involved and strengthened safeguards on model access, including improved prompt filtering and enhanced API monitoring.
The report warns of the risks of AI being misused in cyberattacks, particularly given the growing threat to the cryptocurrency sector.
