State-backed hackers from Russia, China, Iran, and North Korea have been leveraging tools developed by Microsoft-backed OpenAI to enhance their cyber espionage capabilities.
Microsoft disclosed on Wednesday that hacking groups affiliated with Russian military intelligence, Iran's Revolutionary Guard, and the Chinese and North Korean governments had been using large language models, a form of artificial intelligence, to refine their hacking techniques. These models are trained on vast amounts of text and generate responses that closely resemble human language.
In response to the findings, Microsoft imposed a blanket ban on state-backed hacking groups accessing its AI products, regardless of whether they have violated any laws or its terms of service. According to Tom Burt, Microsoft's Vice President for Customer Security, the company aims to prevent threat actors from exploiting this technology for malicious purposes.
While Russian, North Korean, and Iranian diplomatic officials have not yet commented on the allegations, China's US embassy spokesperson Liu Pengyu rejected the accusations.
Microsoft further detailed the ways in which these hacking groups used large language models, including researching military technologies, supporting spear-phishing campaigns, and crafting convincing emails to deceive targets.
Earlier this year, Microsoft warned that Russia, Iran, and China are likely to attempt to influence the 2024 elections in the United States and other countries. Microsoft's Threat Analysis Center has also reported that Iran has intensified its cyberattacks and influence operations since 2020.