Microsoft says it has detected hackers from China, Russia and Iran using its AI technology.
State-affiliated hackers from Russia, China and Iran have been using tools from Microsoft-backed OpenAI to hone their skills and deceive their targets, according to a report released on Wednesday.
In its report, Microsoft said it had tracked hacking groups affiliated with Russian military intelligence, Iran’s Revolutionary Guard, and the Chinese and North Korean governments as they tried to refine their hacking campaigns using large language models. Those computer programs, often referred to as artificial intelligence, draw on vast amounts of text to generate human-sounding responses.
The company announced the find as it rolled out a blanket ban on state-backed hacking groups using its AI products.
“In addition to any potential legal or terms of service breaches, our main concern is preventing the use of our technology by known threat actors whom we have identified and closely monitor,” said Tom Burt, Microsoft vice president for customer security, in an interview with Reuters ahead of the report’s release.
Diplomatic officials for Russia, North Korea and Iran did not immediately respond to messages seeking comment on the allegations.
Liu Pengyu, a spokesperson for China’s embassy in the United States, said the country opposes baseless attacks and allegations against China and advocates the responsible and manageable use of AI technology to improve the common welfare of humanity.
The disclosure that state-backed hackers have been caught using AI tools to help sharpen their spying capabilities is likely to underscore concerns about the rapid proliferation of the technology and its potential for abuse. Senior cybersecurity officials in Western countries have warned since last year that rogue actors were abusing such tools, though until now there has been little concrete evidence.
Bob Rotsted, who leads cybersecurity threat intelligence at OpenAI, said this may be the first time an AI company has publicly disclosed the use of its technologies by cybersecurity threat actors.
OpenAI and Microsoft described the hackers’ use of their AI tools as early-stage and incremental. Burt said neither company had observed cyber spies making any major breakthroughs.
He said the companies had seen the hackers using the technology just as any other user would.
The report described how the hacking groups used the large language models in different ways.
Microsoft said hackers thought to be working on behalf of the Russian military intelligence agency widely known as the GRU used the models to research “different satellite and radar systems that could be relevant to traditional military activities in Ukraine.”
Microsoft said North Korean hackers used the models to generate content likely intended for targeted phishing campaigns against regional experts, while Iranian hackers leaned on the models to write more convincing emails, at one point using them in an attempt to lure “prominent feminists” to a deceptive website.
The software company said Chinese state-backed hackers were also experimenting with the large language models, using them to ask questions about rival intelligence agencies, cybersecurity issues, and “notable individuals.”
Neither Burt nor Rotsted would say how much activity was involved or how many accounts had been suspended. Burt defended the zero-tolerance ban on hacking groups, which does not extend to Microsoft offerings such as its search engine Bing, citing the novelty of AI and concerns over its deployment.
He said the technology is both new and incredibly powerful.
Source: voanews.com