Mandiant, the American cybersecurity company owned by Google, said on Thursday that it has seen increasing use of artificial intelligence in deceptive online influence campaigns in recent years, while the technology's use in other digital intrusions has so far been limited.
Researchers at the Virginia-based company have found “numerous cases” since 2019 where content created by artificial intelligence, such as fabricated profile images, was used in politically motivated online influence campaigns.
The report mentioned that these campaigns included activities by groups aligned with the governments of Russia, China, Iran, Ethiopia, Indonesia, Cuba, Argentina, Mexico, Ecuador, and El Salvador.
This comes amid a recent surge in generative artificial intelligence models, such as the well-known chatbot ChatGPT, which make it significantly easier to create convincing fake videos, images, text, and code. Security officials have warned that cybercriminals might employ such models.
Researchers at Mandiant suggest that generative artificial intelligence will enable resource-constrained groups to produce high-quality content at scale. Sandra Joyce, a vice president at Mandiant, said that a Chinese state-linked media campaign called “Dragonbridge,” for instance, has expanded dramatically across 30 social platforms and into 10 different languages since it first targeted pro-democracy protesters in Hong Kong in 2019.
However, the impact of such campaigns has been limited. Joyce said, “In terms of effectiveness, there haven’t been a lot of victories, they really haven’t changed the course of the threat landscape yet.”
China has denied past U.S. accusations of involvement in such campaigns.
Mandiant, which assists both public and private institutions in responding to digital breaches, said it has not yet observed cases where artificial intelligence played a significant role in threats from Russia, Iran, China, or North Korea.
Joyce stated, “So far, we haven’t seen a single incident where AI played a role; it hasn’t really been used in any practical way beyond what traditional tools can do.”
However, she added, “We can be very confident this will be a problem that escalates over time.”