Meta, the parent company of Facebook and Instagram, has discovered multiple internet threat actors exploiting the public’s interest in AI tools, such as OpenAI’s ChatGPT, to promote and distribute malware.
Meta discovered more than 10 different malware families and over 1,000 malicious links, all being promoted as tools featuring the popular ChatGPT chatbot.
While some of the malicious tools provided no real functionality, others offered an experience similar to ChatGPT, tricking users into thinking they were using a trusted product.
Fake ChatGPT Versions
The functional malicious ChatGPT versions are reported to have left behind malicious files on users’ devices that can access sensitive data.
This tactic of exploiting user interest in an AI tool to promote malware is similar to previous attacks that exploited interest in cryptocurrencies to promote malicious crypto offers, a practice that has cost some users their entire cryptocurrency wallets.
Guy Rosen, the Chief Information Security Officer at Meta, stated at a press briefing that “ChatGPT is the new crypto” for hackers, and discussed the security dangers associated with generative AI technologies.
Rosen stated that Meta is already preparing its defenses for such technology, as it has the ability to create human-like writing, music, and art.
When asked whether AI technology could be used to create disinformation campaigns, Rosen said it is still too early for AI to be used in information operations. However, he expects “bad actors” to use generative AI tools to speed up or scale their operations.
Overall, Meta’s discovery of this widespread malware distribution campaign highlights the importance of cybersecurity awareness and the need to stay vigilant when downloading and using AI tools.
As AI technology continues to develop, it will be essential to address and mitigate the security risks associated with its use.