“Chameleon” malware: the beginning of a new era
In 2025, malware capable of modifying its structure in real time to evade antivirus systems has emerged. These “digital chameleons” represent a major evolution in the threat landscape, foreshadowing a future in which traditional defences quickly become obsolete.
Artificial Intelligence at the service of criminals
According to the Google Threat Intelligence Group, cybercriminals are already exploiting advanced models — including Gemini, Copilot and ChatGPT — to create viruses capable of autonomously rewriting their own code.
This technique, known as “just-in-time self-modification,” allows malware to adapt at the last second, escaping signature-based security systems.
The role of state-backed groups in cyber operations
We are not just dealing with common criminals: players linked to North Korea, Iran, China and Russia are using AI to enhance reconnaissance, phishing, data theft and malware development.
In particular, North Korea leverages deepfakes and AI tools in the crypto ecosystem to distribute infected software and steal funds used to finance the regime.
The AI black market: subscribing to criminal tools
In 2025, the underground trade in AI-based tools has skyrocketed. Underground forums now offer services structured like legitimate SaaS platforms, complete with subscription tiers, Discord support and free versions.
These tools enable the creation of deepfakes to bypass KYC checks, generate phishing campaigns, identify vulnerabilities and develop complex malware.
Advertisements imitate legitimate marketing, thereby normalising the sale of technology designed for criminal use.
The big tech response: strengthened defences
Google has implemented countermeasures by disabling malicious accounts, revoking access to Gemini’s APIs and strengthening the security of its AI models.
Collaboration with law enforcement and analysis of collected data aim to make the misuse of artificial intelligence increasingly difficult.
Evolving threats: why traditional antivirus tools are no longer enough
Although these malware variants are still rare and in an experimental phase, they represent the future of cybercrime.
Security systems based on known-signature detection are less effective against dynamic, self-evolving viruses. Technologies capable of analysing suspicious behaviour and adapting quickly are essential.
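A minimal sketch illustrates why exact-signature matching breaks down against self-modifying code. The payload strings and signature database below are entirely hypothetical; the point is only that changing a single byte of a file produces a completely different hash, so a signature computed on one variant never matches the next:

```python
import hashlib

# Hypothetical signature database: SHA-256 hashes of known-bad payloads.
KNOWN_BAD = {
    hashlib.sha256(b"malicious payload v1").hexdigest(),
}

def is_flagged(payload: bytes) -> bool:
    """Classic signature check: flags only byte-for-byte known payloads."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD

original = b"malicious payload v1"
# A self-modified variant: same behaviour, one byte changed.
variant = b"malicious payload v2"

print(is_flagged(original))  # True  — exact signature match
print(is_flagged(variant))   # False — trivially evades the signature
```

This is why behaviour-based detection matters: a rewritten variant slips past the hash lookup even though what the code does is unchanged.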
A battle between Artificial Intelligences
AI is no longer just a tool to enhance attacks: it is now an integral part of malware, capable of changing strategy in real time.
The growing accessibility of advanced tools allows even inexperienced criminals to carry out complex attacks, increasing risks for users and businesses, especially in the financial sector.
The need for global AI security standards
To counter increasingly sophisticated threats, internationally shared security standards are necessary. Criminals will use any model available; therefore, defence and regulations must be coordinated, global and powered by artificial intelligence itself.
