The rise of deepfakes has turned from a fringe technological curiosity into one of the most pressing cybersecurity issues heading into 2026, according to new predictions from Kaspersky. As AI adoption accelerates across the Asia Pacific (APAC), the region is becoming both a proving ground for innovation and a testing arena for increasingly sophisticated cyber threats.
With 78 per cent of professionals in APAC using AI at least weekly, compared with 72 per cent globally, the scale and speed of adoption are amplifying the risks associated with synthetic content, forcing businesses and governments to rethink digital trust and resilience strategies now. For business owners and policymakers, this means prioritising AI risk assessments and embedding deepfake awareness into national and corporate cybersecurity roadmaps.
Deepfakes are no longer limited to manipulated videos of public figures; they are becoming a mainstream phenomenon encountered by employees, consumers and organisations alike. Kaspersky notes that awareness of deepfake risks is growing, with companies increasingly training staff to recognise synthetic content and reduce the likelihood of fraud.
As deepfakes appear in more formats, including video, images, voice and text, they are becoming a "continuous element of the security agenda," requiring structured policies rather than ad hoc responses. Leaders should respond by formalising internal training programmes, updating incident response plans and mandating verification processes for sensitive communications.
The threat is compounded by rapid improvements in deepfake quality and accessibility. While visual deepfakes are already highly convincing, Kaspersky predicts major advances in lifelike audio, a key enabler of voice-based scams and impersonation fraud. At the same time, the barrier to entry is falling sharply, with non-experts now able to generate mid-quality deepfakes in just a few clicks.
Also Read: AI's biggest bottleneck isn't intelligence but fragmentation: i10X co-founder
This democratisation of creation tools means cybercriminals no longer need advanced skills to launch convincing attacks at scale. To counter this, organisations should invest in multi-factor authentication, out-of-band verification, and stricter approval workflows for financial and executive-level requests.
Efforts to label AI-generated content are expected to intensify in 2026, but progress remains uneven. There is still no unified or reliable system for identifying synthetic content, and existing labels can be easily removed or bypassed, particularly in open-source environments. As a result, Kaspersky anticipates new technical and regulatory initiatives aimed at addressing the problem, though enforcement will lag behind innovation. Policymakers should collaborate across borders to establish minimum standards for AI content labelling, while businesses should not rely solely on labels and should instead adopt layered detection and verification controls.
More advanced forms of deepfakes, such as real-time face and voice swapping, will continue to evolve, even if they remain tools for technically skilled attackers. While widespread use is unlikely in the near term, Kaspersky warns that risks will grow in targeted scenarios, including executive fraud, espionage and political manipulation. Increasing realism and the use of virtual cameras will make these attacks harder to detect and more persuasive. High-risk organisations should conduct threat modelling for targeted deepfake attacks and limit the public exposure of executive audio and video wherever possible.
The growing use of open-weight AI models is also blurring the line between legitimate and malicious applications. As these models approach the capabilities of closed systems in cybersecurity-related tasks, they offer more opportunities for misuse due to weaker safeguards. At the same time, AI-generated phishing emails, fake websites, and synthetic brand assets are becoming increasingly indistinguishable from legitimate content, especially as companies themselves adopt AI in their marketing and communications. Businesses must strengthen brand protection, monitor for impersonation and educate customers on official communication channels to reduce fraud risks.
“Attackers are using it to automate attacks, exploit vulnerabilities, and create highly convincing fake content,” said Vladislav Tushkanov, research development group manager at Kaspersky. “At the same time, defenders are applying AI to scan systems, detect threats, and make faster, smarter decisions.”
Also Read: The ASEAN AI rush: Why "move fast and break things" is a dangerous strategy for risk
For the APAC region, the stakes are particularly high. “Asia Pacific is setting the global pace for AI adoption,” said Adrian Hia, managing director for Asia Pacific at Kaspersky. “This momentum is creating massive opportunity, but also redefining how cyber threats emerge and scale.”
As deepfakes cement their place as a top cybersecurity concern of 2026, resilience will depend on preparation rather than reaction.
Kaspersky recommends regular data backups, isolated from networks, and the use of advanced security platforms to detect and neutralise complex threats. These steps, which policymakers and business leaders alike must champion, are crucial to safeguarding trust in an AI-driven economy.
—
The lead image in this article is AI-generated.
The post Kaspersky: Deepfakes emerge as a top cybersecurity concern for 2026 appeared first on e27.