AI scams surge: how consumers and businesses can stay safe (www.techradar.com)

🤖 AI Summary
A new wave of AI-powered scams is rapidly escalating in sophistication and reach: generative models are being used to create convincing fake customer-service numbers, websites, emails, chatbots, deepfake videos and voice clones that impersonate executives, support agents or loved ones. The World Economic Forum now ranks AI-generated misinformation as the top global risk for the next two years.

Attack types include ClickFix attacks, where users are tricked into pasting malicious code into a browser or terminal (up 517% between late 2024 and mid-2025, and now roughly 8% of blocked incidents per CyberPress.org); malicious QR codes (1.7M unique detections last year); and classic phishing (an estimated 3.4 billion emails daily). Microsoft and other threat teams report thousands of attacks per day, and U.S. losses reached $16.6B in 2024.

For the AI/ML community, this matters because the same generative and voice-synthesis technology powering innovation is lowering the barrier to highly personalized, automated social engineering at scale. Technically, attackers combine model-driven content generation with search/ad manipulation and simple user-action exploits (e.g., paste-to-execute) to bypass conventional defenses. The consequences extend beyond financial loss to brand reputation, regulatory exposure and employee morale. Defenders should prioritize anomaly detection, model-aware threat hunting, user education (never paste unsolicited code; verify contacts independently), MFA, least-privilege access and rapid incident response, while researchers accelerate tools that detect synthetic media and flag abuse patterns in real time.