Grant's Story: $70,000, Five Months, and a Face That Never Existed
A Chicago-area man, who asked to be identified only as Grant Trust, met a woman named 'Kay' on Facebook Dating in 2024. What followed felt real in every way that matters. They talked every day for months and FaceTimed almost daily. She had a face, a voice, a personality. She remembered things he'd told her weeks earlier. She asked about his mother. She knew when he was stressed. He fell in love. The connection cost him $70,000, destroyed his ability to trust, and left him paying off debt for someone who never existed. 'Same face, different name, still active': that was the report he filed with authorities after learning Kay was an AI-powered scam profile. Days after he reported her, she was still on the platform, still operating. Grant is in therapy now. He hasn't told anyone in his life what happened.
Grant's loss is not unusual. According to the FBI's Internet Crime Complaint Center, Americans reported losing more than $672 million to romance and confidence scams in 2024 alone. LexisNexis Risk Solutions, which works with public agencies to combat digital fraud, estimates total romance scam losses in 2025 reached $3 billion, up from $1.2 billion in 2024 and $250 million in 2023. That 12x increase in two years is the clearest measure of what AI has done to an already dangerous crime.
What Changed: AI Turned a Slow Crime Into a Scale Operation
Traditional catfishing was labor-intensive. A scammer could run only a few relationships at once, risking exposure whenever they couldn't back a stolen photo with matching video or slipped up in their fake backstory. AI has industrialized that constraint away. Criminal organizations now use large language model-assisted scripts and automated sentiment analysis to adjust their tone based on how a victim responds. Translation tools enable cross-border targeting in any language. And most critically: the FBI's Norfolk Field Office confirmed in February 2026 that criminals now use AI-generated voice messages to make schemes more believable, with real-time deepfake video capability allowing scammers to appear on live video calls using completely fabricated faces.
The Connecticut woman who lost nearly $1 million maintained a months-long relationship with someone who appeared on video calls, in real time, using a deepfake face. The 'gold standard' test for verifying an online match, the custom photo request ('send me a selfie holding today's newspaper'), is dead. In 2026, AI can generate that photo in seconds. Norton blocked more than 17 million dating scam attacks in just the last three months of 2025, an increase of 19% over the prior year. Despite this scale, 55% of romance scam victims never report the crime. Shame is the primary reason. Only 22% contact the FBI. The real loss numbers are almost certainly larger than any estimate.
How AI Romance Scams Work: The Exact Playbook
- Identity construction: Scammers use AI image generators to create realistic profile photos with consistent faces across multiple angles and settings — something photo-theft catfishing could never do. The face is too perfect, too consistent, and traceable to no real person, because no real person exists. Criminal organizations run these as businesses, with staff dedicated to different phases of the relationship cycle.
- Conversation at scale: An operation that previously required one human managing five relationships can now run hundreds simultaneously using LLM-assisted chat scripts that track each victim's details, adjust tone to match emotional cues, and remember everything said across months of conversation. The model learns from the victim's responses and gets better at sounding like their ideal partner.
- The money request pivot: After weeks or months of genuine emotional investment, the financial ask is introduced gradually — first small amounts with compelling explanations (medical emergency, customs fee on a package, investment opportunity only available for 72 hours), then progressively larger. By the time victims recognize the pattern, the emotional connection has overridden rational judgment. That is confirmation bias, which former FBI undercover operative Eric O'Neill explains this way: 'When we see something we truly want to be true, we will confirm it is true for ourselves.'
- Voice and video escalation: Victims who push for phone calls or video get exactly that. Voice cloning tools can create a convincing voice from as little as three seconds of public social media audio. Real-time deepfake video tools generate live video calls with a fabricated face. The Connecticut victim spoke with 'him' on video calls for months. The face was AI. The voice was AI. The love felt real.
- Recovery scam follow-up: Grant paid lawyers $3,000 after losing $70,000, expecting help recovering his money. The lawyers were scammers too, operating from the same criminal network. Both experts Grant later consulted warn that fake recovery services are often run by the same organizations behind the original scam. If someone promises to recover your romance scam losses for an upfront fee, that is the next scam.
The Six Red Flags That AI Romance Scams Leave Behind
| Red Flag | What It Looks Like | Why AI Creates It |
|---|---|---|
| Refuses unscripted interaction | Always available for planned video calls but avoids surprise calls or spontaneous video. Calls end when unexpected questions arise. | Real-time deepfake video is computationally intensive and can fail under unexpected conditions. Scripted conversations are harder to maintain without preparation. |
| Moves to financial topics unnaturally fast | Within weeks: investment tips, mentions of money problems, references to a windfall you could share in. The relationship's emotional timeline doesn't match the financial ask's timeline. | The entire operation exists to extract money. LLM scripts are designed to introduce financial pathways as early as the conversation allows without triggering suspicion. |
| Too available, too understanding, too perfect | Responds at 2am. Never gets irritated. Always says the right thing. Remembers every detail. Has no bad days. | AI can be infinitely patient and available. Real people have bad days, contradictions, and emotional variation. Mechanical consistency across months is a tell. |
| Shares no casual, imperfect photos | Every photo is posed, well-lit, flattering. No candid shots. No bad angles. No photos with other people. | AI image generation struggles with consistent group photos and improvised contexts. Realistic AI images tend to be individual, posed, and technically perfect. |
| Emergency payment in unusual forms | Requests wire transfer, cryptocurrency, or gift cards — never bank transfer. Urgency: 'I need it today or I'll lose the opportunity.' | Irreversible payment methods prevent recovery. Urgency bypasses the rational evaluation that would end the scam. |
| Story details drift over time | Their job changes slightly across months. Family members appear and disappear from the narrative. Location details shift. | LLM scripts manage hundreds of concurrent conversations. Context tracking across months-long interactions at this scale produces inconsistencies that a single human could avoid. |
The Test That Catches Every AI Romance Scam
Veteran matchmaker Amber Lee offers the most practical test available: insist on a spontaneous video call. Not a scheduled one — a spontaneous one, with no preparation time. Text the person and say 'Can you video call me right now?' and see what happens. Real people can do this. Real-time deepfake video, while advancing rapidly, still struggles with unscripted interaction on zero notice. A request for 'just five more minutes' before every video call, combined with calls that end suspiciously quickly when unplanned questions arise, is one of the clearest tells available in 2026.
The FTC and cybersecurity firms now universally recommend a second layer of protection: a family safe word. Agree on a phrase that only your real family knows — something nonsensical, like 'Purple Cactus' or 'Midnight Protocol' — that would never appear in any of your public social media posts, voicemails, or videos. If someone calls claiming to be a family member in distress, ask for the safe word before anything else. An AI clone trained on your family member's voice and public posts cannot guess a password it was never given.
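The safe-word protocol works because it relies on a shared secret an impersonator cannot derive from public data. A minimal sketch of the idea in Python (the wordlist and function names here are illustrative, not part of any official guidance):

```python
import secrets

# Illustrative wordlist only. In practice, pick words that have never
# appeared in your posts, voicemails, or videos, and share the final
# phrase only in person or over a channel you already trust.
WORDS = [
    "cactus", "midnight", "protocol", "walrus", "ember",
    "quartz", "lagoon", "thistle", "orbit", "maple",
]

def make_safe_phrase(n_words: int = 2) -> str:
    """Pick n_words uniformly at random using the cryptographically
    strong `secrets` module, so the phrase is not guessable."""
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))

def verify(claimed: str, expected: str) -> bool:
    """Case- and whitespace-insensitive check of the spoken phrase
    against the phrase the family agreed on."""
    return claimed.strip().lower() == expected.strip().lower()

family_phrase = make_safe_phrase()  # e.g. a 'purple cactus'-style phrase
```

The point of random selection is that even an AI clone trained on every public word a family member has ever spoken has no signal about the phrase: with a larger wordlist and more words, the guessing odds shrink rapidly.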
If You Think You've Been Targeted: The Exact Steps
- Stop all contact immediately — do not send more money, do not explain why you're stopping, do not give them the opportunity to re-engage your emotions. Every additional conversation is another opportunity for the operation to deepen its hold.
- Contact your bank immediately — if you sent money in the last 24-48 hours via wire or bank transfer, there is sometimes a window to recall it. Call your bank's fraud line first, before filing any other reports.
- Report to the FBI's Internet Crime Complaint Center at ic3.gov — this is the primary federal database for these cases, and reports build the investigative record that occasionally leads to prosecution. You will not be judged.
- Report to the FTC at ReportFraud.ftc.gov — FTC reports feed enforcement actions and public alerts. Your report can prevent the next victim from losing their savings.
- Do not pay recovery services — any service promising to recover your romance scam losses for an upfront fee is almost certainly a follow-on scam from the same criminal network. This is confirmed by every fraud expert who tracks these cases.
- Seek support before isolating — the shame that keeps 55% of victims silent is exactly what scammers depend on. AARP's Fraud Watch Network Helpline (877-908-3360) is specifically designed to support romance scam victims without judgment.
Grant ended his account by saying he agreed to speak publicly, with his identity protected, because he knows what it feels like to want a connection so badly that you miss the warning signs. That is not a failure of intelligence. Former FBI undercover operative Eric O'Neill — who has tracked romance scammers professionally — is explicit: these operations exploit a fundamental human need that has nothing to do with how smart you are. The desire for connection is not a vulnerability. The people who exploit it are.
Report romance scam losses at ic3.gov and ReportFraud.ftc.gov. If you want to understand the AI technology being used against you, and test how different AI models respond to the same prompts, LumiChats gives you access to Claude Sonnet 4.6 and GPT-5.4 side by side, so you can see exactly what these systems can and can't do.