Fakes, Filters, and Fire Damage: Detecting AI-Generated Fraud in Property Claims

Originally presented at the PLRB Annual Conference, 2026

It takes three seconds to generate a photorealistic image of a hail-damaged roof. The tools are free. No camera required. And it's already showing up in your claims queue.

At the 2026 PLRB Annual Conference, I presented a session on one of the fastest-moving threats in property insurance: AI-generated fraud in claims documentation. What follows is the substance of that presentation — a field guide for claims professionals, SIU teams, and carrier leadership who need to understand what's happening, how to catch it, and why human judgment still matters more than ever.

The Threat Is Current, Not Theoretical

The numbers frame the urgency. A photorealistic damage image can be generated in roughly three seconds using free, publicly available tools. The cost to produce unlimited fraudulent claim photos is effectively zero. And insurance fraud already accounts for an estimated ten percent of property and casualty losses — approximately $45 billion annually in the United States.

The inflection point came in 2023, when AI-generated image quality crossed what researchers call the "human detection threshold" — the point at which untrained observers can no longer reliably distinguish synthetic images from real photographs. Since then, the tools have only improved. Today's diffusion models (the technology behind tools like DALL-E, Midjourney, and Stable Diffusion) can produce damage images that are startlingly convincing at first glance.

For claims professionals, the implication is straightforward: if your workflow assumes that submitted photos are authentic because they look authentic, you have a gap.

How Generative AI Creates Fake Damage Photos

Understanding the technology helps explain its weaknesses — and those weaknesses are what make detection possible.

There are three main architectures behind AI-generated images. Generative Adversarial Networks (GANs) pit two neural networks against each other: one generates fake images, the other tries to detect them. Over millions of iterations, the generator becomes extremely good at producing realistic output. Diffusion models work differently — they add noise to real images, then learn to reverse the process, effectively "dreaming" new images from random noise. Hybrid and LLM-guided models combine text prompts with image generation, allowing someone to type a description like "hail-damaged roof in Texas" and receive a photorealistic result in seconds.

All three approaches share a critical limitation: they learn patterns, not physics. They know what damage looks like, but they don't understand how damage works. That gap between appearance and reality is where detection lives.

The 5 Tells Every Adjuster Should Know

Through our work with carriers and claims teams, we've identified five consistent signals that distinguish AI-generated damage photos from real ones. These aren't edge cases — they're systematic, learnable, and can be applied during a standard desk review.

1. Text Corruption

AI models struggle to render text accurately. In damage photos, this shows up on appliance labels, cabinet markings, container text, and any signage in the frame. The text will often be scrambled, duplicated, or missing entirely. In one widely circulated training case, a submitted kitchen photo showed cabinet labels reading "KIITCHENN", a nonsensical string no real manufacturer would print. A real photo of a real label may be blurry or partly obscured, but it will not be gibberish. If you zoom in on a label and the text is garbled, that's a red flag worth investigating.

2. Shadow Inconsistency

Light behaves predictably. In any real photograph, shadows fall in a consistent direction because they originate from the same source. AI-generated images frequently violate this rule, producing shadows that fall in contradictory directions within the same frame. The field test is simple: pick two objects in the photo and check whether their shadows agree on where the light is coming from. If they disagree, the image warrants further scrutiny.

3. Texture Tiling

When AI models need to fill large surfaces — floors, walls, roofs — they tend to repeat texture patches rather than generating natural variation. This produces identical floor tiles, cloned wood grain, or repeated shingle patterns that are too uniform to be real. Authentic building materials always have slight imperfections, color variation, and natural irregularity. Perfect repetition is AI's fingerprint.

4. Edge Warping

The places where objects meet are among the hardest elements for AI to render correctly. At boundaries — where a cabinet meets a wall, where a window frame meets glass, where debris sits on a surface — AI often produces impossible blends. Cabinets that melt into walls. Window frames that curve where they should be straight. Debris that appears to float rather than rest on a surface. Real damage follows physics. AI damage follows aesthetics.

5. Missing Metadata

Every photograph taken straight off a phone camera carries EXIF data embedded in the file: the camera model, capture timestamp, and usually GPS coordinates. AI-generated images typically contain no metadata at all, or metadata that doesn't match the claimed location or time. This is one of the most reliable tells because it requires no visual judgment; it's a binary check. One caveat: messaging apps and social platforms routinely strip EXIF data in transit, so missing metadata is grounds for investigation rather than proof of fraud on its own. But if a submitted photo has no EXIF data, or if its GPS coordinates place the camera in a different city than the claimed loss, that discrepancy demands investigation.

Case Study: The Burned Kitchen

This case has become a training example for fraud investigators at multiple major carriers. A claimant filed a $47,000 kitchen fire claim supported by twelve photos. During desk review — not through forensic analysis, not through special software — an alert reviewer flagged all twelve images as AI-generated.

The tells were textbook. Cabinet labels showed garbled text. Smoke patterns curved in ways that violated basic physics. Every appliance in the photos was missing brand logos. Floor tiles repeated in an identical pattern across images. And none of the twelve photos contained any EXIF metadata.

The lesson isn't that the technology is unbeatable. The lesson is that the tells are consistent. A reviewer who knows what to look for can catch what a casual observer would miss. The question for every claims organization is whether their desk reviewers have been trained to look.

Why Human Judgment Still Wins

Throughout the PLRB session, one theme resonated above everything else: AI detection tools are necessary, but they are not sufficient.

An AI can analyze a photo. A field adjuster can tell you whether the fire smell is fresh or staged. That gap is enormous, and it isn't closing anytime soon.

There are capabilities that remain exclusively human in the claims process. Sensory data — the smell of aged smoke versus fresh accelerant, moisture in the air, chemical odors that don't belong in the environment. Homeowner body language — hesitation when asked about the timeline, inconsistencies between the verbal account and the physical evidence. Neighborhood context — whether the claimed damage pattern matches the weather event, whether adjacent properties show similar loss. Material authenticity — tapping a wall to verify that stud damage matches the drywall condition, pulling back carpet to check whether the subfloor is actually wet. Policy interpretation — reading coverage nuances against observed damage in real time, making judgment calls that require both legal literacy and field instinct simultaneously.

And then there's the dimension that doesn't appear in any workflow diagram: trust and communication. Helping a distressed homeowner feel heard. Explaining what's covered and what isn't in terms that are human, not automated. This is the work that builds the carrier's reputation one claim at a time.

The Winning Formula: Human + Machine

The most effective claims workflow we've seen emerging across early-adopter carriers follows a five-step pattern.

The process runs in five steps:

1. Claim filed: photos, voice notes, and documents are submitted through existing intake channels.

2. AI screening: image authenticity checks, metadata validation, and pattern detection run automatically against every submitted photo.

3. Flag and route: suspicious files are flagged for human review while clean files proceed through the standard workflow automatically.

4. Site inspection: a field adjuster inspects in person, armed with a full AI analysis report highlighting which images were flagged and why.

5. Adjudication: policy review, coverage determination, and payout or escalation, with the full context of both automated analysis and field observation.
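The flag-and-route step can be sketched in a few lines. The field names, score convention, and threshold below are assumptions for illustration, not any carrier's actual schema:

```python
from dataclasses import dataclass

@dataclass
class PhotoScreenResult:
    filename: str
    has_metadata: bool          # EXIF present and consistent with the claim
    authenticity_score: float   # model output: 0.0 synthetic .. 1.0 authentic

def route_claim(results, threshold=0.7):
    """Send the claim to human review if any photo is suspicious;
    otherwise let it proceed through the standard workflow."""
    flagged = [r.filename for r in results
               if not r.has_metadata or r.authenticity_score < threshold]
    return ("HUMAN_REVIEW", flagged) if flagged else ("STANDARD_WORKFLOW", [])
```

The key design choice is that one suspicious photo routes the whole claim to a human; the automation narrows the queue, it never makes the fraud call.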

This isn't a future-state vision. Every component of this workflow exists today. The firms deploying it are processing legitimate claims faster, catching more fraud earlier, and giving their adjusters better tools — not replacing them.

What's Coming Next

The trajectory over the next few years is clear, and the pace is accelerating.

Right now, AI-generated image screening exists and works. Early adopters are already deploying it. The only question is whether your firm is among them.

By late 2026, voice and document AI will transform desk adjusting. Voice-recorded claim notes will be transcribed, structured, and compared against policy language automatically, enabling desk adjusters to multiply their throughput significantly.

By 2027, predictive fraud scoring will assess every claim at intake, combining hundreds of signals simultaneously (location patterns, claimant history, weather data, contractor relationships, image authenticity analysis) into a single risk score.

By 2028 and beyond, straightforward claims that pass every verification checkpoint — verified images, matching weather event, consistent documentation, clean claimant history — will be approved and paid without human review. Complex and flagged claims will receive more human attention, not less, because the routine volume will have been automated away.

Building AI Literacy on Your Team

AI literacy for claims professionals isn't about learning to code. It's about knowing enough to ask the right questions and recognize when something doesn't add up.

At the foundational level, every adjuster should know that AI-generated damage photos exist, understand the five visual tells, and routinely request metadata verification on suspicious photos. At the practice level, teams should be using detection tools regularly, building internal libraries of red-flag patterns, and incorporating AI verification into every desk review workflow. At the mastery level, organizations should be running quarterly training on emerging generation techniques, feeding pattern findings back to their SIU teams, and designing firm-level detection workflows that improve with every claim processed.

What This Means for Your Organization

AI-generated fraud in property claims is not a future problem. It is a current operational risk that is growing in sophistication and scale. The tools to generate fraudulent documentation are free and accessible. The tools to detect them exist and are improving rapidly. And the claims professionals who learn to work alongside detection technology — combining machine precision with human judgment — are becoming the most valuable people in the building.

If your firm is evaluating AI detection capabilities, designing fraud workflow automation, or building claims technology strategy, we should talk. This is what we do at Populus Technology.

Serges Himbaza is the Managing Partner of Populus Technology, a Dallas-based enterprise AI and automation consultancy serving healthcare, insurance, construction, education, and government clients. He holds a degree in Econometrics and Quantitative Economics from Duke University, a Microsoft Dynamics 365 certification, and brings eight years of enterprise technology delivery experience across insurance, healthcare, financial services, and government sectors.

The full PLRB 2026 presentation is available on request.
