The world of digital evidence has changed faster than most courts, or attorneys, realize. Artificial intelligence is rewriting the rules of what’s real, and the legal system is struggling to catch up. Enter the age of deepfakes and synthetic media.

At Mako Forensics, we live in that gray zone where digital truth and synthetic fabrication collide. My background in law enforcement and forensic examination has taught me one thing above all: the truth doesn’t fear inspection.

But in the age of generative AI, “truth” now demands deeper inspection than ever before.

The Scope of the Problem

It used to take a Hollywood-level budget to convincingly fake a video or audio recording. Not anymore. With AI tools now accessible to anyone with a laptop, synthetic media, often referred to as deepfakes (videos, audio, and images of people saying or doing things they never did), can be produced in just minutes.

Courts and attorneys are finding that these tools are evolving faster than the detection technologies meant to expose them. That gap creates a massive challenge for evidentiary reliability. Today’s question isn’t just “Has this file been altered?” It’s “Was this file ever real to begin with?”

One of my mentors (although he doesn’t know it yet) is Hany Farid, a long-time expert in digital media forensics. He talks about synthetic media here if you’d like to learn more from the experts.

Two Categories of AI-Generated Evidence

In my forensic experience, AI-related evidence generally falls into two categories. 

  1. Acknowledged AI-generated content: Cases where the use of AI is openly disclosed and documented, such as reconstructions or demonstrative visualizations prepared by experts.
  2. Unacknowledged AI-generated content: Files presented as authentic but secretly created or altered by AI, such as deepfakes, synthetic voice recordings, or fabricated images.

It’s that second category that keeps forensic professionals up at night. Because once the integrity of a single frame or syllable is in doubt, the credibility of the entire case can unravel. 

Why Traditional Authentication Rules Are Struggling

For decades, the gold standard for authenticity was straightforward: under Federal Rule of Evidence 901, a party only had to show “evidence sufficient to support a finding that the item is what the proponent claims it is.”

That worked when we were dealing with analog photos and videotapes. But when an AI model can generate a “real-looking” recording from a few sample images or voice clips, that low threshold falls apart.

And here’s where the danger multiplies. Even genuine evidence is now being dismissed under the “liar’s dividend”: the idea that anyone can claim real footage is fake simply because deepfakes exist. That dynamic undermines the foundation of justice itself: confidence in what’s presented as truth.

How Courts and Attorneys Are Responding To Deepfakes

We’re starting to see movement in the legal community.

  • Judicial guidance: The National Center for State Courts (NCSC) and Thomson Reuters Institute have published bench guides to help judges evaluate AI-generated evidence. These guides prompt questions about provenance, chain of custody, and signs of manipulation.
  • Evidentiary reforms: The federal Advisory Committee on Evidence Rules is exploring amendments to address machine-generated evidence, while states like Louisiana are enacting laws requiring attorneys to exercise due diligence before submitting digital media.
  • Exclusion trends: Some courts are now excluding videos or recordings outright when AI involvement isn’t clarified or authenticated through expert analysis.

Why This Matters to Your Clients

For attorneys, insurers, and corporate clients, synthetic media changes how cases must be built. It’s no longer enough to have a video or audio file that “looks” real. You must know where it came from, how it was processed, and who touched it along the way.

That’s where digital forensics plays a critical role. At Mako Forensics, our process includes:

  • Verifying the original source files and metadata
  • Identifying traces of AI processing or compression anomalies (one common screening approach is sketched after this list)
  • Comparing signal inconsistencies to detect deepfake alterations
  • Preserving authentic digital evidence to prevent data degradation
  • Providing expert reports that withstand courtroom scrutiny
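
To make the compression-anomaly step a little more concrete, here is a minimal Python sketch of error level analysis (ELA), one widely discussed screening technique. This is not Mako Forensics’ actual tooling; ELA is a coarse signal with well-known limitations rather than proof of manipulation. The file paths are hypothetical, and the example assumes the Pillow imaging library is installed.

```python
# Error level analysis (ELA): re-save a JPEG at a known quality and compare it
# to the original. Regions that were pasted in, regenerated, or edited often
# re-compress differently from the rest of the frame and stand out in the map.
# This is a screening aid only; a clean ELA map does not prove authenticity.
import io

from PIL import Image, ImageChops, ImageEnhance


def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")

    # Re-compress the image in memory at a fixed JPEG quality.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)

    # Per-pixel absolute difference between the original and the re-saved copy.
    diff = ImageChops.difference(original, resaved)

    # Brighten the difference so faint discrepancies become visible.
    max_diff = max(channel_max for _, channel_max in diff.getextrema()) or 1
    return ImageEnhance.Brightness(diff).enhance(255.0 / max_diff)


if __name__ == "__main__":
    ela_map = error_level_analysis("evidence/frame_001.jpg")  # hypothetical path
    ela_map.save("frame_001_ela.png")
```

In practice a screening map like this is only one input; an examiner weighs it alongside source files, metadata, provenance, and the signal-level comparisons listed above.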

Best Practices in the Age of Synthetic Media

If your firm or company handles digital evidence, consider these steps:

  • Adopt an authenticity checklist. Ask whether the media file could have been altered or created by AI, and document the chain of custody from the moment of capture (a sketch of such a record follows this list).
  • Engage a forensic examiner early. Early involvement prevents spoliation and strengthens admissibility.
  • Educate your team. Many litigators and adjusters still don’t realize how accessible AI manipulation has become.
  • Require disclosure. In contracts and discovery requests, require opposing parties to identify any use of AI tools in producing evidence.
  • Invest in verification. Authenticity isn’t a gut call; it’s a process rooted in science, technology, and experience.
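
As a small illustration of the first item above, here is what a structured chain-of-custody entry might look like in Python. The CustodyEvent fields and file paths are hypothetical, chosen only to show the who, when, and what that an authenticity checklist should capture, including a place to record any disclosed AI processing.

```python
# Hypothetical chain-of-custody record: every handoff captures who touched the
# file, when, what they did, the file's SHA-256 fingerprint, and any disclosed
# AI processing. The field names are illustrative, not a formal standard.
import hashlib
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone


def sha256_of_file(path: str) -> str:
    """Hash the file in chunks so later alterations are detectable."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


@dataclass
class CustodyEvent:
    file_path: str
    handled_by: str
    action: str                       # e.g. "captured", "copied", "examined"
    sha256: str
    ai_tools_disclosed: str = "none"  # any generative-AI use, per disclosure requests
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


if __name__ == "__main__":
    path = "evidence/dashcam_clip.mp4"  # hypothetical file
    event = CustodyEvent(
        file_path=path,
        handled_by="J. Examiner",
        action="captured",
        sha256=sha256_of_file(path),
    )
    print(asdict(event))
```

Because the hash changes if even a single byte changes, comparing digests at each handoff makes silent alteration between steps detectable.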

The Bottom Line

We’ve entered a time when “seeing is believing” has lost its meaning. For the courts, for litigators, and for the forensic community, authenticity is the new battlefield.

At Mako Forensics, our mission is simple: to uncover the truth within the noise. Whether it’s a disputed video in a trucking accident case, a questionable social-media clip in civil litigation, or a manipulated image threatening someone’s reputation, our job is to separate what’s real from what’s not.

In the end, that’s what justice demands.

If you want to read more about the dangers of synthetic media and how it could affect the direction of public opinion, you can read it here.