Bytes of Knowledge

Insights and expertise from real-world digital forensic professionals, sharing lessons learned from actual cases to inform, educate, and inspire.

Most of us have fallen into the rabbit hole of short videos on our phones. It usually starts with a quick clip from some documentary, then another one pops up, then another, and before long twenty minutes (...or two hours) have disappeared. Every now and then, one of those clips pulls you into a moment that shaped the entire country. A shaky recording. A grainy camera. A video that changed the world before most of us were born. It makes you think. We have the most advanced technology humanity has ever created sitting in our pockets, yet some of the most important evidence in American history was captured on cameras that would struggle to compete with a child’s toy today.
The world of digital evidence has changed faster than most courts or attorneys realize. Artificial intelligence is rewriting the rules of what’s real, and the legal system is struggling to catch up. At Mako Forensics, we live in that gray zone where digital truth and synthetic fabrication collide. My background in law enforcement and forensic examination has taught me one thing above all: the truth doesn’t fear inspection. But in the age of generative AI, “truth” now demands deeper inspection than ever before.
When “Is it real?” really matters: How to authenticate media in the age of AI

In my years in law enforcement and digital forensics, I’ve learned a simple truth: evidence only matters if you can trust it. Whether it’s a phone video capturing a crash, a vehicle’s dashcam recording showing distracted driving, or a “surveillance” image someone has sent you, the integrity of the media can make or break a case. Today we face a new challenge: content that isn’t just manipulated, but wholly generated by artificial intelligence. So the question becomes: how do you know the media is real? How do you know it hasn’t been AI-generated or altered in a way that undermines its value? Plainly put... is it freakin' real or not?
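To make that a little more concrete, here is a minimal Python sketch (using the Pillow library, with a hypothetical file name) of two first-pass checks an examiner might run before anything else: hashing the file to lock in its integrity for later re-verification, and dumping its EXIF metadata to look for provenance clues. Neither proves authenticity on its own; AI-generated files can carry plausible-looking metadata and edits can preserve it, so treat these as a starting point, not a verdict, and not as our full workflow.

```python
import hashlib
from PIL import Image, ExifTags  # Pillow; pip install Pillow

def sha256_of(path: str) -> str:
    """Hash the file so its integrity can be documented and re-verified later."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large video stills don't load into memory at once.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def dump_exif(path: str) -> dict:
    """Pull EXIF metadata: camera make/model, timestamps, software tags.
    A missing block, or a 'Software' tag naming an editor, is a lead, not a verdict."""
    exif = Image.open(path).getexif()
    return {ExifTags.TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

if __name__ == "__main__":
    evidence = "crash_scene_still.jpg"  # hypothetical file name
    print("SHA-256:", sha256_of(evidence))
    for tag, value in dump_exif(evidence).items():
        print(f"{tag}: {value}")
```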
The world we live in runs on images and video. Whether it’s news breaking in real time, courtroom exhibits, or even what pops up on your FYP, people are conditioned to believe what they see. But the truth is, not everything you see is authentic. This has always been the case to some degree, but the rise of AI-generated media has taken things to an entirely different level.
Remember the days when a photo really was worth a thousand words? When a blurry surveillance still could make or break a case? When hearing someone’s voice on a recording was enough to convince a jury? Well, I hate to say it, but those days are gone. We now live in a time where reality can be rendered and fiction can be faked with jaw-dropping precision. The rise of AI-generated media (deepfakes, voice clones, synthetic images, and video) has forced those of us in the digital forensic trenches to ask a hard question: can we still trust what we see and hear?
Every distracted driving case comes with a critical question: was the driver using their phone at the time of the crash? The problem? Many attorneys rely on surface-level records or self-reported statements, and when they do, they can miss crucial digital artifacts that could decide the case. Here are three things I see attorneys overlook far too often: