Grainy Truth vs Polished Lies: The Hunt for Charlie Kirk’s Killer
To begin, this post has nothing to do with politics or personal beliefs. It has everything to do with truth.
The world we live in runs on images and video. Whether it’s news breaking in real time, courtroom exhibits, or even what pops up on your FYP, people are conditioned to believe what they see. But the truth is, not everything you see is authentic. This has always been the case to some degree, but the rise of AI-generated media has taken things to an entirely different level.
Recently, political commentator Charlie Kirk was killed in a shooting. Surveillance footage quickly surfaced. It wasn’t cinematic-quality video, but it was usable. Grainy, yes. But consistent with what you’d expect from a fixed surveillance system.
Before long, though, other versions of the suspect (now identified as Tyler Robinson, age 22) started appearing on Facebook Reels and YouTube Shorts. These “surveillance clips” went viral because most of America was following this story, and a national manhunt was underway. But a closer look showed they were not authentic at all. Instead, they were AI-generated “enhancements.” On the surface, they looked sharper and more appealing to the average eye. But to me, as someone who has spent years studying digital forensics, they immediately raised red flags.
I want to unpack why those AI images looked convincing, why they weren’t authentic, and why the process of authenticating media is no longer optional; it’s necessary. The stakes are too high to treat images at “face” value (see what I did there?).
Why AI Media Slips Past the Average Eye
AI tools today can produce images and video that look extremely realistic. Faces are smoothed, lighting is balanced, and the entire frame looks like something pulled from a professional production. For most people, this feels more “real” than the noisy, low-resolution frames that come from a security camera. Human brains naturally lean toward clarity. We equate sharpness with truth. That’s the trap.
The surveillance stills showed Robinson in a hat and shirt consistent with the grainy photos released by the FBI. But the graphics on those items in the authentic footage were smudged and unclear, which makes sense. Cameras don’t always capture fine detail at distance. The AI-generated versions, though, made bold choices. Clear logos, sharp designs, and defined features that simply weren’t there in reality. The creators didn’t just clean up the footage. They hallucinated details that never existed. That’s where the danger lies. The suspect in custody doesn’t look like the polished AI images, because those weren’t authentic to begin with.
The Devil is in the Details
The difference between truth and fabrication often lives in the margins. Think about that hat and shirt again. In the authentic images, the graphics are fuzzy. Maybe you can guess what they are, maybe not (to me, the shirt looked to be an American flag with an eagle). In the AI versions, you’re not guessing anymore. The designs are clean and obvious. Except… they’re wrong. That small change can alter public perception in a huge way. Suddenly, people online believe Robinson wore a shirt that might carry a political symbol. Or a hat connected to a certain group. Those assumptions spread fast. Now misinformation is shaping opinion before the facts are verified.
Authenticating media means catching those kinds of discrepancies. It’s about asking: does this image reflect the true conditions of the source? Does it align with what we know about how that camera captures detail? Or is it too sharp, too clear, too good to be true? When I looked at those AI images, it was immediately obvious that they strayed from what a real surveillance camera could produce. But for someone scrolling quickly through a feed, that distinction might not register.
The Speed of Misinformation
One of the biggest problems with AI media is the speed at which it spreads. Within hours of the surveillance footage being released, AI-generated “versions” of Robinson had already gone viral. Social media rewards content that grabs attention. High-quality, clear, “better” images spread more easily than grainy surveillance stills. That’s why people shared them without thinking. The irony is that the more authentic something looks, the less authentic it might actually be.
For investigators, attorneys, journalists, and even the public, this presents a challenge. Once a fake image takes hold, it’s difficult to put the genie back in the bottle. Even if you later prove it’s false, the initial impact lingers. People remember the first image they saw, not the correction that followed.
Why Authentication is Essential in Forensics
As a digital forensic examiner, my job is to separate fact from fiction. Authenticating media means running evidence through a process that examines not just what the image shows, but how it came to be. That includes:
Metadata analysis: Looking at the hidden information embedded in files to determine origin, device, and whether edits occurred.
Error level analysis: Identifying inconsistencies in compression that can reveal tampering.
Contextual analysis: Comparing what the image shows with what the source device is capable of capturing.
Comparative analysis: Cross-checking images against known authentic sources, such as surveillance frames or verified photographs.
In this case, the authentic surveillance stills of Robinson are the baseline. Anything that deviates in clarity, design, or features beyond what those cameras could produce should be treated as suspect.
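To make a couple of those steps more concrete, here is a minimal sketch of what metadata inspection and error level analysis can look like, assuming Python with the Pillow imaging library; the filename and the re-save quality are hypothetical. Real casework relies on validated forensic tools and documented procedures, so treat this as an illustration rather than a workflow.

```python
# Minimal sketch of two of the checks described above: (1) metadata inspection
# and (2) error level analysis. Assumes Python with the Pillow library
# installed; the filename is hypothetical.
import io
from PIL import Image, ImageChops
from PIL.ExifTags import TAGS


def inspect_metadata(path):
    """Print whatever EXIF metadata the file carries (device, software, timestamps).
    AI-generated or re-encoded images often arrive with little or none."""
    exif = Image.open(path).getexif()
    if len(exif) == 0:
        print("No EXIF metadata found; provenance cannot be confirmed from the file alone.")
        return
    for tag_id, value in exif.items():
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")


def error_level_analysis(path, quality=90):
    """Re-save the image as JPEG at a known quality and diff it against the original.
    Regions that compress very differently from their surroundings can be a sign of
    editing; this is a flag for closer review, not proof by itself."""
    original = Image.open(path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)
    diff = ImageChops.difference(original, resaved)
    # Amplify the (usually faint) differences so an examiner can see them.
    max_diff = max(band_max for _, band_max in diff.getextrema()) or 1
    return diff.point(lambda px: min(255, px * (255 // max_diff)))


if __name__ == "__main__":
    inspect_metadata("surveillance_still.jpg")              # hypothetical file
    error_level_analysis("surveillance_still.jpg").show()   # visual review only
```

Neither check proves anything on its own. Missing metadata or an odd compression pattern is a reason to look closer, and it still takes a trained examiner to interpret those flags against the source camera and the image’s history.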
How Misinformation Impacts Justice
Imagine this situation in a courtroom. If an AI-generated image of Robinson were introduced, it could influence a jury, sway an argument, or cast doubt on legitimate evidence. Even outside of trial, the court of public opinion is powerful. If people believe an AI image is the real thing, it can pressure investigators, taint potential jurors, and create chaos in the process of seeking justice.
This is why attorneys, insurers, and anyone dealing with litigation must recognize the risk. Authenticating media isn’t just technical housekeeping. It’s about protecting the integrity of a case. A single fake image can undo months of solid investigative work.
The Role of Experts
Technology is only part of the equation. The other part is expertise. Tools can flag anomalies, but a trained examiner interprets what those anomalies mean. It takes experience to look at an AI-generated image and say with confidence: this detail doesn’t align with how authentic evidence behaves. That level of interpretation is what ensures all parties (yes, even the accused) get the truth, not a polished version of it.
Lessons From This Case
The Charlie Kirk case highlights a few truths about our digital environment:
AI media is convincing: To the untrained eye, it looks better than reality.
Authenticity isn’t always attractive: Grainy, imperfect surveillance is often the truest record.
Details tell the story: A hat logo or shirt graphic can make the difference between truth and fabrication.
Misinformation spreads faster than corrections: Once people see a fake, it’s hard to walk it back.
Authentication is non-negotiable: Without it, we’re at the mercy of fabricated narratives.
Bringing It Back to Everyday Cases
You might think this only applies to high-profile political incidents. It doesn’t. In distracted driving cases, workplace disputes, or insurance fraud investigations, the same principles apply. If a single AI-generated image or video clip makes its way into the evidence pool, it can steer the case in the wrong direction. Attorneys must be vigilant. Judges and juries need to be aware. And forensic examiners have to provide clarity.
A Call to Action
The solution isn’t to distrust everything. The solution is to treat digital media with the same scrutiny we apply to physical evidence. Chain of custody, provenance, and authentication are the cornerstones. Just because something looks real doesn’t mean it is. And just because something looks blurry doesn’t mean it’s false. The truth often hides in the imperfections.
That’s why authenticating media isn’t just technical work. It’s about upholding justice, ensuring accurate reporting, and protecting the public from misinformation. It’s about recognizing that in a world full of polished fakes, the grainy truth is more valuable than ever.
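As one small, concrete illustration of what provenance and chain of custody look like day to day, here is a minimal sketch of logging a file’s hash at the moment it is acquired. It assumes nothing beyond Python’s standard library, and the filename is hypothetical; real chain-of-custody practice involves far more than a hash.

```python
# Minimal sketch of one chain-of-custody habit: recording a cryptographic hash
# of a media file as received. Uses only Python's standard library; the
# filename is hypothetical.
import hashlib
from datetime import datetime, timezone


def record_acquisition_hash(path):
    """Compute a SHA-256 hash of the file as received. Recomputing and comparing
    the hash later shows whether the file has changed since acquisition."""
    sha256 = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            sha256.update(chunk)
    digest = sha256.hexdigest()
    print(f"{datetime.now(timezone.utc).isoformat()}  SHA-256  {digest}  {path}")
    return digest


record_acquisition_hash("dashcam_clip.mp4")  # hypothetical file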
Closing Out
Tyler Robinson has now been apprehended. That fact alone dismantles the viral AI images that tried to shape the narrative. But this won’t be the last time it happens. AI tools are getting stronger, faster, and more accessible. What’s at stake is nothing less than trust — in courts, in journalism, in public institutions. Without authentication, that trust erodes.
So when you see an image online, ask yourself: is this too good to be true? If it is, dig deeper. Because the truth is worth the effort.
Eric Kelley is the founder of Mako Forensics and a seasoned digital forensic examiner with over 26 years of law enforcement experience. Specializing in mobile, computer, and vehicle forensics, Eric has conducted thousands of investigations and played a key role in high-profile cases. With a commitment to precision and objectivity, he helps attorneys, investigators, and businesses uncover the truth through expert forensic analysis.