
Real or AI?

Blown Away Guy – Maxell ad, via Wikipedia.org

“Is it real? Or is it Memorex?”


For those under fifty, the answer is probably a blank stare.


For the rest, I apologize for misremembering the slogan. Imagine my surprise when I found that it was actually “Is it live, or is it Memorex?” Do others remember it that way? Is this an example of the Mandela effect? Nah – I just remembered it wrong.


I need to apologize for the image as well, but this one was deliberate. I only vaguely remember the famous Memorex ad with Ella Fitzgerald, in which she breaks a glass with her voice, and then they break another glass with a recording of her voice.


But I certainly remember the “blown away guy” ad for Maxell – both the print ad above, and the famous TV commercial, in which “even after five hundred plays” the recording of The Ride of the Valkyries generates a literal wind, blowing the man’s hair, tie, and drink backwards. (Amusingly enough, the version of the commercial I found has lousy sound quality – a lot of “hiss”.)


We don’t usually need to worry about the quality of storage media any more – digital is digital. I won’t bother to address those who believe that vinyl “sounds better”, or those who believe that gold-plated cables generate noticeably superior sound quality in digital equipment. (They don’t.)


Nowadays, we need to worry about AI-generated content.


As always, I think it’s important to clarify terms. I’m not talking about tools which use AI in some way, such as a mixer which uses AI to improve the way it works, or tools which generate subtitles automatically.


I’m talking about content which is partly or wholly generated by AI. I mean songs created using a generative AI model, based on a text prompt of some sort. I mean text-to-image or text-to-video systems.


The key question is around honesty.


Setting aside the plethora of legal and ethical questions around how models are trained, and appropriate compensation for artists, and consent for using their work in such training, I think it’s vitally important to be open and honest around the degree to which AI is used in the generation of content.


In my opinion, if content is clearly and unambiguously identified as AI-generated, is not presented as something it is not, and doesn’t raise legal or ethical concerns, I am not particularly bothered by it (though, as noted above, the training process for many models raises serious ethical and legal questions...)


Unfortunately, we are seeing more and more content which is seriously problematic.


How often are we exposed to content which is AI-generated, but not identified as such? The simple answer is “we have no idea, but it’s a lot”. How would we even know? A few years ago, it was comparatively easy to identify AI-generated content, but things are changing so quickly that it’s already much harder, and will soon be practically impossible without special tools – i.e., AI systems built to identify AI-generated content.


How often do we see deepfakes of living people, generated without their consent? How much of it is disinformation? Sometimes the intent is mockery, satire, or comedy, but such content is still often unethical, and sometimes illegal – though legislation around AI ranges from non-existent, to inadequate, to obsolete.


The same is true of the advice we receive. Telling someone to carefully examine the lighting, or the way the fingers appear, might have been useful a few years ago, and may (possibly) help (in some cases) now, but next week? Next year? All bets are off.


Anyone who thinks that they will be able to identify AI-generated content with high confidence, simply by looking at it, is deluding themselves. Also, many people consume content quickly, on a small screen, and don’t really care if it’s AI, so long as it is consistent with their views.


It would be wonderful to be able to say that there’s an easy solution to this problem, but the simple fact is that we’re now in a world where we cannot trust high-quality images or video, without corroboration.


There is hope, but it takes work.


First, critical thinking. Be aware of cognitive biases, particularly the ubiquitous confirmation bias, where we accept information that is consistent with our beliefs far more readily than that which contradicts them.


Second, evaluate your sources. It’s easy to tell people to check everything they see, cross-check it, try a reverse-image search, and so on, but there is simply no way to do this for everything, all the time. So, we need to learn to evaluate sources as well as content. Is Fox News reliable? And, if so, to what degree? What about the CBC? The BBC? Learn about Bayesian reasoning, and develop your evaluations over time.
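To make that concrete, here is a minimal sketch of Bayesian updating applied to source evaluation. All the numbers are illustrative assumptions I’ve invented for the example, not measured reliabilities of any real outlet – the point is the mechanism: a report from a reasonably careful source shifts your belief somewhat, and independent corroboration shifts it much further.

```python
def bayesian_update(prior, p_report_if_true, p_report_if_false):
    """Posterior probability a claim is true, given that a source reported it.

    prior             -- belief the claim is true before seeing the report
    p_report_if_true  -- chance this source would carry the claim if it IS true
    p_report_if_false -- chance it would carry the claim if it is NOT true
    (All values here are illustrative assumptions, not real data.)
    """
    numerator = prior * p_report_if_true
    denominator = numerator + (1 - prior) * p_report_if_false
    return numerator / denominator

# A surprising claim: prior belief only 1%.
# A fairly careful outlet (90% likely to report it if true, 5% if false)
# carries the story:
posterior = bayesian_update(0.01, 0.90, 0.05)   # roughly 0.15

# A second, independent outlet with similar reliability also carries it;
# the first posterior becomes the new prior:
posterior2 = bayesian_update(posterior, 0.90, 0.05)   # roughly 0.77
```

Note how one report from a decent source only lifts a surprising claim from 1% to about 15% – still probably false – while independent corroboration pushes it above 75%. That is the quantitative version of “don’t trust a single source”.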


Third, do not “trust” anything posted to social media, without corroboration. As a recent example, I saw a post which said that Israeli PM Benjamin Netanyahu had been killed. Instead of simply accepting it, I immediately went looking for confirmation... and found a bunch of obscure, low-quality, or “obviously” fake news sites with the same headline. But nothing from any remotely credible international sources... Does anyone seriously think that PM Netanyahu’s death wouldn’t be front-page news on pretty much every outlet in the world?


Fourth, don’t assume that something which “looks real” is actually real. That advice is long dead, and is an example of the so-called toupée fallacy: because people only ever notice the bad toupées, they conclude that they can spot anyone wearing one – never realizing how many good toupées pass undetected.


Fifth, while tools can help, don’t assume they are reliable. AI-generation and AI-identification represent a constant cat-and-mouse, where new tools will be built to identify AI, and new AI will be built to evade those tools.


This is the world we now live in. Even as we enjoy the benefits, we need to be aware of the risks, and figure out ways to mitigate them.


Recognize that reality is still true, and that video evidence can still be effective. Even if the level of confidence we can have in a single video is lower, there is value in multiple videos of the same event, from different sources, on different cameras, at different angles. Faking at that scale seems, at least for now, beyond the practical ability of AI. Consider the recent murders of Renée Good and Alex Pretti, and how different videos were used to capture the events from multiple angles – faked footage was quickly identified and discounted, as were numerous false accounts of the events.


So, there is hope. We just need to be vigilant.


Cheers!


© 2026 by RG


TIL Technology by RG is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise specified. 

Please feel free to share, but provide attribution.
