A lifelong battle just to know what's real
AI, CGI, conspiracies, misinformation, disinformation, Oh My
Exposure to convincing lies is nothing new. It’s been established for a long time that human beings can be tricked: our senses, beliefs, and morals are all up for grabs. As hypnotists and con men know, some are more susceptible than others. My Mom once related the story of her grandparents talking to the characters on the television when they first got one.
Many of us are familiar with the stories of the Mechanical Turk and The War of the Worlds radio broadcast. The 1770s brought us the Mechanical Turk, a fake automaton that played chess (a real chess player was hidden inside). Certainly it had its skeptics, but people were generally wowed by such a machine, believing they were seeing a real chess-playing robot.
In a similar way, the original radio broadcast of The War of the Worlds led some listeners to believe it was real. Although the reported hysteria seems to have been overblown, the fact remains that without being given a wider context (i.e. that they were just listening to a radio serial), people thought an actual invasion was occurring.
Conspiracies, what people believe
We like to think we’re more sophisticated than those people from a different era, but the same things happen today. With the pervasive nature of today’s media it’s even harder to discern what’s real. We can expect that trend to worsen.
The effects of this exposure range from the blissful ignorance of someone whose problem was solved by a helpful “person” that was actually an AI, all the way to someone convinced that violence is the only solution to prevent the perceived war being waged against them and their cultural beliefs. It’s the latter, the “I don’t trust the gubmint, I’ll do my own research” mindset, that pushes otherwise rational people toward the fringes and extreme acts.
It’s not that they fully believe something silly, it’s that they have little to no trust in the person or institution providing the actual facts.
There are multiple things at play here:
brand new technology that is, in the words of Arthur C. Clarke, “indistinguishable from magic”
attempts by the untrained to make sense of incredibly complex and chaotic systems (e.g. the economy, our metabolisms, worldwide climate)
susceptibility to disinformation, lack of education, or perhaps they really did just fall off the turnip truck
lack of trust in the person or institution providing the actual facts
poor communication and incorrect information that engenders a lack of trust
convincing disinformation provided by another person/institution, often to reinforce a specific agenda
actual science that is biased and cherry-picked to support industry and capitalism
Space.com’s article sums the lack of trust up nicely:
So if you find yourself talking to a flat-Earther, skip the evidence and arguments and ask yourself how you can build trust.
— Paul Sutter, Space.com
We see real-world examples of the effects of disinformation and misinformation every day, whether it’s trusting our food and water, the all-too-common tale of the grandparents caught in the propaganda trap of Fox News, a quarter (!) of the UK believing that COVID was a hoax, or Samsung phone cameras inventing pixels in images of the moon. A quick run through Twitter shows that it’s tough to know whether a ‘verified’ user is actually who they say they are. Twitter is already well known for how easily it spreads fake news.
The Future is here
Every single pixel will be generated soon. Not rendered: generated.
— Jensen Huang, NVIDIA CEO
That quote by Jensen Huang (NVIDIA makes graphics cards and is the 6th largest company in the world) gets at the core of what our online experience is becoming. It’s not just the pixels, though. We are not very far off from our online and telephonic interactions being 100% fakeable. Even the best-trained eyes and ears will not be able to discern whether something is real or not. The advent of usable generative AI (e.g. ChatGPT and Midjourney) means that generated things are starting not just to look real but also to make sense in context.
Protestants in Germany (knowingly) sat down for their first sermon delivered by an AI.
The UN sees “information integrity” and our ability to distinguish between disinformation and misinformation as crucial to human progress.
Writers in the 2023 WGA strike have grave concerns about studios using AI to write television shows: it’s coming.
Video deepfakes and voice cloning technology make it easy for anyone to “say” and do anything. Is that really the Pope wearing Balenciaga? Sure papi, sure it is.
Where does that leave us?
In the real world we should always be circumspect about a salesman at our door offering a great price for our house, a Craigslist car buyer wanting to pay with a money order, or a preacher with personal jets who asks for our money. Where do those skills apply when we find out that the news we watch was generated using an AI that was trained with a bias towards Big Oil, Big Pharma, and Big Cereal? What does it mean when the pop star we’ve been listening to and following on social media is entirely fake, songs and all?
If we say “I’m not going to trust anything online” what does that mean when we see important guidance from the scientists, government, or the police? If we say “I’m only going to trust information from X and Y” we end up in an information silo and part of those tribes whether they are truthful or not.
We aren’t all able to look for gold-standard science (randomized controlled trials, large cohorts, not paid for by giant companies), and it often does not exist.
Sometimes our institutions make preliminary or conflicting statements.
As has always been true, there are very large players using media to change public opinion and spread fear, uncertainty, and doubt. The facts they present are malleable.
The media - social media, traditional media - amplify everything. It’s their job.
Strong99’s Steps for Safe Online Reading, Liking, and Sharing
Think of this like the ‘Stop, Drop, and Roll’ of online information consumption (you can find other ones like this by searching for ‘How to spot online disinformation’):
Stop. I get it. You saw something that really excited you or pushed some of your buttons. Before you quickly like it or share it … just stop and take a step back.
Look. Look at the profile of whoever posted it. Look at their other posts. Hover over any hyperlinked URL’s in their bio. You are trying to get more context.
Read. Read the whole article they posted. Read the links in the article. Find out where they came from. If it’s a scientific study, find out whether it has been published in a reputable, peer-reviewed journal. Use tools like the Media Bias Chart, science journal ratings, and Google Scholar to help you understand more about the quality of your information.
Search. Use a search engine - not social media - to search for more about the topic. Dig into the search results. Is the story coming from multiple reputable sources?
Wait. As we saw, fake news travels fast. People really did think that was an actual picture of the Pope wearing a puffy jacket and they shared it in that context. Online disinformation campaigns rely on this viral response. That’s why you’re going to wait a little while before you like and share.
Trust but verify, indeed.
Our goal with Strong99 is to not just live longer but to thrive longer. I argue that to do that we need to be fully capable of navigating the online world throughout our lives. That means staying well-informed, well-versed in qualifying what we think we’re seeing and hearing, and vigilant in checking where information is coming from. Many times it means just plain waiting for more information and consensus instead of spreading the current story ourselves and making something worse.