In times of universal deceit, telling the truth will be a revolutionary act.
—George Orwell

In November 2016, Oxford Dictionaries named “post-truth” its word of the year, after usage of the term jumped by 2,000 percent over the previous year. Other contenders included “alt-right” and “Brexiteer,” a sign of how much politics drove the choice. “Post-truth” seemed to capture what was happening at the time, especially with the Brexit vote and the US presidential election: people were ignoring facts, using bad reasoning, and outright lying. Donald Trump, for example, claimed—without any proof—that the election would be “rigged” if he lost. It made people wonder whether facts even mattered anymore.

After the election, things got worse. Trump said, again with no proof, that he actually won the
popular vote, even though Hillary Clinton had 3 million more votes than him. He claimed this
was because “millions of illegal votes” were cast. He also denied that Russia hacked the election,
even though 17 US intelligence agencies said they did. One of Trump’s team members even said
that “there’s no such thing, unfortunately, anymore as facts.”

When Trump became president on January 20, 2017, he made more false claims. He said his win
was the biggest since Reagan’s (it wasn’t), that the crowd at his inauguration was the largest ever
(photos and subway records showed it wasn’t), and that people gave him a standing ovation at
the CIA (this didn’t happen). In February, he falsely claimed that the US murder rate was at its
highest in 47 years, even though FBI data showed it was near a record low. This lie was
especially bad because he had already made a similar false statement during the Republican
convention to push the idea that crime was rising.

One might imagine a no less chilling exchange in the basement of the Ministry of Love in the pages of George Orwell’s dystopian novel 1984. Indeed, some now worry that we are well on our way to fulfilling that dark vision, in which truth is the first casualty in the establishment of the authoritarian state.

The Oxford Dictionaries define “post-truth” as situations where emotions and personal beliefs
are more important than actual facts when shaping people’s opinions. The word “post” doesn’t
mean we’re “past” truth like we’re “past” a war, but that truth has been pushed aside and doesn’t
matter anymore. This idea upsets many philosophers, but it’s not just a debate for academics.
Back in 2005, Stephen Colbert coined the word “truthiness”: believing that something feels true, even if there’s no real evidence for it. He invented the term to mock how George W. Bush relied on his “gut” for big decisions, like picking Harriet Miers for the Supreme Court or going to war in Iraq without solid proof of weapons of mass destruction. At the time, “truthiness” was a joke. Now it doesn’t seem so funny.

During the Brexit campaign in the UK, buses carried the false claim that the country was sending £350 million a week to the EU. This was just one example of how facts were
ignored to sway public opinion. In countries like Hungary, Russia, and Turkey, politicians have
been using fake information to influence their own people. Many believe this “post-truth” era is
about twisting reality to fit opinions rather than letting facts shape beliefs. It’s not always about
denying facts but about picking and presenting them in a way that supports a certain viewpoint.
This might explain what Kellyanne Conway meant by “alternative facts” when defending Sean
Spicer’s claims about the crowd size at Trump’s inauguration. Trump was upset when official
photos showed a lot of empty seats, and his team tried to spin it differently.

So, is “post-truth” just about lying or twisting facts? Not exactly. The term “post-truth” is more
than just political spin. It’s a way for people who value truth to show their worry that truth itself
is being undermined. But what about those who claim they’re just sharing the “other side of the
story” or that “alternative facts” have a place? The idea of one absolute truth has always been
debated. Does questioning it make someone conservative? Liberal? Or is it a mix of ideas?
Decades ago, left-wing thinkers like postmodernists criticized the idea of objective truth, and
now some believe right-wing groups have used those ideas for their own purposes.

The idea of truth has been around since ancient times. Plato, through Socrates, warned about
the dangers of false knowledge. Socrates believed ignorance could be fixed—people can learn.
The bigger problem, he said, is when someone falsely believes they already know the truth
because they might act on something that’s wrong.

Aristotle gave a simple definition of truth: if you say something exists when it doesn’t, or say
something doesn’t exist when it does, that’s false. But if what you say matches reality, it’s true.
Philosophers have debated this definition for centuries. Some argue that truth depends on how
well it fits reality (called the “correspondence” theory). Others have different ideas, like truth
being about consistency (coherentist) or usefulness (pragmatist). Despite these debates, most
agree that truth is essential and worth defending.

Right now, the focus isn’t on figuring out the perfect definition of truth but on understanding
the ways people twist or ignore it. Sometimes, we say things that aren’t true simply because we
made a mistake. That’s called a “falsehood,” and it’s different from a lie because there’s no intent
to deceive. Then there’s “willful ignorance,” where someone doesn’t know if something is true
but says it anyway without bothering to check. In this case, we can blame the person for being
lazy, especially if the truth is easy to find.

Lying takes it further—it’s when someone knowingly says something false to trick others. This
is a big deal because lying is about intentionally fooling someone. Lies always have an audience;
if no one is listening or no one will believe it, we might not call it lying. But if the goal is to
make someone believe something false, it crosses the line from simply misinterpreting facts to
outright falsifying them. Is that what “post-truth” is really about?

The shift between these stages isn’t always clear—it’s easy to slide from one to another. For
example, when Trump first claimed there were no pre-inauguration talks between his National
Security Advisor and Russian officials, it might have been “willful ignorance.” But when
intelligence agencies confirmed they had briefed him about it, and he still denied it for two
more weeks, it started to look intentional. Another example is Trump’s repeated claim that he
lost the popular vote because of millions of illegal ballots. Just three days into his presidency,
The New York Times boldly called it what it was—a lie.

There are different ways people relate to truth, and not all of them involve outright lying.
Philosopher Harry Frankfurt, in his book On Bullshit, says that when someone is "bullshitting,"
they’re not necessarily lying. Instead, they just don’t care whether what they’re saying is true.
Could this be what Trump is doing?

Sometimes, people manipulate truth for their own benefit. For example, when Newt Gingrich
said that how people feel about the murder rate is more important than actual FBI statistics, it
seemed cynical. This kind of behavior helps enable “post-truth.” Then there are politicians who
“spin” the truth to make it work in their favor. They know what they’re doing, and so does their
audience. This goes beyond bullshit—it’s about influencing others on purpose.

Post-truth can also take an extreme form when someone actually believes a lie, even when all
credible evidence says otherwise. In the worst cases, people think that how others react to a lie
can somehow make it true. This raises debates about where Trump fits: is he lying, careless,
cynical, or even delusional? Whatever the case, these attitudes toward truth are part of the
post-truth problem.

As a philosopher, I find all these forms of post-truth troubling. Even though we can categorize
them—ignorance, lying, cynicism, spin, or delusion—none of them should be acceptable to
those who care about truth. What’s new in the post-truth era isn’t just people misunderstanding
or lying about reality; it’s the denial of reality itself. This is dangerous. A mistaken individual
might suffer the consequences of their belief, like thinking a fake medicine will cure their
illness. But when leaders—or large parts of society—deny basic facts, the results can be
catastrophic.

When South African President Thabo Mbeki claimed that antiretroviral drugs were part of a
Western plot and suggested garlic and lemon juice could treat AIDS, it led to the deaths of over
300,000 people. Similarly, when President Trump said that climate change is a hoax created by
China to harm the U.S. economy, the long-term effects could be just as disastrous, if not worse.

The bigger issue isn’t just these specific beliefs but the mindset that allows people to treat some
facts as more important than others based on what they want to be true. Climate change deniers,
for instance, don’t reject all facts—they selectively accept only the ones that support their views.
Like conspiracy theorists, they use a double standard: they distrust climate scientists without
evidence, claiming a global conspiracy, but happily pick out isolated statistics that seem to
support their claims.

This attitude shows extreme skepticism toward facts they don’t like and blind trust in facts that
fit their beliefs. It’s not about giving up on facts altogether—it’s about twisting the way facts are
gathered and used to fit a personal agenda. This kind of thinking undermines the idea that some
truths exist no matter how we feel about them. For our own good—and for the good of those
making policies—it’s crucial to seek out and rely on those truths.

I’ve talked about “respecting truth” as following methods like science that have helped us find
real answers in the past. If someone says truth doesn’t matter or doesn’t exist, there’s not much
to say to convince them. But is that what post-truth is really about?

Post-truth isn’t claiming that truth doesn’t exist. Instead, it’s about putting feelings or political
beliefs above facts. The Oxford definition says post-truth is when emotions matter more than
evidence. But why does this happen? People don’t argue against clear facts for no reason—they
do it when those facts threaten their beliefs or agenda. Sometimes, they challenge facts to
protect their own views, even if it means lying to themselves. This happens because they care
more about pushing their beliefs than accepting the truth. Post-truth, then, is like forcing an
idea on others, whether there’s good proof for it or not. It’s a way to gain power by controlling
what others believe.

But this way of thinking can and should be questioned. Do we want policies based on feelings
instead of facts? Humans may naturally lean toward emotions and fears, but we can learn to
value solid evidence instead. Even though there are big questions about knowing the full truth,
no one argues with facts when it really matters—like seeing a doctor when they’re sick.
Similarly, governments shouldn’t act based on feelings, like building more prisons just because
it “feels” like crime is rising. Instead, decisions should rely on reality, not emotions.

So, what can we do about post-truth? The first step is to understand where it comes from. Some
people think post-truth just showed up in 2016 because of Brexit or the US election, but it’s
actually been around for a long time. The term might be new, but the problem goes back
thousands of years to how humans think irrationally, which affects both liberals and
conservatives. It also comes from academic debates that question if objective truth even exists,
which have been used to undermine trust in science. On top of that, changes in how the media
works have made things worse.

We’ve seen similar tactics in the rise of science denial over the past 20 years—on topics like
climate change, vaccines, and evolution. These same tricks are now being used to attack not just
science but facts in general. It’s not just about disagreeing with a scientific theory anymore; it’s
about ignoring clear evidence, like a photo or a video, when it doesn’t fit someone’s beliefs.

Post-truth can seem confusing, but it’s not impossible to understand. It’s also not just about one
person, like Trump. The problem is bigger than any single leader because it’s in us as much as
it’s in our leaders. Post-truth is the result of years of buildup, not something caused by a single
event. If we want to fight it, we need to look at the deeper reasons behind it, not just blame
recent events like Brexit or the US election. Those were symptoms, not the root cause.
SCIENCE DENIAL AS A ROAD MAP FOR UNDERSTANDING POST-TRUTH

When the facts change, I change my mind. What do you do, sir?
—John Maynard Keynes

The rise of post-truth thinking has a clear parallel in how science has been treated over the past
few decades. Once held in high regard for its rigorous methods and reliability, science is now
often questioned by nonexperts who disagree with its conclusions—frequently for ideological
reasons. This skepticism isn’t about the natural and necessary scientific process of critique and
refinement, which involves peer review, replication attempts, and empirical testing. Instead, it’s
part of a broader phenomenon known as science denialism.

Science thrives on scrutiny. Mistakes or biases, such as undisclosed funding sources or conflicts
of interest, are taken very seriously within the scientific community, precisely because
maintaining the credibility of scientific work depends on this level of self-regulation. Failures
are exposed, and good work is separated from bad through a brutal but necessary process.

So why, despite these safeguards, do some nonexperts challenge scientific findings? It’s not
because they think scientists are careless or sloppy. Rather, it’s often because the conclusions of
science threaten their ideological worldviews. Science denialism arises when people reject
scientific findings not based on evidence or logic but because the findings conflict with their
beliefs. To justify their rejection, they question the motives and competence of scientists,
casting doubt on the entire scientific process.

This deliberate undermining of science is less about genuine skepticism and more about
protecting preconceived narratives. It reflects a broader trend where facts are subordinated to
beliefs—laying the groundwork for the post-truth era.

One of the most prevalent strategies of science denialism is to accuse scientists of bias,
especially when their findings conflict with certain ideological, religious, or political beliefs.
While this critique might appear to reflect a respect for scientific rigor, it often serves as a tactic
to undermine trust in the scientific process altogether. Denialists frame their rejection of
evidence as a pursuit of "openness" and "fairness," but the underlying intent is typically to erode
the perception of science as neutral and objective. This paves the way for introducing
"alternative" theories that align with their worldview, no matter how unsupported by evidence
these alternatives may be.

A more insidious approach targets the methods and reasoning of science itself. Some critics
claim that scientists fail to meet proper scientific standards, accusing them of being
closed-minded or blinded by their own interests. This is often based on a misunderstanding—or
deliberate misrepresentation—of how science actually operates. For example, science does not
deal in absolute proof but rather in strong, evidence-based justification. A theory, no matter
how well-supported, remains subject to revision or falsification if new data arise. This humility
is a strength of the scientific method, as it allows for continuous refinement and self-correction.

However, denialists exploit this inherent openness by twisting it into a perceived weakness.
They argue that if no scientific theory is ever "proven," then all competing theories must remain
valid possibilities. This line of reasoning allows them to demand equal consideration for their
ideologically driven theories, even when these lack empirical support. By presenting themselves
as champions of "true" science, they create a false equivalence between robustly tested scientific
conclusions and speculative or discredited alternatives.

Ultimately, this tactic does not promote genuine scientific inquiry but undermines the very
foundations of evidence-based reasoning. It transforms the rigorous process of science into a
battlefield for ideological agendas, blurring the line between justified belief and unsupported
conjecture.

I believe science shouldn’t be ashamed of how it works, but should instead see it as a strength in
the search for truth. Saying that a scientific theory is well-supported by evidence is a big deal. If
we’re going to focus on "evidence" instead of trying to "prove" things, then the people who
believe in non-scientific ideas should also show their "evidence." When you ask them for
evidence, they usually can’t provide it. For people who don’t understand how science works, it
might seem like a weakness that science can’t "prove" things like evolution. But, in reality, you
can’t even "prove" that the Earth is round—science works with strong evidence, not absolute
proof.

A clear example of this is climate change. There’s no real debate in science about whether the
global temperature is rising and whether humans are causing it. But the public has been tricked
into thinking there’s a big argument in science about it. This is something that has been
explained well by others, so I’ll just give a brief summary. My main point here is that science
denial is key to understanding what’s happening in the "post-truth" era. We can trace this back
to the 1950s when tobacco companies started spreading doubt about whether smoking causes
lung cancer because they had a reason to protect their business.

“Doubt Is Our Product”

Science denial often starts because of economic or political reasons. Usually, it begins with
people who have something to lose, and later, others join in spreading false information. In his
book Lies, Incorporated, Ari Rabin-Havt explains how corporate-funded lobbying and lying on
topics like climate change, guns, immigration, healthcare, national debt, voting reform,
abortion, and gay marriage have affected political views.

There are good resources that explain how science denial started, especially with smoking. In
Merchants of Doubt, Naomi Oreskes and Erik Conway show how scientists hired by tobacco
companies started using certain tactics to spread doubt about smoking. The money side of this
story is important for understanding how political opposition often starts with people who care
more about making money. This is also connected to how climate change denial was funded by
oil companies and how fake news grew from clickbait into real disinformation.

The story starts in 1953 at the Plaza Hotel in New York. Tobacco company leaders met to
discuss a scientific study that showed cigarette tar caused cancer in mice. John Hill, a famous
PR expert, suggested they stop fighting over which cigarettes were healthier. Instead, they
should work together to “fight the science” by paying for more “research.” They created the
Tobacco Industry Research Committee to convince the public that there was “no proof” that
smoking caused cancer and that earlier research was being “questioned by numerous scientists.”

And it worked. The Tobacco Industry Research Committee (TIRC) used the idea that science
had shown there was "no conclusive link" between smoking and cancer. This is because science
can never prove an absolute link between two things. The TIRC took out full-page ads in
newspapers across the U.S., reaching 43 million people. This created confusion and doubt about
a scientific issue that was almost settled. As Rabin-Havt explains:

"The Tobacco Industry Research Committee was created to cast doubt on the scientific
agreement that smoking causes cancer. They wanted to make the media believe there were two
sides to the story, and both should be treated equally. They also tried to stop politicians from
taking actions that would hurt the tobacco companies' profits."

This campaign continued for 40 years, even with more scientific research proving the dangers of
smoking. It wasn’t until 1998 that tobacco companies agreed to shut down the TIRC and release
thousands of internal documents proving they had known about the risks all along. They did this
as part of a $200 billion settlement that protected them from future lawsuits. This allowed them
to keep selling their products worldwide, assuming the public knew the risks. Why did they do
this? The profits they made over those 40 years likely far outweighed any costs. Once the
evidence became undeniable and lawsuits started, the companies probably figured their future
profits would still be much higher than the $200 billion settlement. Less than ten years later, the
tobacco companies were found guilty of fraud for hiding what they knew about smoking and
cancer since 1953.

However, the fight against science wasn't over, because now there was a plan that others could
follow if they wanted to challenge scientists. In Merchants of Doubt, Oreskes and Conway explain
this plan in more detail. They show that not only did other science deniers copy the "tobacco
strategy," but some of the same people were involved. Ever since a tobacco executive wrote a
famous memo in 1969 that said, "doubt is our product since it is the best means of competing
with the ‘body of fact’ that exists in the minds of the general public," it became clear what
needed to be done. The strategy was simple: find and fund your own experts, tell the media there
are two sides to the issue, use public relations and government lobbying, and create enough
public confusion to question any scientific result you want to challenge.

As Oreskes and Conway explain, this plan worked well in later arguments over things like
Reagan’s “Strategic Defense Initiative,” nuclear winter, acid rain, the ozone hole, and global
warming. Some of the money for these campaigns even came from the tobacco industry. By the
time climate change became a political issue in the early 2000s, the system of corporate-funded
science denial was running smoothly:
Paid experts made fake research that was turned into talking points and memes, which were
then shown on TV by hired people, spread on social media, and pushed into the public's mind
with paid ads.

Why look for real scientific disagreement when you can just make it up? Why bother with peer
review when you can spread your views by controlling the media or using public relations? And
why wait for government officials to figure out the “right” answer when you can buy their
support with industry money? This is a very cynical approach, but it’s just part of the path that
led to post-truth. After 2016, it seems almost old-fashioned to care about leaked memos,
shocking testimonies, and videos showing contradictions when the idea of truth itself is now
being questioned. How did they know they could push things this far? Because these tactics
worked in the next big fight: against global warming.

Climate Change and Beyond

Global warming is one of the most extreme examples of science denial today. As mentioned,
there are many books about how “skepticism” about climate change is actually a well-organized
and fake effort to challenge solid scientific evidence. Oreskes and Conway argue in Merchants of
Doubt that there’s a direct link between the “tobacco strategy” from the 1950s and the modern
debate over global warming. In this case, the money came from the fossil fuel industry, and the
think tank involved was the Heartland Institute. It’s disappointing to learn that some of the
earliest funding for Heartland came from tobacco giant Philip Morris. It’s also not surprising
that other major funders over the years have included ExxonMobil and the Koch brothers.

The Heartland Institute received over $7.3 million from ExxonMobil between 1998 and 2010 and
almost $14.4 million between 1986 and 2010 from foundations linked to Charles and David Koch,
whose company Koch Industries has big investments in oil and energy.

Since 2008, ExxonMobil has claimed it stopped funding organizations that deny climate change.
However, investigations have shown that while ExxonMobil was spending money to confuse
people about climate change, they were also planning to explore new drilling opportunities in
the Arctic once the ice cap melted. The Heartland Institute now threatens to sue anyone who
says they still get money from fossil fuel companies. While they’ve stopped revealing their
funding sources, what we do know is that Heartland proudly displays on its website The Economist’s description of it as “the world’s most prominent think tank promoting skepticism about man-made climate change.” Leaked documents also show their plan, which the
New York Times explains as trying to “undermine the teaching of global warming in public
schools” and promote a curriculum that casts doubt on the scientific findings that fossil fuel
emissions harm the planet's future.

Heartland isn’t the only group that has disputed climate change. In the past, there were other industry-backed groups like the Edison Electric Institute, the National Coal Association, and the
Western Fuels Association. These groups, along with public relations organizations like the
Climate Council and the Information Council on the Environment, were set up to create doubt
about global warming, just like the TIRC did for tobacco. The George C. Marshall Institute also
played a big role in promoting skepticism about climate change until it closed in 2015. They
even denied things like secondhand smoke, acid rain, and the ozone hole. Although they got
some money from fossil fuel companies, it seems like their main reason was their political belief
in rejecting "big-government" solutions. Some university scientists, who were treated like
celebrities at Heartland events, also questioned climate change. But to say there’s no “scientific
consensus” on climate change or that it’s not “settled science” is almost laughable.

In 2004, researchers reviewed 928 scientific papers on climate change and found that not a
single one questioned that climate change was caused by humans. A 2012 update showed only
0.17 percent of 13,950 papers disagreed. A 2013 survey of 4,000 peer-reviewed papers found that
97 percent agreed human activity was the cause of global warming. But according to the latest
poll, only 27 percent of Americans think “almost all climate scientists agree that human
behavior is mostly responsible for climate change.” Why is there so much public confusion
about whether climate change is real or whether scientists agree on it? Because this doubt has
been purposely created over the last 20 years by people with a financial interest in promoting it.

In 1998, the American Petroleum Institute (API) held meetings in Washington, D.C., to talk
about how the industry should respond to the Kyoto Protocol, a treaty meant to reduce global
greenhouse gas emissions. Representatives from big oil companies like Exxon, Chevron, and
Southern Company attended. One can’t help but wonder if the tobacco executives from 1953
would’ve been there too. These meetings were supposed to be secret, but they were leaked to the
public quickly, so we didn’t have to wait forty years to know what was discussed.

The after-action memo read in part:

Victory Will Be Achieved When
• Average citizens “understand” (recognize) uncertainties in climate science; recognition of
uncertainties become part of the “conventional wisdom”
• Media “understands” (recognizes) uncertainties in climate science
• Media coverage reflects balance on climate science and recognition of the validity of
viewpoints that challenge the current “conventional wisdom”
• Industry senior leadership understands uncertainties in climate science, making them
stronger ambassadors to those who shape climate policy
• Those promoting the Kyoto treaty on the basis of extent [sic] science appears [sic] to be out of
touch with reality

The similarity between the “tobacco strategy” and the API plan is too obvious to ignore. The
leaked memo shows that some of the key tactics for this plan were to “find, recruit, and train
five independent scientists to talk to the media,” “set up a Global Climate Data Center as a
non-profit educational group,” and “inform and educate Congress members.” Does all of this
sound familiar?

I think we can stop here. While the rest of the story is interesting, you can check out the sources
in this chapter to learn more. The main point is that even though the API’s plan was made
public just days after it was created, it still worked really well. The “facts” didn’t matter. The
media had already been trained to always present “both sides” of any “controversial” scientific
issue. As a result, the public stays confused. And now, our new president (along with other
Republicans like Sen. James Inhofe and Sen. Ted Cruz) keeps saying that climate change is a
hoax.

Implications for Post-Truth

The lesson from these cases of science denial should not be missed by today’s politicians. It
seems like you don’t even need to hide your plan anymore. In today’s world, where partisanship
is common, people often just “pick a side” instead of looking at the facts. Misinformation can be
spread openly, and fact-checking is often ignored. People use only the facts that support their
views and completely reject the ones that don’t. This is part of creating the new post-truth
world. It might be shocking for those who care about facts, but why would politicians hide their
plans when there’s no consequence for doing so? Donald Trump learned this when he spread the
“birther” conspiracy for years and still became president. When supporters care more about
which side you’re on than what the facts say, opinions start to matter more than facts.

The methods used in today’s post-truth world were learned from earlier campaigns that tried to
fight the scientific facts and succeeded. If people can deny the facts about climate change, why
not deny things like the murder rate? If the link between tobacco and cancer can be hidden by
years of misinformation, why not try the same with other issues? It’s the same strategy with the
same roots, but now it’s aimed at a bigger target—reality itself. In a world where beliefs are
more important than science, post-truth is the next step.
THE ROOTS OF COGNITIVE BIAS

People can foresee the future only when it coincides with their own wishes, and the most grossly obvious facts can be ignored when they are unwelcome.
—George Orwell

One of the deepest causes of post-truth thinking has been around for a long time: cognitive bias.
Psychologists have been studying for decades how we’re not as rational as we think we are.
Some of these studies show how we react when we face uncomfortable or surprising truths.

A key idea in psychology is that we try to avoid mental discomfort. It doesn’t feel good to think
badly about ourselves. Some psychologists call this "ego defense," based on Freudian theory, but
the idea is simple: we prefer to think of ourselves as smart, informed, and capable rather than
not. So, what happens when we’re faced with information that challenges something we believe
to be true? It causes psychological tension. How could we be intelligent and still believe
something false? Only the strongest people can handle too much self-criticism, like “I was so
wrong! The answer was right in front of me, and I didn’t see it. I must be an idiot.” To relieve
this tension, we often change one of our beliefs.

What matters is which belief we change. You’d think we would change the belief that’s been
proven wrong. If we’re shown that we’re wrong about something, it seems easiest to adjust our
belief to match the new evidence. But that’s not always what happens. There are many ways to
change a belief, some rational, and some not.

Three Classic Findings from Social Psychology

In 1957, Leon Festinger wrote a groundbreaking book called A Theory of Cognitive Dissonance,
where he explained that we try to keep our beliefs, attitudes, and behavior in balance. When
they’re not in harmony, it creates mental discomfort. To ease this discomfort, we usually try to
protect our sense of self-worth.

In one famous experiment, Festinger gave people a boring task to do. Some people were paid $1
for doing it, and others were paid $20. Afterward, they were asked to tell someone else that the
task was fun. Those who were paid only $1 said the task was more enjoyable than those who got
$20. Why? Because if they had done a pointless task for just $1, it would make them feel like
fools. So, they convinced themselves the task was actually fun to reduce their discomfort. The
people paid $20 didn’t feel the need to do that, because they knew they were just in it for the
money.

In another experiment, Festinger had people hold protest signs for causes they didn’t believe in.
After doing it, they started to believe the cause was a little more important than they had
originally thought.

But what happens when we’ve invested more than just time or effort, like when we’ve publicly committed to something or even dedicated our life to it? Festinger studied this in When Prophecy Fails, his book about a doomsday cult. He wrote about a group called “The Seekers,” who believed that their leader,
Dorothy Martin, could communicate with space aliens. They thought the aliens would come to
rescue them before the world ended on December 21, 1954. They sold everything they owned
and waited on a mountain for the aliens to arrive. But, of course, the aliens never came, and the
world didn’t end. The cognitive dissonance must have been huge. How did they handle it?
Dorothy Martin told them their faith and prayers had been so powerful that the aliens decided
not to come. In other words, the Seekers had saved the world!

It’s easy to laugh at the beliefs of people who fall for things like the “doomsday cult,” but studies
show that all of us experience cognitive dissonance to some degree. For example, if we join a
gym that’s too far away, we might tell ourselves that the workouts are so intense we only need to
go once a week. Or if we don’t get the grade we wanted in a tough class, we might convince
ourselves that we never really wanted to pursue that career anyway. However, the way we deal
with cognitive dissonance can be even stronger when we’re around people who believe the same
thing we do. If only one person believed in the “doomsday cult,” they might have felt alone or
even isolated. But when others share that mistaken belief, it becomes easier to convince
themselves they’re right.

In 1955, Solomon Asch conducted a famous experiment that showed how social pressure can
influence what we believe. He found that sometimes, we’ll ignore what we see with our own eyes
just to fit in with the group. In the experiment, Asch had a group of people (with a few who were
secretly part of the experiment) look at two cards. One card had a line, and the other had three
lines. The task was to pick the line that matched the one on the first card. At first, everyone gave
the right answers, but then the “confederates” (people who were in on the experiment) started
giving obviously wrong answers, saying that a shorter or longer line matched the original one.
The real subject, who was the only one not in on the experiment, would feel pressure when it
was their turn. Even though they could clearly see the right answer, they might feel the tension
of being the only one who disagreed with the group.

The results were surprising. On about 37 percent of the trials, subjects went along with the group’s wrong answer, even though they could see it was wrong. They chose to agree with the group rather than trust what they knew was true. This showed how much we want to fit in and avoid standing out, even if it means
ignoring the evidence in front of us.

In 1960, Peter Cathcart Wason did an important experiment on how people make mistakes in
thinking. In his paper “On the Failure to Eliminate Hypotheses in a Conceptual Task,” he
introduced an idea that many people now recognize: confirmation bias. Wason’s experiment was
simple but clever. He gave 29 college students a task where they had to figure out a rule based on
a number pattern. For example, he gave them the numbers 2, 4, 6 and asked them to guess the
rule that created this sequence. The students could try different sets of numbers, and the
experimenter would tell them if their numbers followed the rule or not. The goal was to figure
out the rule with as few tries as possible.

The results were surprising. Only 6 out of the 29 students figured out the correct rule without
making any wrong guesses. Thirteen students proposed one wrong rule, and nine suggested two
or more wrong ones. One student didn’t even come up with a rule at all. What went wrong?
Wason found that the students who didn’t succeed were too focused on testing their own
guesses in ways that would confirm them. For example, when they saw the numbers 2, 4, and 6,
many of them guessed that the rule was "increase by two" and then tried numbers like 8, 10, and
12. They were told their guess was correct, so they kept trying more numbers that followed that
same pattern. But they never tested their rule with numbers that could disprove it. So when they
finally announced their rule, they were shocked to learn it was wrong because they hadn’t tested
it properly.

The thirteen students who had announced one wrong rule eventually figured out the right answer: “any three numbers in ascending order.” Once they stopped only trying to confirm their guess, they were open to the idea that there might be more than one way to generate the original
number pattern. But there were still nine people who gave two or more wrong answers, even
though they had enough proof that their guess was wrong. They didn’t try numbers like 9, 7, 5.
Wason thinks maybe these people didn’t know how to test their rule themselves, or maybe they
knew how but found it easier or more comforting to get a clear answer from the experimenter.
In other words, their bias had such a strong grip on them that they couldn’t think outside of it.
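
Wason’s task is simple enough to simulate, and a small sketch makes the failure mode concrete. The following is a minimal illustration, not a reconstruction of Wason’s materials: the function names and the particular test triples are my own choices for this example. It shows why confirmation-only testing is uninformative: every triple that fits the guessed rule “increase by two” also fits the true rule “any three ascending numbers,” so the experimenter’s feedback can never contradict the guess.

```python
# A minimal sketch of Wason's 2-4-6 task (illustrative only; the test
# triples below are chosen for this example, not taken from the study).

def true_rule(triple):
    """Wason's actual rule: any three numbers in ascending order."""
    a, b, c = triple
    return a < b < c

def hypothesis(triple):
    """The typical subject's guess: each number increases by two."""
    a, b, c = triple
    return b == a + 2 and c == b + 2

# Confirming tests: triples chosen because they fit the hypothesis.
confirming = [(8, 10, 12), (20, 22, 24), (1, 3, 5)]

# Potentially disconfirming tests: triples the hypothesis rejects.
disconfirming = [(1, 2, 3), (5, 10, 100), (9, 7, 5)]

for triple in confirming + disconfirming:
    feedback = true_rule(triple)    # what the experimenter announces
    predicted = hypothesis(triple)  # what the subject's guess predicts
    verdict = "consistent" if feedback == predicted else "guess refuted"
    print(f"{triple}: rule says {feedback}, guess says {predicted} -> {verdict}")
```

Every confirming triple comes back consistent, so a subject who tests only those will announce “increase by two” with full confidence and still be wrong. The triples the guess rejects, like (1, 2, 3), are the ones that can refute it. (The descending triple (9, 7, 5) happens not to separate these two particular rules, since both reject it, but it would eliminate hypotheses that ignore order, which is presumably why Wason flagged it.)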

These three experiments—(1) cognitive dissonance, (2) social conformity, and (3) confirmation
bias—are all important for understanding post-truth, in which so many people form their beliefs without reason or good evidence, relying instead on gut feelings and what their peers believe. But post-truth didn’t start in the 1950s or 60s. It needed a mix of factors, like
strong political bias and social media “bubbles,” that really took off in the 2000s. Meanwhile,
more evidence of cognitive bias kept showing up.

Contemporary Work on Cognitive Bias

A lot has been written about the big changes that have happened recently in the field of
behavioral economics. In the late 1970s, some economists started to question the old
assumptions of "perfect rationality" and "perfect information" used in traditional economic
models. These models worked because they simplified things, but what if we could use a more
experimental approach?

In his book Misbehaving: The Making of a Behavioral Economist, Richard Thaler talks about his
early work with Daniel Kahneman and Amos Tversky, who were already major figures in
cognitive psychology. In their 1974 paper “Judgment under Uncertainty: Heuristics and Biases,” Kahneman and Tversky described three mental shortcuts, or heuristics, that produce systematic biases in decision-making. Over the next few years, their
research on choice, risk, and uncertainty showed even more issues with how people make
decisions. Their findings had such a big impact on many fields that in 2002, Kahneman won the
Nobel Prize in Economics. Sadly, Tversky had passed away in 1996, so he couldn’t win the prize
too. Kahneman even said he had never taken an economics course and that everything he knew
about economics came from working with Thaler.

Suddenly, people were paying a lot more attention to cognitive bias. Some of this was about
rediscovering old ideas in psychology, like "source amnesia" (where we remember something we
heard or read but can’t recall if the source was trustworthy) and the "repetition effect" (the idea
that we’re more likely to believe something if it’s repeated to us a lot). These concepts were
known by car salesmen and even by the Nazi propaganda minister Joseph Goebbels. But new research
revealed even more cognitive biases, including two important ones related to Wason’s discovery
of confirmation bias. These are the “backfire effect” and the “Dunning-Kruger effect,” both of
which are part of motivated reasoning.

Motivated reasoning is the idea that what we want to be true can change how we see what
actually is true. We often think with our emotions, and this can affect our reasoning. This is why
we try to reduce feelings of discomfort in a way that makes us feel better, even if it’s not the
most logical. As Upton Sinclair put it, “it is difficult to get a man to understand something, when his salary depends upon his not understanding it.” People may ignore facts if doing so helps them avoid uncomfortable truths, especially truths that threaten their job or status.

Confirmation bias is closely related to motivated reasoning. When we really want to believe
something is true, we often look for evidence that supports it. This is why detectives might pick
a suspect and then focus on finding proof that supports their choice, instead of looking for
evidence that could prove they’re wrong. However, motivated reasoning and confirmation bias
are slightly different. Motivated reasoning is the mindset where we might change our beliefs to
fit what we want, even unconsciously. Confirmation bias is how we go about confirming those
beliefs by interpreting information in a way that agrees with what we already think.

Some of the experiments on motivated reasoning go way back to earlier findings in social
psychology. Recently, researchers have suggested that this is why sports fans from opposing
teams can watch the same video and see different things. It’s not just about being biased
because we have something at stake, like our team winning. Sure, this can happen sometimes,
where fans defend their team even if the evidence shows they got an advantage. For example, we
might see a play where the referees gave our team a better spot, but we won't question it
because it helped our team win. But often, fans really don’t see the situation the same way
others do. For example, in New England, fans will defend Tom Brady and the Patriots, even if
people accuse them of cheating. They genuinely believe the Patriots are not cheaters, and this
isn’t just about supporting the home team. It’s a strong psychological feeling that exists for all
sports fans, not just for Patriots fans.

David DeSteno, a psychologist at Northeastern University, studied how “team affiliation” affects
moral judgment. In one experiment, people who had just met were randomly split into two
teams with different colored wristbands. Then, they were told they had to choose between doing
an easy task for 10 minutes or a harder task for 45 minutes. If they didn’t want to choose, they
could flip a coin. But what they didn’t know was that they were being filmed. After the
experiment, 90% of the people said they had been fair, even though most had chosen the easier
task and didn’t flip the coin. What was even more interesting is that when another group of
people watched the video of those who lied and cheated, they condemned them—unless the
people on the video were wearing the same wristband color. This shows that we can excuse bad
behavior if we feel connected to someone based on something as simple as a wristband. Just
imagine how our reasoning would change if we were emotionally invested in a bigger cause.

Motivated reasoning has also been looked at by neuroscientists, who discovered that when our
thinking is influenced by emotions, a different part of our brain is activated. In one study, thirty
people who were strongly loyal to a political party were given a task that either hurt their
candidate or hurt the other candidate. The brain area that lit up during this task was different
from when they were asked to think about neutral topics. It’s not surprising that our biases
affect our brain, but this study was the first to show proof of how it works in the brain. Now,
with this in mind, let’s talk about two of the most interesting biases that explain how our beliefs
about politics can make us ignore facts and evidence.

The backfire effect: The “backfire effect” is a term based on research by Brendan Nyhan and
Jason Reifler. They found that when people were shown evidence that their political beliefs were
wrong, they didn’t accept it. Instead, they rejected the evidence and stuck even more firmly to
their mistaken beliefs. In some cases, the more proof they were shown, the stronger their wrong
beliefs became.

In the study, participants were given fake newspaper articles that seemed to back up some
common misconceptions. One article claimed that Iraq had weapons of mass destruction
(WMDs) before the Iraq War, and another claimed that President Bush had completely banned
stem cell research. Both of these claims were false. When they were given correct information,
like a quote from President Bush admitting there were no WMDs in Iraq, people's reactions
depended on their political views. Liberals and centrists (which was expected) accepted the
correct information. However, conservatives didn’t. In fact, some conservative participants
actually became more convinced that Iraq had WMDs after being told it didn’t.
This means the correction actually made things worse—the conservatives who were given the
correct information about Iraq not having WMDs were more likely to believe the false claim
than those who didn’t get any correction.

The researchers thought maybe conservatives just didn’t trust the media, but that didn’t explain
the results. Both groups—those who received the correction and those who didn’t—read the
same statement from President Bush. So, the backfire effect was caused by the correction itself.
If the issue was just distrust of the media, conservatives would have ignored the correction, but
instead, they believed the false claim even more.

In a second test, the researchers wanted to see if the same thing would happen with liberal
partisans. This time, they showed participants a fake news story claiming that Bush had
completely banned stem cell research (when, in reality, he had only limited federal funding for
stem cell lines created before August 2001 and did not put limits on privately funded research).
Afterward, participants were given the correct information. This time, the correction worked for
conservatives and moderates, but not for liberals. However, there was no backfire effect for the
liberals. The correct information didn’t change their mistaken belief, but it also didn’t make
them more sure of the false claim. The truth didn’t make things worse for them.

Some people have compared trying to change strongly held political beliefs with facts to "using
water to fight a grease fire." This is especially true for the most hardcore conservatives. But
Nyhan and Reifler, the researchers, pointed out that it’s not true that the most passionate people
on either side will never change their beliefs when faced with facts. They mentioned other
studies that suggest if partisans hear the same correct information over and over again, they
might eventually start believing it. In one such study, David Redlawsk and others looked at
whether "motivated reasoners" (people who are biased by their beliefs) ever change their minds.
Their conclusion agreed with Nyhan and Reifler: even the most strongly biased people can
change their beliefs after being exposed to the truth multiple times.

The Dunning-Kruger effect (also called the “too stupid to know they’re stupid” effect) is a bias
where people with low ability often don’t realize how bad they are at something. Almost
everyone experiences this effect to some degree, unless they’re an expert in everything. Earlier,
Kahneman and Tversky studied "overconfidence bias," where people are too confident in their
abilities even when they aren’t capable. Why do we think we can do things like drive a scooter in
Bermuda or fly a plane in bad weather, even when experts warn us not to? The Dunning-Kruger
effect builds on this idea, but it’s not just about underestimating the difficulty of a task, it’s
about underestimating our own abilities.

In their 1999 experiment, David Dunning and Justin Kruger found that people often think they
are much better at things than they actually are, especially when they don’t have much
experience. This is similar to the joke about "Lake Wobegon," where everyone thinks they are
above average. How many drivers or people in relationships would admit they’re below average?
Dunning and Kruger found that this overestimation happens in many areas, like intelligence,
humor, and even skills like logic or playing chess. The reason for this is that people who are
incompetent often can’t see their own mistakes. The same skills that make someone good at
something also help them understand if they are good at it or not. So, many people just keep
making mistakes without realizing it.

In one experiment, Dunning and Kruger had forty-five smart college students take a tough
20-question logic test from an LSAT prep guide. These students were also asked to rate how well
they did and how they thought they compared to others. The students were usually accurate
about how many questions they got right or wrong, but they wrongly thought they were "above
average" in their overall performance. The most surprising part was that the students who
scored the lowest on the test thought they did much better than they actually did. For example,
students who scored in the 12th percentile thought they were in the 68th percentile. This shows
that people who perform the worst are often the most confident about their abilities.

At this point, you might wonder why the students couldn't admit they were bad at something.
Maybe they just didn’t want to admit it? But when they were offered $100 to rate their skills
more accurately, they still couldn’t do it. This suggests that it’s not about lying to others—it’s
about lying to themselves. We love our own image so much that we can't see our flaws. It’s not
surprising then, that when we care a lot about our beliefs—like political ones—we’re often
reluctant to admit when we’re wrong. We might even trust our own “gut feeling” more than
experts' facts. Take, for example, when Senator James Inhofe brought a snowball into the Senate
in 2015 to “prove” global warming wasn’t real. Did he realize how wrong he was, mixing up
weather and climate? Probably not—he was "too stupid to know he was stupid." Similarly, when
Donald Trump claimed he knew more about ISIS than the generals, could he really believe that?
Many of us just don’t want to admit when we’re not experts on something, and instead, we keep
talking, ignoring the saying, “It’s better to keep silent and be thought a fool than to open your
mouth and prove it.”

The backfire effect and the “too stupid to know they’re stupid” effect are clearly connected to
the post-truth world we live in. These biases can mess with our thinking and even stop us from
realizing we’re being biased. When we care deeply about something, our thinking can be
affected, even if we don’t realize it. It’s interesting to think about why these biases exist in the
first place. Wouldn’t the truth help us survive better? Maybe, but the truth is that these biases
are part of how our brains work. We can’t get rid of them, but with practice and critical
thinking, we can try to reduce how much they influence us. Whether we’re liberal or
conservative, these biases are something we all inherit as humans.

As mentioned earlier, some cognitive biases might affect people differently depending on their
political views. For example, we've already seen that the backfire effect doesn't seem to affect
liberals as much. Other researchers have looked into how certain biases might be tied to
political beliefs. In an interesting study, anthropologist Daniel Fessler explored a bias called
"negativity bias," which tries to explain why conservatives might be more likely to believe scary
or threatening lies than liberals. In his research, Fessler showed people sixteen statements (most
of them false), but none were so crazy that they couldn’t seem true. Some statements were
harmless, like "exercising on an empty stomach burns more calories," while others were very
alarming, like "terrorist attacks in the United States have increased since September 11, 2001."
He then asked people to say if they were liberal or conservative and to rate whether the
statements were true. There was no difference when it came to the harmless statements, but
conservatives were more likely to believe the false statements, especially the scary ones.

Do people with different political views think differently about things like this? Studies have
shown that conservatives tend to have a larger fear response in the brain (the amygdala)
compared to liberals. Some people think this might be why so many fake news stories during the
2016 election were aimed at conservatives. If you're trying to spread a conspiracy theory, maybe
the right-wing audience is more likely to believe it. Fessler’s research found that the negativity
bias wasn’t huge, but it was noticeable: as you moved further right on the political spectrum,
people became 2% less skeptical of statements that warned of bad things happening than they
were about statements that promised good things. Even though this difference is small, it could
still have an effect on a large group of people. Fessler’s study is the first to look at how gullibility
might be linked to political identity.

Implications for Post-Truth

In the past, maybe our cognitive biases were reduced because we had to interact with others. It's
kind of ironic that today, with so much media around us, we might actually be more separated
from different opinions than our ancestors were when they had to live and work with people
who shared and exchanged information face-to-face. When we talk to others, we can't avoid
hearing different views, and research shows that this can actually help us think better.

In his book Infotopia, Cass Sunstein talks about how, when people interact, they can sometimes
come up with solutions that they wouldn't have figured out alone. He calls this the "whole is
more than the sum of its parts" effect or the "interactive group effect." In one study, Peter Wason, who coined the term "confirmation bias," had a group of people try to solve a tough logic
puzzle. Most of them couldn't solve it alone, but when they worked together, something
interesting happened. People started questioning each other’s ideas and noticed flaws in their
thinking, which they didn’t see when thinking on their own. As a result, the group was able to
solve the puzzle, even when none of the members could do it by themselves. Sunstein points out
that groups perform better than individuals, especially when they interact and share their ideas.
When we let others challenge our thoughts, it gives us the best chance of finding the truth.
Critical thinking and being open to others' ideas is the best way to find the right answer.

But today, we can choose who we interact with. No matter our political views, we can easily live
in a "news silo," only listening to information that matches what we already believe. If we don’t
like someone's opinions, we can unfriend or block them. If we enjoy conspiracy theories, there’s
probably a place where we can find them. We can surround ourselves with people who think like
us. And once we do that, we might start changing our opinions to fit in with the group. Solomon
Asch’s research showed that this happens. For example, if we agree with most of our friends on
topics like immigration, gay marriage, and taxes but aren't so sure about gun control, we might
feel pressured to change our opinion to fit in. This isn't always a good thing. It's like the
negative side of the interactive group effect. We tend to feel more comfortable when our views
match those of our group. But what happens when the group is wrong? No matter if we’re
liberal or conservative, none of us has all the answers.

I'm not saying that we should think all political opinions are equally true, or that the truth
always lies somewhere in between different views. The middle ground between truth and
falsehood is still false. What I am saying is that all ideologies can sometimes get in the way of
discovering the truth. Maybe researchers are right that liberals tend to think more deeply than
conservatives, but that doesn't mean liberals should feel superior or think their political
instincts are always based on facts. From the work of Festinger, Asch, and others, we see that
following a political ideology too strictly can be dangerous. We all have a built-in bias to agree
with what others around us believe, even when the facts don’t support it. Deep down, we all
want to fit in, sometimes even more than we care about the truth. But if we really want to find
the truth, we have to fight against this. Why? Because these cognitive biases are the perfect
setup for a "post-truth" world.

When we already want to believe something, it doesn't take much to make us believe it,
especially if the people we care about believe it too. Our natural biases make it easy for people
with an agenda to manipulate us, especially if they can make us distrust all other sources of
information. Just like we can't escape our own biases, staying in a "news silo" doesn't protect us
from post-truth. The problem is that they are connected. We all rely on our sources of
information, and we're especially vulnerable when they tell us exactly what we want to hear.
THE DECLINE OF TRADITIONAL MEDIA

Journalism is printing what someone else does not want printed: everything else is public relations.
—George Orwell

Social media has played a big role in creating “information silos,” which let us stick to what we
already believe. But to understand this, we first need to look at how traditional media has
changed.

In the past, big newspapers like The New York Times, The Washington Post, and The Wall Street Journal, along with TV networks like ABC, CBS, and NBC, were the main sources of news. Back in 1950, people in the U.S. bought an average of 53.8 million newspapers every day, which was more than the number of households. This means many homes were subscribing to more than one paper! By 2010, daily sales had fallen to 43.4 million, reaching only 36.7% of households, a drop in household penetration of nearly 70%.

On TV, news was a big deal too. Anchormen like Walter Cronkite, who hosted CBS News from
1962 to 1981, were trusted figures. He was even called “the most trusted man in America.” Every
evening, people would tune in for half an hour of nationwide news.

People think of this time as the “golden age” of news. In the 1950s and 1960s, many smaller
newspapers went out of business because of competition from TV. This left most cities with just
one major newspaper, but it was usually better and more serious than before. On TV, the limited
news time—just 30 minutes a day—meant networks could focus on high-quality investigative
stories. News had its specific time slot, while TV stations made most of their money from
entertainment shows. Big news interruptions like “we interrupt this broadcast” were rare and
usually signaled something major like war or tragedy.

Oddly enough, the scarcity of TV news was good for the news divisions, because they weren’t expected to make money. Ted Koppel explains why:
Back then, TV network bosses were worried about losing their licenses if they didn’t act in the
“public interest,” as required by a rule from 1927. To prove they were following this rule,
networks used their news programs, which often lost money or barely broke even, as examples
of doing the right thing. News was like a “loss leader,” a way for NBC, CBS, and ABC to show
they were being responsible while making big profits from their entertainment shows.

This all started to change in 1968 when CBS launched the news show 60 Minutes. After a slow
start, it became the first news program to make money. This was a big deal. Even though it
didn’t change how TV news worked right away, it showed network bosses that news could be
“profitable.”

The "golden age" of TV news lasted through the 1970s, but things changed in 1979 during the
Iran hostage crisis. People suddenly wanted more news, but networks didn’t want to mess with
their highly profitable entertainment shows. NBC had The Tonight Show with Johnny Carson,
which was super popular. CBS was barely trying with late-night movies, and ABC was showing
reruns. Then ABC came up with an idea:

They moved daily updates about the Iran hostage crisis to late at night. This was partly because
ABC didn’t have any big shows competing with Carson and partly because news programs were
cheaper to make. ABC created a new show called Nightline, focused only on the hostage
situation. Every night, the show opened with “America Held Hostage” and the number of days
the crisis had lasted. The anchor, usually Ted Koppel, filled the time by interviewing experts,
journalists, and other people connected to the event.

The show was a hit and stayed on air even after the hostage crisis ended a year later. But the big
question remained: would people really want to watch more news than this?

In 1980, CNN made a bold move by launching 24-hour news. This was a gamble because they
needed enough content to fill the entire day. Ted Koppel's show on ABC could bring in experts
to talk about specific issues, like Iran, but how many experts were out there, and how many
topics could stay interesting for that long? And would viewers even want to watch the news like
a “buffet,” dipping in and out whenever they wanted, instead of waiting for their usual evening
broadcast or the next day’s newspaper?

The answer was a big yes. Even though some people criticized CNN for giving “watered-down”
coverage compared to traditional networks, the channel became a quick success. By 1983, CNN
was making its first profits, and through the 1980s, its popularity kept growing. Major events
like the Challenger explosion, the protests in Tiananmen Square, the fall of the Berlin Wall, and
the Gulf War brought more viewers to cable news.

Of course, complaints about bias were nothing new. For decades, newspapers, TV, and now
cable news had faced criticism. During the Vietnam War, President Lyndon Johnson wasn’t
happy with how networks covered him. Nixon’s vice president, Spiro Agnew, mocked
journalists, calling them “nattering nabobs of negativism.” Conservatives often said the news
had a “liberal bias,” but there wasn’t much of an alternative until the late 1980s.

Talk radio had been around for thirty years before Rush Limbaugh showed up, but he changed
the game. As Tom Nichols writes in The Death of Expertise, Limbaugh “[made] himself a source
of truth that went against the rest of American media.” Limbaugh believed that the media was
biased toward liberals, like Bill Clinton, and wanted to give a voice to people who felt ignored.
He ended up being super popular.

In just a few years, his show was on more than 600 radio stations across the U.S. Limbaugh
became famous for letting listeners call in and share their support. But the calls were carefully
chosen, and for a good reason—Limbaugh didn’t think he was great at debating. But debating
wasn’t really the point. The show’s goal was to make people feel part of a group that already
agreed with each other.

Most people didn’t listen to Limbaugh’s show to hear new “facts.” They tuned in because they
felt left out by the news on TV and in newspapers, which they thought had a liberal bias. Plus,
before call-in radio, media had always been one-sided—someone else decided what was true and
just told you. Limbaugh’s show was different. It gave people a way to share their opinions and
feel connected to a community. Even before the term “confirmation bias” was popular,
Limbaugh figured it out, and it made him unstoppable.

By the mid-1990s, more people started seeing how profitable partisan news could be. MSNBC
launched in July 1996, followed by Fox News in October 1996. Both networks positioned
themselves as rivals to CNN. Some dispute whether MSNBC is as partisan as Fox, and in its early years it was less so, featuring conservative voices like Ann Coulter and Laura Ingraham. But over time, MSNBC became known for its liberal perspective. Fox News, on the other hand, was clear about its conservative stance from the start.

Fox News was created by Roger Ailes, a conservative media expert, and it took the idea of
partisan news to a whole new level. Ailes turned what Rush Limbaugh did on radio into a
full-blown TV network. Conservative commentator Charles Krauthammer even joked that Ailes
“discovered a niche audience: half the American people.”

Fox News has been very influential, but it’s also been criticized for its approach. For example,
after the tragic shooting of 20 kids in Newtown, Connecticut, Fox News executives told their
producers not to allow any talk about gun control on air. Fox is also known for pushing
conservative talking points. Studies back this up: in 2013, 69% of Fox News guests doubted
climate change, compared to 29% in the Los Angeles Times and 17% in the Washington Post.
Another study found that 68% of Fox News stories included personal opinions, compared to just
4% at CNN.

Because Fox doesn’t clearly separate facts from opinions, many of its hardcore viewers might
not realize they’re getting biased information. In fact, a 2011 study found that people who only
watched Fox News were less informed than people who didn’t watch any news at all.

Ted Koppel, a respected journalist, has recently spoken out strongly against “partisan
media”—news that openly supports one political side. He thinks it’s bad for democracy. What’s
interesting is that Koppel’s own show, Nightline, in the 1980s showed how interview-based news
could make money. But even though his show started that trend, Koppel feels things have gone
too far now.

He says, “The success of Fox News and MSNBC makes me sad. I get why they make so much money by
giving people opinions that match their own, but this is bad for the country. Both channels don’t even try
to be objective anymore. They show the world the way their political viewers want to see it, not how it
actually is. This isn’t journalism—it’s like a scam where people only hear what they want, and by the time
they realize it’s not true, it’s too late.”

Since Donald Trump became president, Koppel has focused more of his criticism on Fox News.

In a 2017 on-air exchange, he went head to head with Fox News host Sean Hannity:

KOPPEL: I am cynical.
HANNITY: Do you think we’re bad for America? You think I’m bad for America?
KOPPEL: Yeah … in the long haul I think you and all these opinion shows—
HANNITY: Really? That’s sad, Ted. That’s sad.
KOPPEL: No, you know why? Because you’re very good at what you do, and because you have attracted a significantly more influential—
HANNITY: You are selling the American people short.
KOPPEL: No, let me finish the sentence before you do that.
HANNITY: I’m listening. With all due respect. Take the floor.
KOPPEL: You have attracted people who are determined that ideology is more important than facts.

Some people dismiss everything Fox News does by calling it “fake news.” (Critics often joke it
should be called “Faux News.”) The issue of “fake news” is huge, and its connection to the
post-truth world is something we’ll talk about more later. But some say “fake news” didn’t start
with Fox—it started with comedy.

In 2014, Pew did a survey asking Americans which news source they trusted the most.
Conservatives picked Fox News (44%), while liberals chose network news (24%) and tied on
others like public TV, CNN, and Jon Stewart’s The Daily Show. But wait—The Daily Show is
comedy! Even Jon Stewart himself said he delivered “mock” news, meaning it was for laughs, not
facts. When real journalists worried that young people got their news from his show, Stewart
replied, “If you think it’s my job to ask tough news questions, we’re in bad shape.”

Not everyone lets Stewart, The Onion, or other satirists off the hook. Stephen Marche, in a Los
Angeles Times op-ed, said, “The left has a post-truth problem too: It’s called comedy.” He pointed
out that in 2009, Time magazine named Jon Stewart the most trusted news anchor, which he
thinks shows satire’s role in creating a post-truth culture.

But satire isn’t fake news. Satire exists to expose lies and nonsense, not to deceive. It uses humor
to show how ridiculous real-life events can be. Marche even admits, “Political satire is the opposite
of fake news. Satirists strip away journalism’s masks to show what they think is true. Fake news uses those
masks to spread lies.” Still, he argues that satire has made people see the news as a joke, which, no
matter the intention, has added to the post-truth problem.

This seems like a lot of blame to put on political satire. It reminds me of how Sean Hannity
defended Fox News by saying, “We have to give some credit to the American people that they are
somewhat intelligent and that they know the difference between an opinion show and a news show.”

So, who is responsible if people get the wrong idea? Is it the messenger’s fault if their followers
misunderstand? Or is it only the fault of those who are purposely trying to trick people with
lies?

But what if the way the story is presented actually makes people misunderstand? Can you just
say it’s the audience’s responsibility to figure it out, or does that avoid owning up to any bias?

The Problem of Media Bias

We’ve already seen how traditional media started to lose ground when opinion-based, partisan
news became popular. Now, let’s talk about whether traditional media also lost its quality and
commitment to good journalism.

When cable news shows became big in 1996, traditional media didn’t want to be seen as the
same. To stand out, network TV, CNN, and well-known newspapers worked harder to show they
were “objective.” Fox News, with its slogan “fair and balanced,” seemed to mock them. Fox wasn’t
claiming their coverage was neutral—they saw themselves as balancing out what they thought
was a left-leaning media.

Traditional media didn’t want to admit they were biased, so they tried to prove they were fair by
giving “both sides” of every issue. But instead of improving journalism, this made things worse.
Giving platforms to extreme or biased voices in the name of balance didn’t help people
understand the truth. It just made the news less accurate.

This focus on “equal time” hurt topics like science. When journalists gave space to both sides of
debates about climate change or vaccines, they created what’s called “false equivalence.” This
made it look like both sides were equally valid, even if one side didn’t have real evidence.

Science deniers took advantage of this. They pressured journalists by claiming the media was
“biased” if it didn’t cover their side. The media fell for it and started giving attention to debates
that weren’t real. The result? People got confused, and the media accidentally helped spread
misinformation.

In 1988, President George H. W. Bush promised to fight the “greenhouse effect” with the “White
House effect,” even before climate change became a major political issue. But over the next few
years, global warming turned into a deeply partisan topic. Oil companies started doing their
own “research” and wanted the media to cover it. At the same time, they were giving money to
politicians and pushing them to support their views. Now, we know this was all part of a
strategy to create “manufactured doubt” and distract from the fact that most climate scientists
already agreed that climate change was real and caused by human activity. But with so much
money involved, they didn’t want to let the scientists have the final say. As long as there were
“skeptics,” the media felt it was their job to report climate change as if it was still up for debate.

James Hansen was one of the first people to speak out about climate change. In 1988, he testified
before Congress, which led to two bills in the US Senate. He’s one of the leading experts on the
subject. But despite the clear science, Hansen faced a lot of frustration from the media’s need to
be “objective” on a topic that was already well-supported by facts. He shares a story where,
before appearing on a public television program, the producer told him that the show “must”
also include someone who would disagree with the idea of global warming. The producer said
this was common in commercial media and even public TV. Media outlets wanted “balance” to
keep advertisers and supporters happy. Hansen’s account shows that while many news articles
about climate change gave equal weight to both sides, scientific articles in trusted journals
didn’t question the consensus that human activities cause global warming. Because of this, even
when the science was clear, the media allowed “contrarians” to make it seem like there was still
doubt, leading the public to believe there was uncertainty when there wasn’t.

What happened to Hansen wasn’t unusual. The public started seeing TV "debates" where
scientists were on one side and "skeptics" on the other. The host would give each side the same
amount of time to speak, and then call the issue "controversial." For a while, many TV news
shows seemed to copy Fox News' motto, "we report, you decide."

Naturally, this confused people. Was there a real scientific debate about climate change or not?
If there wasn’t, why were the news shows treating it like there was? The media might have told
themselves that it wasn’t their job to pick a side on a "partisan" issue, but by not doing any
research to find out that scientists weren’t divided, they were making a big mistake. The goal of
being "objective" isn’t to give equal time to both truth and lies; it’s to help the truth come out.
Since scientists had already agreed on climate change, the only "controversy" was the political
one created by oil companies and people who spread lies. The result was that, even though there
wasn’t a real scientific debate—just like there hadn’t been a debate about the link between
smoking and cancer years earlier—the public thought there was.

And who could blame them? They saw it on the news! By this point, the media had stopped
focusing on "telling the truth" and instead were trying to prove they weren’t biased, which
played right into the hands of those trying to confuse the public with fake skepticism. Why did
the media do this? Part of it might have been lazy reporting. As one commentator said:

"Objectivity excuses lazy reporting. If you’re on a deadline and all you have is 'both sides of the
story,' that’s often good enough. It’s not that these stories about a debate have no value, but too
often, in our rush to cover the latest thing, we fail to dig deeper and understand what’s true and
what’s false."

This can lead to serious problems. When you give equal time to lies as you do to the truth, it lets
people believe false ideas. Political groups were using the media, and the media was misleading
the public. But there's another reason for this: money. In a world where media outlets are
competing for attention, they may have focused on finding a "story," and sometimes that means
creating drama. Donald Trump said it best in his book The Art of the Deal—the media loves
controversy more than the truth. But this wasn’t just an isolated case; it happened again with
the fake link between vaccines and autism, based on bad research by Dr. Andrew Wakefield in
1998.

This time, the drama was even bigger. Sick kids, their sad parents, celebrities getting involved,
maybe even a government cover-up! And once again, the media failed to report the truth. The
truth was that Wakefield’s research was almost certainly fraudulent. He had a huge conflict of interest, and other researchers could not replicate his results, all of which was known by 2004. (In 2010 his paper was formally retracted and his medical license was revoked.) But by then, the vaccine-autism story had already taken off, and even the official exposure of Wakefield’s fraud couldn’t undo the damage. The constant TV debates made people scared, and vaccination rates dropped. Measles, a disease that had been nearly eliminated, came back and spread to 84 people across 14 states.

This is the problem: “bias in information” happens when media coverage goes against what the
scientific community agrees on. How did this happen? How did trying to be "fair" and
"balanced" actually lead the media away from the truth? The issue is that the media felt pressure
to give equal time to both sides, even when one side was lying. This created a "denial discourse,"
where fringe opinions got too much attention. As the saying goes, “If you add just one rotten
ingredient to a dish, the whole dish will taste rotten.”

Balance sounds good because it aims to be neutral, showing both sides of a story equally. But
there’s a problem: balance can sometimes replace checking the facts. Journalists, even ones who
know science, often don’t have enough time or expertise to verify claims. This leaves the door
open for people with an agenda to manipulate how a topic is reported.

Did this happen with the global warming issue? It’s not surprising that it did. Remember the
1998 meeting held by the American Petroleum Institute (API) and the strategy memo that got
leaked? The oil companies hired some "independent scientists," and it worked. Boykoff and
Boykoff talk about how the API’s media strategy helped create bias in the way climate change
was covered:

In most of the big US newspapers, the coverage was "balanced." This meant they gave "roughly
equal attention" to the idea that humans were causing global warming, and to the idea that
natural changes were the only reason for the rise in the Earth's temperature.

The print journalists were tricked, just like the TV journalists.

Implications for Post-Truth

Traditional journalists today are in a tough spot. As more people turn to opinion-based and
sometimes unedited content, these journalists are being criticized for being biased, even when
they try to tell the truth. If they call out the president for lying (even when he’s lying), they get
attacked. If they don’t give "skeptics" a chance in scientific debates, they’re accused of only
telling one side. Is it any wonder that some journalists wish they could go back to the days when
their authority and journalistic values were respected?

Instead, they face constant criticism. Donald Trump has called any media report he disagrees
with "fake news" and says the press is "among the most dishonest people on earth." And it’s
working. A recent Gallup poll showed that Americans’ trust in the media has dropped to just
32%, down from 72% in 1976 after major events like Watergate and the Vietnam War.

This is all part of the shift to a "post-truth" world. Since news audiences are so divided, the line
between traditional and alternative media is fading. Many people now prefer news from sources
that may not tell the truth, but that fit their views. In fact, some can’t even tell which sources are
biased anymore. If you believe all media is biased, it might not matter which biased source you
choose. Those who try to measure media reliability after the election have even faced threats.

Social media has made this situation worse. With facts and opinions mixed together online, it’s
hard to know what to believe. There are no filters or checks, so people are flooded with
partisanship. With the mainstream media’s reputation at a low point, those pushing propaganda
don’t need traditional media anymore—they have their own platforms. And if that doesn’t work,
there’s always Twitter. If the media is the enemy, Trump can go directly to the people. Why
bother with fact-checking when the president can speak directly to the public? The challenge to
reality is now complete.
THE RISE OF SOCIAL MEDIA AND THE PROBLEM OF FAKE NEWS

Don’t believe everything you read on the Internet.
—Thomas Jefferson

The decline of traditional media, especially newspapers, isn’t surprising and has a lot to do with
the rise of the Internet. The highest point for newspaper sales in the U.S. was in 1984. After that,
sales started to drop because of competition from cable TV. But things really got worse in the
1990s when the Internet became available to the public. By 2008, during the financial crisis,
many newspapers fell into a bad cycle: their earnings dropped, they cut staff, their content
became smaller, and more readers stopped buying.

Experts have warned that by offering less, newspapers are encouraging readers to leave. Many
papers have fewer pages and fewer articles now, and newsroom teams have shrunk. Peter
Appert, an analyst at Goldman Sachs, said, “It seems obvious that cutting costs this much is
affecting the quality of the news. I can’t prove that it’s why circulation is falling, but if I ran a
newspaper, it would worry me.”

The 2016 Pew Research Center report painted a grim picture:

● Weekday newspaper sales dropped 7%, and Sunday sales fell 4%, the worst declines since
2010.
● Ad revenue also fell nearly 8%, the biggest drop since 2009.
● Newsroom jobs shrank 10% in 2014, more than in any year since 2009.
● Over the last 20 years, the newspaper workforce has lost about 20,000 jobs—a 39%
decrease.

At the same time, TV news was going through its own problems. Starting in the 1990s, TV
networks moved away from fact-based investigative reporting and replaced it with
opinion-focused shows. They also shut down many of their foreign news offices to focus on
cheaper, local coverage. By 2015, that seemed like a smart financial move because the biggest
news stories were happening right in the U.S.

The 2016 presidential election was a huge win for TV networks. Their audience numbers went
through the roof, and they made tons of money. CNN made $1 billion in profit that year, the best
in its history, and Fox News, already the most profitable cable network, was expected to make
$1.67 billion. People couldn’t stop watching the election coverage, day and night. Compared to
the previous year, daytime viewership went up by 60% on Fox, 75% on CNN, and a shocking 83%
on MSNBC.

How did they pull this off? By giving people what they wanted: endless coverage of Donald
Trump. Fox News openly supported Trump, and many said their coverage felt like propaganda
for the Republican party. CNN, on the other hand, aired Trump’s rallies live, without any checks
or commentary. Some reports say the networks gave Trump around $5 billion worth of free
media coverage during the election.

It wasn’t just good for Trump—it worked out great for the networks, too. Trump brought them
huge ratings, and they cashed in. But many people think the networks ignored their job of
fact-checking Trump’s lies. Instead, they used the same "false equivalence" approach they had
used on other issues, like climate change. They included both Trump and Clinton supporters on
discussion panels, even when the facts weren’t equal. Some even blame CNN for helping Trump
win the presidency.

CNN’s president, Jeff Zucker, admitted they made mistakes. He said, “If we messed up, it’s
probably because we aired too many of his campaign rallies in the early months and let them
run.” During those rallies, Trump constantly insulted the media. He made reporters stay in
fenced-off areas and wouldn’t let them show crowd shots during his speeches. The news
networks agreed to these rules just to keep airing Trump and get the ratings.

With newspapers struggling to survive and TV networks chasing profits over truth, people
turned to social media to share their frustrations and try to find reliable information.

When Facebook started in 2004, it was just a social site where people could connect with
friends, make new ones, and share their thoughts. Over time, it became a big place for sharing
news. This wasn’t just because people posted stories on their pages, but also because Facebook
added a “trending stories” section, showing popular news picked based on what people “liked.”
This meant the news you saw was what Facebook thought you’d want to see. Other platforms
joined in too—YouTube started in 2005, and Twitter followed in 2006.

Social media made it hard to tell the difference between news and opinion. People started
sharing stories from blogs, alternative sites, and random places without checking if they were
true. By the 2016 presidential election, social media was filled with super-biased posts.
Technology made it easy for people to click on stories that matched what they already believed,
whether those stories were fact-checked or not. Instead of paying for newspapers, people just
read free stories their friends shared. This made it tough for traditional news outlets to compete.

A Pew poll showed that 62% of American adults get their news from social media, and most of
that comes from Facebook. That’s 44% of all US adults getting their news from Facebook alone.
The problem? With no proper fact-checking or editing, it’s hard to know which stories are real
and reliable. Some people don’t even care—they just read stuff that fits their beliefs.

This creates “news silos.” People stick to stories they agree with and block out the rest, even
unfriending people with different views. What news you see depends on what your friends share
and what Facebook’s algorithm thinks you’ll “like.” It’s ironic because the Internet can give you
accurate info instantly, but for many, it’s just turned into an echo chamber. This is dangerous.
Without editors checking what counts as “news,” how can we know when we’re being misled?
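
As a toy illustration (entirely my own construction; Facebook’s real ranking system is far more complex and not public), it takes very little machinery to build an echo chamber. A feed that simply ranks stories by how close they sit to what a user already liked stops surfacing the other side almost immediately:

```python
# Toy model of a "news silo" (hypothetical; not Facebook's actual
# algorithm). Each story has a political lean from -1 (left) to +1
# (right). The feed ranks stories by closeness to the average lean of
# what the user has liked, so the visible window narrows with each click.

def rank_feed(stories, liked_leans, top_k=3):
    """Return the top_k stories closest to the user's average liked lean."""
    avg = sum(liked_leans) / len(liked_leans)
    return sorted(stories, key=lambda lean: abs(lean - avg))[:top_k]

stories = [-0.9, -0.5, -0.1, 0.0, 0.2, 0.6, 0.9]
liked = [0.6]                      # the user liked one right-leaning story
for step in range(3):
    feed = rank_feed(stories, liked)
    liked.append(feed[0])          # the user likes the closest match again
    print(f"step {step}: feed = {feed}")
```

In this sketch a single right-leaning “like” is enough to pin the feed near +0.6 forever; the strongly left-leaning story at -0.9 never appears again.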

When I was about seven, I went to the supermarket with my mom. While we were waiting in
line, I saw a newspaper with a crazy headline. I pointed it out to my mom, and she said, “Oh,
that’s trash. That’s the National Enquirer. They print all kinds of lies. You can’t believe that.” I
asked her how she knew it wasn’t true without reading it and how a newspaper could print
something fake. We had a serious talk about it.

Now imagine this: What if you took a copy of the National Enquirer and a trusted paper like the
New York Times, cut out their news stories, and mixed them up on a single page? If you scanned
it, changed the fonts, and removed the logos, how would you know which stories were true?
That’s exactly what happens on news sites like Facebook, Google, and Yahoo. Sure, you could
look at the source, but do you know which sources to trust? You might believe something if it
says The New York Times, but what about if it says InfoWars, Newsmax, or even ABCNews.com.co?

There are so many “news” sources now that it’s hard to tell which ones are reliable unless you
really check carefully. Some even try to trick you by looking official. For example,
ABCNews.com.co isn’t part of ABC News, but it seems like it could be. The problem is that real,
fact-checked news is often shown right next to lies and propaganda. This makes it almost
impossible to know what’s true anymore. It’s a perfect setup for people with hidden agendas to
take advantage of us and our natural biases.

The History of Fake News

Fake news didn’t start with the 2016 presidential election or even with social media. Some
people think fake news has existed ever since we started having “news” at all.

It became a big deal when news started spreading widely after Johannes Gutenberg invented the
printing press in 1439. Back then, “real” news was hard to confirm. There were many sources of
information—official updates from governments or churches, and stories from sailors or
merchants—but there were no rules about being honest or fair. Readers had to pay close
attention to figure out the truth. Fake news has actually been around longer than “verified” or
objective news, which only became common a little over 100 years ago.

Fake news kept popping up over the years, even during important times like the scientific
revolution and the Enlightenment. Right before the French Revolution, pamphlets were spread
around Paris claiming the government was almost broke. But these pamphlets came from rival
political groups, each using different numbers and blaming different people. Eventually, enough
facts came out for people to understand what was really happening, but, just like now, “readers
had to be both skeptical and skilled to figure out the truth.”

During the American Revolution, both the British and the Americans created fake news. For
example, Benjamin Franklin made up a story saying some Native Americans who were
“scalping” people were working with King George. It was pure fiction, but it was used as
propaganda.

Fake news kept going strong in America and other places even after the American Revolution.
But eventually, the idea of “objectivity” in news reporting started to take shape. Michael
Schudson explains this in his book Discovering the News: A Social History of American Newspapers:

“Before the 1830s, objectivity wasn’t even a thing. American newspapers were supposed to share
opinions and take sides, not be neutral. They didn’t even focus on reporting the daily ‘news’ like
we think of it now—the whole idea of ‘news’ was created during the Jacksonian era.”

So, what changed during the Jackson era to make people care about nonpartisan, fact-based
news?

It started with the invention of the telegraph in the 1840s. To use the telegraph for faster news
sharing, a group of New York newspapers created the Associated Press (AP) in 1848. Since AP
stories were shared with newspapers that had all kinds of political views, they had to keep their
reporting “objective” enough so everyone would use it. By the late 1800s, AP reports became
much more neutral and factual compared to how single newspapers usually wrote their stories.
People argue that the Associated Press’s style of reporting set the standard for journalism.

But that didn’t mean fake news disappeared or that all newspapers became “objective.” The
Associated Press gave newspapers more neutral content, but those newspapers still did
whatever they wanted with it.

“Objective reporting didn’t become the main rule in journalism even in the late 1800s, when the
Associated Press was growing. At the time, newspapers cared more about telling a dramatic
story than just sticking to the facts. Sensationalism—the art of making stories exciting—was the
biggest trend in newspapers back then.”

Back in the 1890s, newspapers were in the middle of what’s called the “yellow journalism” era.
Big media bosses like William Randolph Hearst and Joseph Pulitzer were fighting to sell more
papers. “Yellow journalism” meant newspapers focused on shocking and dramatic
stories—more about grabbing attention than telling the truth.

How extreme was it? Well, it even helped start a war. “The Spanish-American War might not
have happened if Hearst hadn’t started a fierce battle for newspaper readers in New York.”
Worse, this wasn’t an accident—it was on purpose to sell more papers:

In the 1890s, Hearst’s Morning Journal used wild exaggerations to stir up the Spanish-American
War. When a reporter in Havana said there was no chance of war, Hearst famously replied, “You
furnish the pictures, I’ll furnish the war.” He even printed fake drawings showing Cuban
officials searching American women, which made people angry and ready for war.

Yellow journalism didn’t stop there. Another big moment came in 1898 when the U.S. Navy’s
USS Maine exploded in Havana, killing over 250 Americans. No one ever figured out what
caused it, but the yellow press quickly blamed Spain. They came up with the slogan “Remember
the Maine” to fire people up and push for war.

But during all this chaos, the idea of “objective” news reporting started to grow:

In 1896, during the wild days of yellow journalism, the New York Times began to stand out by
focusing on facts and “information” instead of dramatic stories. While the Associated Press
aimed to stay factual to appeal to all political sides, the Times targeted wealthier, more educated
readers by delivering clear, accurate information.

Over time, the push for objectivity became stronger, shaping the way news was reported for
much of the 20th century. However, things changed again with the rise of the internet:

The arrival of digital news brought back the wild side of yellow journalism. Fake news made a
comeback, and today, it’s once again a big problem, shaking up how we trust and consume news.

Let’s think about this for a second. Isn’t it kind of amazing that we expect news to be “objective”
and not take sides? If you look back in history, the rich and powerful have always tried to
control what regular people believe. Before books and newspapers were cheap and available, it
wasn’t surprising that a king or ruler could pretty much “create his own reality” and make
people believe what he wanted.

That’s why the idea of “free media” was so revolutionary—and pretty new. Even if some of the
news was fake, it was still a big deal to have different opinions and information out there. But
why do we now think this should cost us nothing? Why do we act like it’s not our job to dig a
little deeper and figure out what’s true?

For most of history, news was biased. Pamphlets were full of politics. Newspapers were owned
by people with money and agendas. Has that really changed? Yet, we expect news to be
completely fair and get upset when it’s not. But are we actually doing anything to support
fact-based, neutral news? Are we paying for it, or were we ignoring it until something like the
election reminded us how important it is?

It’s easy to blame technology and say, “Things are different now.” But technology has always
shaped how fake news spreads. The printing press, the telegraph, and now the Internet have all
played a role. The Internet, especially, has made news so easy and cheap to access that we’ve
gotten “lazy.” We feel like we deserve good, truthful news, but we’re not using our critical
thinking skills anymore. And isn’t this part of why fake news has made such a big comeback?

Fake News Today

We’ve talked a lot about the history of fake news, but what exactly is it? “Fake news” isn’t just
news that’s wrong—it’s news that’s “deliberately” made up to fool people. It’s created on
purpose.

At the start of the 2016 election, fake news was mostly about “clickbait.” The goal was to get
people to click on shocking headlines so the creators could make money through ads. It’s like
when the National Enquirer puts headlines like “Hillary: Six Months to Live!” to make you pick
up the magazine at the store. But soon, it got worse. The people making fake news noticed that
positive stories about Trump and negative ones about Hillary got the most clicks. So they started
focusing on these stories. Over time, fake news went from being about making money to being
about “political manipulation.”

A lot of this fake news came from places like the Balkans and Eastern Europe. The New York
Times ran a story in 2016 about Beqa Latsabidze, a student from Tbilisi, Georgia. He was trying
to make money using Google ads. At first, he posted positive stories about Hillary Clinton, but
they didn’t do well. Then he started writing positive stories about Trump, and they blew up. “It’s
all Trump,” he said. “People go nuts for it.” He realized that if he wrote anything negative about
Trump, he’d lose his audience. So, he stuck to bashing Hillary and supporting Trump—and
made thousands of dollars.

One of his most popular stories was completely fake. It claimed that Mexico would close its
border to Americans if Trump became president. When asked about it, Latsabidze said he didn’t
have any political agenda—he was just “following the money.” He found it hard to believe
anyone took his stories seriously. “Nobody really believes that Mexico is going to close the
border,” he said. In fact, he didn’t even think what he was doing was fake news. He called it
“satire.”

All 17 American intelligence agencies agreed that Russia was involved in hacking the US
election. So, when people claim they were innocent, it’s hard to believe them. The Kremlin
hacked the Democratic National Committee’s computers to find information that could
influence the election. A lot of the fake news supporting Trump came from Russia and nearby
countries. Could some of the “Hillary-bashing” fake news have been part of a political plan?
Even if the hackers were only after money, whose goals were they helping? In one small town in
Macedonia, over a hundred websites were spreading pro-Trump stories. It’s hard to believe this
wasn’t part of a bigger, coordinated effort.

Fake news creators weren’t just outside the US—they were inside too. A couple of months after
the article on fake news from the Balkans, the New York Times wrote about Cameron Harris, a
recent college graduate and Trump supporter. He created a viral fake news story on his website,
“Christian Times.” His headline claimed, “Tens of Thousands of Fraudulent Clinton Votes
Found in Ohio Warehouse.” Harris made it all up—he invented a janitor, used a photo of British
ballot boxes he found online, and wrote the story at his kitchen table. The story was shared with six million people! Harris said his goal was just to make money, and he earned $5,000 in a few days. But he also
said the experience taught him something. “It shocked me,” he admitted, “how easily people
believed it. It was like a sociology experiment.” When people found out he created the story, he
lost his job and apologized, though he tried to justify it by saying fake news existed “on both
sides.”

We don’t yet know how coordinated Russia’s hacking efforts were because the FBI and Congress
are still investigating. But one thing is clear: whether the creators of fake news had political
motives or not, their stories had real political effects. For example, how many people believed
Harris’s fake story about Hillary “stuffing ballots” and shared it with undecided voters? What
about stories from right-wing outlets, like ones speculating Hillary had a brain tumor? Even if
they weren’t outright lies, weren’t they still meant to mislead?

Fake news can also come from carelessness or guessing. After the election, Eric Tucker, a
businessman, tweeted a picture of buses in Austin, Texas. He claimed they were being used to
bring in paid protestors against Trump. Tucker didn’t make any money from this, but his tweet
went viral, getting shared 16,000 times on Twitter and over 350,000 times on Facebook. Even
Trump himself noticed and tweeted about “professional protestors” being incited by the media.
This shows how fake news can spread quickly and cause harm, even if it wasn’t created for
profit.

As we saw with science denial, there are people who lie and people who are lied to, and both are
harmful to the truth. Climate change denial started because of oil companies wanting to protect
their profits, but it soon turned into a political belief with serious consequences. Similarly, fake
news about the 2016 election began as clickbait, but it quickly became a tool for political harm.
Fake news is when people deliberately spread lies to get others to believe them, whether it’s for
money or power. In both cases, the results can be very dangerous.

A month after the election, a man walked into a pizzeria in Washington, DC, and shot a gun,
claiming he was investigating a fake news story he read. The story said that Bill and Hillary
Clinton were running a child sex slave ring out of the pizzeria. This fake news, known as
#pizzagate, had spread across social media and certain websites. Luckily, no one was hurt, but it
shows how fake news can lead to serious consequences.

Buzzfeed reported that in the three months before the 2016 election, the top 20 fake news stories
on Facebook were shared more than the top 20 real news stories. Could fake news have helped
Trump win the election? Or worse, could it have led to something as dangerous as nuclear war?

A few weeks after #pizzagate, the Pakistani defense minister threatened to use nuclear weapons
against Israel because he read a fake news story claiming that Israel would destroy Pakistan if
they sent troops to Syria. If fake news could start the Spanish-American War, could it cause
another war today?

Fake news is everywhere. If you don’t believe it, search “did the Holocaust happen?” on Google.
In December 2016, the top result would lead you to a neo-Nazi website. The day after the
election, a fake story claiming that Trump had won the popular vote was the top result when you
searched “final election result.”

Down the Rabbit Hole

During his first year as president, Trump used the idea of "fake news" to his advantage by
calling anything he didn’t agree with fake. In January 2017, at a press conference before his
inauguration, he refused to answer a question from a CNN reporter, saying that CNN was
spreading fake news. What happened? CNN had reported that both Trump and Obama had been
shown an unconfirmed intelligence report with some claims about Trump. CNN didn’t say the
claims were true, they just reported that Trump and Obama had been briefed on it. But Trump
dismissed it all as “fake news.” Over the next months, Trump also called “fake news” stories
about his White House staff arguing, his poll numbers dropping, and other verified facts. Isn’t it
ironic that calling something “fake news” could itself spread fake news?

Remember, fake news isn’t just news that is wrong or embarrassing. For news to be considered
fake, it has to be intentionally made up. There would need to be a clear reason behind it, like
trying to push a certain idea. Without proof of a conspiracy in the media, it seems silly to claim
that all news is fake.

Fake news is really about lying. It's made to get people to believe something, even when the
person knows it’s not true. In that sense, fake news is very similar to “propaganda.”

In his book How Propaganda Works, Jason Stanley disagrees with the idea that propaganda is just
about lying or tricking people. He says propaganda isn’t always about convincing people of
things that aren’t true, and it doesn’t always come from a place of insincerity. Propaganda,
Stanley argues, is about using and strengthening a flawed belief system. If he's right, the
relationship between fake news and propaganda is more complicated and dangerous than we
think. According to Stanley, the goal of propaganda isn’t just to deceive—it’s to control people.

In a recent radio interview on NPR, Stanley explained that the goal of propaganda is to get
people to “pick a team” and build loyalty. It's not about sharing facts, but about getting people
to take sides. Trump, by using some of the old tricks of propaganda (like making people
emotional, attacking critics, blaming others, creating divisions, and making things up), could be
pushing us toward more authoritarian politics. The aim of propaganda isn’t to prove you're
right, but to show that you control the truth. When a leader has enough power, they can ignore
reality itself. This might sound unbelievable, but it has happened before, even in American
politics. For example, Karl Rove once told critics of George W. Bush that they were part of the
“reality-based community,” and then said, “we’re an empire now, and when we act, we create our
own reality.”

Some ideas are so scary that we hope they're not true. But Stanley says that ignoring reality can
actually be popular. The first step to gaining political control is to lie and get away with it.
Stanley quotes Hannah Arendt, saying “what convinces masses are not facts, and not even
invented facts, but rather, open defiance.” Arendt also said that the ideal subject of totalitarian
rule isn’t someone who fully believes in a certain ideology, but someone who can no longer tell
the difference between fact and fiction.

This is a pretty extreme idea, but even if you only see fake news as a form of lying for money
(which still had political consequences), it’s important to remember history. We’ve seen before
how controlling information can be a serious political threat. Joseph Goebbels, Hitler’s
propaganda minister, was great at taking advantage of things like “source amnesia” (forgetting
where you heard something) and “the repetition effect” (repeating something enough times until
people believe it). Goebbels said that “propaganda works best when those who are being
manipulated are confident they are acting on their own free will.” Deception, manipulation, and
exploitation have long been used to create an authoritarian political system.

Trump’s strategy is perhaps different from this, yet not unrecognizable:
1. Raise questions about some outlandish matter (“people are talking,” “I’m just repeating what I
read in the newspaper”), for instance that Obama was not born in the United States or that
Obama had Trump wiretapped.
2. Provide no evidence (because there isn’t any) beyond one’s own conviction.
3. Suggest that the press cannot be trusted because they are biased.
4. This will lead some people to doubt whether what they are hearing from the press is accurate
(or at least to conclude that the issue is “controversial”).
5. In the face of such uncertainty, people will be more prone to hunker down in their ideology
and indulge in confirmation bias by choosing to believe only what fits with their preconceived
notions.
6. This is a ripe environment for the proliferation of fake news, which will reinforce items 1
through 5.
7. Thus, people will believe what you say just because you said it. Belief can be tribal. It doesn’t
take much to get people to believe what they want to believe, if it is being said by someone
whom they see as an ally and they are not being challenged by reliable counterevidence (and
sometimes even when they are).

Why bother with censorship when the truth can be hidden under a pile of lies? Isn’t that what
the problem with "post-truth" is all about: that feelings matter more than the truth? We can't
even tell what's true anymore.

Timothy Snyder, a historian who writes about the Holocaust, has a book called On Tyranny. He
wrote it to warn us about where we're headed, where things like fake news and “alternative
facts” can easily lead us toward authoritarian rule. In a radio interview, Snyder said, “post-truth
is pre-fascism.” This might sound like a big jump from something like fake news, but with social
media spreading false information faster than ever, shouldn’t we at least be aware that this could
happen?

The question still remains: Is fake news just propaganda? If fake news is created just to make
money, it’s more like fraud. But if it’s meant to trick people into believing lies, it might not be
full-blown propaganda yet. Stanley argues that the point of propaganda is not just to fool you,
but to take political control. Deception is one way to do that, but it’s not the only way. True
dictators don’t need your approval. If post-truth really is a step toward fascism, maybe fake news
is just the first step, getting us used to confusion before something worse happens. Fake news
makes us unsure of what to trust. Once we don't know what's true, that confusion can be used
against us. Real propaganda might come later, when it won’t matter if we believe it, because we
already know who’s in control.

Fighting Back

We've all seen charts showing which media outlets are biased or reliable. But you probably know
what’s coming next. Conservative talk show host Alex Jones’s website, Infowars, responded by
creating its own chart. Just like how there are “fact-checker” websites like Snopes, PolitiFact,
FactCheck, and the Washington Post, some people claim these are biased too. In fact, there are
even accusations of left-leaning fake news.

So, what can we do about it? First, remember that it’s exactly what those spreading lies want—to
make us think everything is equally unreliable. When we say “they’re all the same,” we’re falling
into their trap, making it easier for them to convince us that nothing is true. With that in mind,
here are some concrete steps we can take.

First, we need to understand the problem and how it’s being used. Facebook and Google now
make up 85% of all new online ad revenue in the U.S. These companies are huge. Some people
say they should stop fake news from spreading. After the election, both Facebook and Google
said they would take steps to stop fake news. Google even said they’d ban websites that spread
fake news from using their ads. But there’s a problem: How do you know for sure which
websites are fake and how do you handle complaints from people? Facebook also said they
wouldn’t allow ads from websites with misleading or illegal content. But most fake news on
Facebook comes from posts by friends, not ads, so it’s unclear if Facebook can or wants to stop
it. And Facebook has been burned before: when human editors curated its trending-news section, conservatives complained that right-leaning stories were being suppressed, and Facebook responded by replacing the editors with an algorithm.
Some people suggest that tech companies could create a system to rate and warn about fake
news, like how they remove offensive content. But, even then, people might accuse them of
being biased in what they choose to block.

Are there better ways? Brooke Binkowski, from the fact-checking website Snopes, says "the
answer is not just getting rid of fake news, but flooding it with real news." If more real news is
available, people will keep searching and find reliable, in-depth information. While this makes
sense, it might not change the minds of people who only want stories that support their beliefs.
Then again, flooding the internet is exactly how fake news spread in the first place, so the same tactic could work for real news too. Maybe the solution is to support news organizations that focus on facts and evidence.
We might even consider buying subscriptions to reliable newspapers like The New York Times
or The Washington Post instead of relying on free articles.

Next, we need to promote critical thinking. Hopefully, schools and universities are already
working on this. Daniel J. Levitin’s book Weaponized Lies teaches how to think critically in
today’s world, using skills in logic, statistics, and reasoning.

For younger people who aren't in college yet, but will grow up in a world full of fake news,
there’s a great example from a fifth-grade teacher in Irvine, California. Scott Bedley teaches his
students how to spot fake news by giving them a list of things to look for and then testing them
with examples.

He explains that fake news is news that seems accurate but isn't trustworthy. He gave an
example of the fake story that the pope endorsed a certain presidential candidate. To teach his
students, he made a game where they had to figure out which news stories were fake and which
were real. His students loved the game, and some wouldn’t even go to recess until they got
another chance to play.

What are the tricks he taught? Actually, they’re not tricks at all. A fifth-grader can do it. So what excuse do the rest of us have?

1. Look for copyright.
2. Verify from multiple sources.
3. Assess the credibility of the source (e.g., how long has it been around?).
4. Look for a publication date.
5. Assess the author’s expertise with the subject.
6. Ask: does this match my prior knowledge?
7. Ask: does this seem realistic?

The only problem with Bedley’s system? Now his fifth-graders won’t stop fact-checking him.
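
Purely as an illustration of how mechanical the checklist is, here is a small Python sketch. It is mine, not Bedley’s; the question wording and the score_story helper are hypothetical, and the judgment calls still belong to the human reader, since the code only tallies the answers:

```python
# A hypothetical rendering of Bedley's seven questions as a manual
# checklist. The reader supplies the yes/no answers; the program just
# counts them -- no automated fact-checking is happening here.

CHECKLIST = [
    "Is there a copyright?",
    "Can the story be verified from multiple sources?",
    "Is the source credible (e.g., has it been around a while)?",
    "Is there a publication date?",
    "Does the author have expertise with the subject?",
    "Does the story match my prior knowledge?",
    "Does the story seem realistic?",
]

def score_story(answers):
    """Count the 'yes' answers, one per checklist question."""
    if len(answers) != len(CHECKLIST):
        raise ValueError("need exactly one answer per question")
    return sum(answers)

if __name__ == "__main__":
    # A story that passes only two of seven checks deserves heavy
    # skepticism before being believed -- or shared.
    answers = [False, True, False, True, False, False, False]
    print(f"{score_story(answers)} of {len(CHECKLIST)} checks passed")
```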

Implications for Post-Truth

The issue of fake news is closely connected to the idea of post-truth, and many people think they
are the same thing. But that's not exactly true. It's like saying having nuclear weapons
automatically means the world will end. Just because a weapon exists doesn't mean we have to
use it. How we handle the challenges created by technology is what really matters. Social media
plays a big part in spreading post-truth, but it's just a tool, not the problem itself. It's an old
saying that "a lie travels halfway around the world before the truth can even get dressed," but
that says more about human nature than it does about our ability to do better. Technology can
spread lies, but it can also spread the truth. If we believe in something worth fighting for, let's
stand up for it. If our tools are being misused, let's take control of them again.

DID POSTMODERNISM LEAD TO POST-TRUTH?

So much of left-wing thought is a kind of playing with fire by people who don’t even know that fire is hot.
—George Orwell

Some people think that to fix the post-truth problem, we should look to academics, who have
spent years studying things like evidence, critical thinking, skepticism, and biases. But it’s kind
of embarrassing to admit that one of the main causes of post-truth actually comes from colleges
and universities.

The idea of postmodernism has been around for over a century, and it's been applied to art,
music, literature, and more. However, just because it’s been around for a long time doesn’t mean
it’s easy to explain. Philosopher Michael Lynch says, “pretty much everyone admits that it is
impossible to define postmodernism. This is not surprising, since the word’s popularity is
largely a function of its obscurity.” I’ll try to explain it as simply as possible.

When people talk about postmodernism today, they’re usually referring to a movement that
started in universities in the 1980s, especially after Jean-François Lyotard’s 1979 book The
Postmodern Condition: A Report on Knowledge. Other thinkers like Martin Heidegger, Michel
Foucault, and Jacques Derrida also played big roles in postmodern ideas, but I’ll just focus on a
few key ideas. One of those was Derrida’s idea of “deconstructing” literature. He argued that we
can’t just assume that an author knows exactly what they meant in a text, so we have to break it
down and understand it in the context of political, social, historical, and cultural ideas. This idea
became very popular in humanities departments in universities across North America and
Europe during the 1980s and 1990s because it encouraged scholars to question everything they
knew about classic works of literature.

This idea was quickly taken up by sociologists and others who thought it should apply not only
to literature, but to everything, since almost anything can be seen as a “text.” War, religion,
economics, and even our behaviors could all have meanings that the people involved might not
even understand. The idea that there’s a right or wrong answer to what something means was
now questioned. Even the idea of truth was up for debate because critics realized that when they
interpreted something, they were also bringing their own values, history, and assumptions into
it. This meant that there could be many different answers to any interpretation, not just one.
The postmodern approach is about questioning everything and not taking anything at face
value. There’s no one right answer—only different stories.

Alexis Papazoglou talks about Friedrich Nietzsche’s philosophy, which came before
postmodernism, and explains that when we understand that there’s no such thing as absolute
truth, the only alternative is “perspectivism.” This means there’s no single, objective way to see
the world—only different perspectives on it.

This is the first main idea of postmodernism: there is no objective truth. If this is true, how
should we react when someone says something is true?

Here’s the second idea: Any claim of truth is just a reflection of the person’s political beliefs.
Michel Foucault said that society is shaped by language, but language is full of power struggles.
This means all knowledge claims are actually just ways for the powerful to make others accept
their views. Since there’s no absolute truth, anyone who claims to know something is really just
trying to control us, not teach us. Power lets people decide what’s true, not the other way
around. If there are many perspectives, forcing everyone to accept one is a form of fascism.

Some people might say this explanation doesn’t do postmodernism justice or that postmodern
ideas aren’t connected to post-truth. I believe that studying postmodernism more deeply could
challenge the idea that it supports right-wing beliefs. But I also think that postmodernists have
helped create this situation by making their ideas so complex, then being surprised when others
use them for purposes they wouldn’t agree with.

It's true that right-wing people who use postmodern ideas don't seem interested in the details. If
they need a tool, they’ll use it in whatever way works, even if it's not the right way. In fact, thirty
years ago, conservatives were criticizing postmodernism as a sign of left-wing decay! It's ironic
that the right has gone from attacking postmodernism, like in Lynne Cheney’s Telling the Truth,
to now using some of its ideas. This isn’t to say that postmodernists are entirely to blame for
how their ideas have been misused, but they do have some responsibility for undermining the
idea that facts matter when we look at reality. They didn’t predict how much damage this could
cause.

There are valid questions about truth and objectivity, and philosophy has always been about
those debates. But completely rejecting truth and objectivity goes too far. If postmodernists had
only focused on interpreting literary texts or even understanding the deeper meanings behind
cultural behaviors, that might have been okay. But they didn’t stop there. They eventually went
after natural science too.

The Science Wars

As you might expect, there was a huge clash between scientists like physicists, chemists, and
biologists, who believed they were discovering the truth about reality by testing their ideas with
evidence, and the "social constructivists," who argued that all of reality—including scientific
theories—was created by society, meaning there was no such thing as objective truth. The
"strong programme" of the sociology of science wasn’t exactly the same as what was happening
in literary criticism or cultural studies, but they both shared the idea that truth depends on
perspective and that all knowledge is socially created. The social constructivists wanted to do
the same thing to science that postmodernists did to literature: they wanted to challenge the
idea that there is one correct perspective.

The sociology of science, which gave birth to the idea of the social construction of science, had
an interesting point: if scientists say they’re studying nature, then who’s studying the scientists?
If scientists claim their theories are “true,” shouldn’t we look at how those theories are made in
the first place? This is how the field of "science studies" began. The "strong programme" took
this further. The "weak" version of the idea suggested that failed theories were due to mistakes
or bias in the scientific process, but the strong version claimed that all theories, whether true or
false, were shaped by ideology. If you don’t believe in absolute truth, then it’s unclear why
scientists choose some theories over others. Saying it's because of evidence isn’t enough.

Some people argued that science wasn’t really about discovering the truth, but about scientists
boosting their own status by pretending to be experts. They said that instead of uncovering the
truth about nature, scientists were just pushing their own political views and gaining power.
Others pointed out that scientific language was sexist and showed an exploitative attitude,
saying it was like “prying secrets loose” from mother nature. One person even called Newton’s
Principia Mathematica a “rape manual.”

Then the scientists fought back.

In 1994, Paul Gross, a biologist, and Norman Levitt, a mathematician, wrote a book called Higher
Superstition: The Academic Left and Its Quarrels with Science. It was a strong criticism and a call to
action. They said postmodernism was nonsense and that the people who practiced it didn’t
understand how science really worked. Worse, they argued that these critics were missing the
real point of science: dealing with facts, not values. Gross and Levitt’s argument sometimes lacked the care the subject deserved, but they were fighting a battle. And the next battle was going to be huge.

The Sokal Hoax

Sometimes, the best way to criticize something is by making fun of it. In 1996, physicist Alan
Sokal decided to do just that. He wrote a paper full of postmodern nonsense mixed with some
made-up stuff about quantum mechanics. He called it "Transgressing the Boundaries: Towards a
Transformative Hermeneutics of Quantum Gravity." Sokal didn’t just send it to any journal—he
sent it to Social Text, one of the top postmodern journals. His idea was that if what he read in the
book by Gross and Levitt was true, he could get a fake paper published if it “(a) sounded good
and (b) flattered the editors' views.” And it worked. Social Text didn’t use “peer review,” so no
other scientists looked at the paper to catch the nonsense. They published it in their next issue,
which was ironically all about “The Science Wars.”

Sokal later described his paper as a mix of ideas from Derrida, quantum gravity, Lacan, and
general relativity. He used vague terms like “nonlinearity,” “flux,” and “interconnectedness” to
link everything together. The paper had no real argument, just random quotes from famous
people, wordplay, and claims without any proof. For example, in one part of the paper, Sokal
claimed, “physical ‘reality’ … is at bottom a social and linguistic construct.” He pointed out how
ridiculous this claim was, saying that anyone who thinks the laws of physics are just social ideas
should try jumping out of his 21st-floor apartment window and see what happens.

Although his paper was a joke, Sokal had a serious reason for doing it. He was frustrated not
just with the “playing with ideas” that Gross and Levitt criticized, but with the fact that this
type of thinking was politically harmful. He believed it was giving the left a bad name. For generations, the left had allied itself with science and reason, but now he felt that some in academia were attacking the very idea of evidence-based thought, which hurt efforts to improve society.

He pointed out that thinking about the "social construction of reality" wouldn’t help us find
cures for diseases like AIDS or stop problems like global warming. We can’t fix false ideas in
history, politics, or economics if we don’t believe in truth and falsehood.

Once Sokal’s prank was exposed, it caused a huge stir. The editors of Social Text were accused of
bad judgment, but the damage was done. Many people saw it as proof that postmodernism was
intellectually weak. So, the scientists went back to their research.

However, something unexpected happened. Once an idea is out there, you can’t take it back. Although the hoax embarrassed postmodernists, it also brought a lot of attention to their ideas, and some people who hadn’t encountered them before became interested—especially people on the right.

Right-Wing Postmodernists

The whole "science wars" situation raised an important question: can postmodernism be used
by anyone who wants to attack science? Does it only help liberals (since most people in literary
criticism and cultural studies are left-wing), or can it be used by others too? Some argue that
right-wing people, who had issues with certain scientific ideas (like evolution), started using
postmodern techniques to challenge the idea that scientific theories are the best way to
understand the world. This leads to another question: can we now talk about "right-wing
postmodernism" that uses doubts about truth and objectivity to say that all truth claims are
political? It would be ironic if ideas from the left were used by the right to attack science and
evidence-based thinking. But if this is true, it would explain part of what’s causing post-truth
thinking.

In 2011, Judith Warner argued that postmodernism helped right-wing people deny science. In
her article “Fact-Free Science,” she said that "questioning accepted fact, revealing the myths
and politics behind established certainties, is a tactic straight out of the left-wing playbook."
But now, questioning things like the science behind global warming is common for Republicans
trying to appeal to their conservative base. She claimed that "attacking science became a sport
of the radical right." Warner even pointed out some quotes from postmodernists themselves
who seemed worried that their ideas were being used by conservatives.

However, science writer Chris Mooney disagreed with Warner’s point. He thought her argument
was "so wrong that one barely knows how to begin." First, he said it didn’t make sense that
conservatives would be influenced by the complicated arguments from left-wing academics. He
reminded people that since the 1970s, conservatives had created their own think tanks—like
ones that now deny climate change—so they could have their own sources of "expertise" outside
of academia. To them, 1990s postmodernism would look like useless academic nonsense. But
Mooney’s biggest issue with Warner’s argument was that climate change deniers don’t act or
sound postmodern in any meaningful way.

Mooney then speculates—without much proof—that most science deniers actually believe in
truth, and he tries to make fun of them:

The idea that science represents "truth" is something climate change deniers agree with. They
believe they are right and that the scientific consensus about global warming is
wrong—objectively. They aren't questioning whether science is the best way to find the truth;
instead, they’re acting like their own scientists know the truth. Can you imagine someone like
US Senator James Inhofe quoting Derrida or Foucault? The idea is just laughable.

When I read this, I feel like it’s an outdated opinion. Things have changed since 2011, and I
think there’s still evidence that Warner was right back then, and Mooney just missed it.

As we saw earlier in chapter 2, the idea that Trump’s supporters or people around him would
need to read postmodernist literature to be influenced by it doesn’t make sense when we look at
how doubt is “manufactured.” Mooney is right that a lot of the work is done in ideological think
tanks. By the time it reaches government officials or lobbyists, it’s just a bunch of talking points.
But it’s also important to remember that the strategies used in one fight about science are often
reused in the next. For example, the “tobacco strategy” was used long after the cigarette and
cancer debate, even in fights over things like acid rain and the ozone hole. And we should also
remember the order of events. Before the climate change debate, the battle was over evolution.

There’s no doubt that postmodernist ideas played a big role in the evolution of this debate, as
Creationism turned into “Intelligent Design” (ID) and started pushing to “teach the
controversy” about ID theory versus evolution in public school biology classes. How do we know
this? Because Phillip Johnson, one of the founders of ID theory and a creator of one of the think
tanks Mooney mentions, said so.

In a groundbreaking article, philosopher Robert Pennock argues that “the deep threads of
post-modernism … run through the ID Creationist movement’s arguments,” which can be seen
in the writings and interviews of its key leaders. He even goes as far as saying that “Intelligent
Design Creationism is the bastard child of Christian fundamentalism and postmodernism.” He
makes this point by looking at the statements of Phillip Johnson, “the godfather of the ID
movement.”

Pennock tells an interesting story about the founding of the Discovery Institute in Seattle,
Washington, and its support from “deep-pocket right-wing political backers.” He says that even
today, “the Discovery Institute is still flogging the postmodern horse.” Where did this horse come from? Pennock traces it mostly to Johnson’s influence. It’s not hard to see
postmodernism in Johnson’s work; he openly embraces it. By studying Johnson’s published
writings and interviews, Pennock has found statements that can’t be ignored:

Johnson explains that the problem for Christians is that the debate over evolution has always
been framed as a battle between the Bible and science. People then see it as defending the Bible
against science. The issue with this is that people see science as objective truth, so it looks like
they’re arguing faith against facts. Johnson says his plan is to “deconstruct those philosophical
barriers” and that he is “relativizing the philosophical system.” He even admits that he’s a
“postmodernist and deconstructionist” just like others, but with a different target.

In another interview, Johnson openly says he’s using the “strong programme” of the sociology of
scientific knowledge, which Pennock points out is “not the same as, but does have close
conceptual affinities to postmodernism.” Johnson admits that he has read this literature and
wants to use it to defend ID theory against the “objective” claims of evolutionary science. He
says, “the curious thing is that the sociology-of-knowledge approach has not yet been applied to
Darwinism. That is basically what I do in my manuscript.”

In his article, Pennock points out many times when Johnson shows his plan to use
postmodernism to challenge the authority of evolution by natural selection and defend
Intelligent Design (ID) theory as an alternative. Pennock explains Johnson's strategy:

Don't believe that science is connected to reality. Evolution is just a made-up story, even though
it’s told by scientists. According to postmodernism, science isn’t any more valid than other ways
of understanding the world. Each group can start with its own beliefs. ID creationists are just as
justified in starting with the idea of God's creation and will for humans.

It’s clear that postmodernism influenced ID theory. It’s also clear that ID theory became a model
for how climate change deniers would later approach their own battles: attack the existing
science, fund your own experts, make the issue seem “controversial,” promote your side through
media and lobbying, and watch the public react. Even if right-wing politicians and science
deniers weren’t reading Derrida and Foucault, the basic idea reached them: science doesn’t have
the truth all figured out. So, it’s not unreasonable to think that right-wingers are using some of
the same postmodernist arguments and methods to attack other scientific claims that don’t fit
their conservative views.

Is there any proof of this? We can look at some postmodernists who’ve admitted they’re worried
about how their ideas have been used for right-wing purposes. Bruno Latour, one of the
founders of social constructivism, wrote in 2004 that he got concerned when he saw an editorial
in the New York Times that said:

Most scientists agree that global warming is caused mainly by human-made pollutants, which
need strict regulation. Mr. Luntz, a Republican strategist, seems to admit this when he says,
“The scientific debate is closing against us.” However, he advises emphasizing that the evidence
isn’t fully complete. He says, “If the public believes the scientific issues are settled, their views
on global warming will change. So, you need to keep saying that the scientific certainty isn’t
there.”

Latour’s reaction is like that of an arms dealer finding out one of his weapons was used to harm
someone innocent:

Do you see why I’m worried? I’ve spent some time showing that “the lack of scientific certainty”
is part of how facts are made. I made it a “primary issue” too. But I didn’t mean to trick the
public by hiding the certainty of a closed argument—or did I? I hope I didn’t. I wanted to free
the public from taking facts as obvious truths too quickly. Was I wrong? Have things changed so
fast? Even worse, the “weapons factory” is still open. There are entire Ph.D. programs making
sure American students learn that facts are made up, that there’s no such thing as unbiased
truth, that we’re always shaped by language, and that we always speak from a specific point of
view. And now, dangerous extremists are using these same ideas to destroy important evidence
that could save lives. Was I wrong to help create this field called science studies? Is it enough to
say we didn’t mean what we said? Why does it feel wrong to say that global warming is a fact, no
matter what? Why can’t I just say the argument is closed for good?

You won’t find a stronger expression of regret in academia than this. And Latour isn’t the only
postmodernist who’s noticed how their ideas were used by those denying science. Michael Bérubé, a humanist and literary critic, wrote this in 2011:

Now, the people who deny climate change and believe in young-Earth creationism are attacking
natural scientists, just like I predicted—and they’re using some of the very arguments created by
an academic left that thought it was only talking to people who agreed with it. Some common
leftist ideas, along with the left's distrust of “experts” and “professionals” who think they know
better than us, have been used by the right as a powerful tool to discredit scientific research.

He’s so ashamed that by the end of his article, Bérubé seems willing to make a deal:

I’ll admit that you were right about how science studies could go terribly wrong and help
ignorant or reactionary people. And in return, you’ll admit that I was right about the culture
wars and that the natural sciences wouldn’t be safe from the right-wing noise machine. And if
you’ll go a step further and acknowledge that some well-informed critiques of real science are
valid (like the criticism that medicalizing pregnancy and childbirth after WWII had some
negative effects), I’ll agree that many critiques of science and reason by humanists are not
well-informed. Maybe then we can focus on how to develop safe, sustainable energy and other
practices to keep the planet habitable.

This self-reflection from the left is completely ignored by those who are scared that post-truth
will now be blamed on postmodernism, but the link from science denial to full-on denial of
reality itself seems clear. So what would applying postmodernism to post-truth politics look
like? It looks a lot like the world we live in now:

If there are no real facts, only different interpretations, and millions of Americans are ready to
accept your view without questioning it, then why stick to the idea that there’s a clear line
between fact and fiction? If you see a period of cold weather as proof that climate change isn’t
happening, and millions of others agree with you, then climate change must be a hoax. If your
personal experience tells you there were record crowds at the inauguration, then there
were—any aerial photos that show otherwise are just another perspective.

You can almost imagine Kellyanne Conway defending Sean Spicer’s use of “alternative facts.”
This completely misses the original point of postmodernism, which was to protect the poor and
vulnerable from being taken advantage of by those in power. Now, it's the poor and vulnerable
who will suffer the most from climate change. Sokal's prediction is almost coming true—how
can the left fight against right-wing ideas without using facts? This is the price of treating ideas
like they don’t matter. It’s easy to play around with the truth in academia, but what happens
when these ideas fall into the hands of science deniers, conspiracy theorists, or politicians who
believe their feelings are better than real evidence?

So, what’s the truth? Does the left believe in truth or not? Some people might be in an
uncomfortable spot, feeling like they have to either help the enemy or defend the idea that truth
really exists. The question remains: How can we be sure that postmodernism has led to
post-truth, where reality itself is questioned? Since Trump became president, this question has
gained more attention. Some mainstream articles now take the question seriously, but some still
think it’s just a guess unless we can prove that Kellyanne Conway is reading Derrida. Some even
argue that post-truth has been around longer than we think, and postmodernism just gives us
the language to talk about it, though it isn’t the direct cause.

But there’s one philosopher who’s willing to make a clear connection. In a 2017 interview with
The Guardian, Daniel Dennett places the blame for post-truth on postmodernism:

"Philosophy hasn’t done well in dealing with questions of truth and facts. Maybe people will
start realizing that philosophers aren’t harmless after all. Sometimes, ideas can have scary
consequences that really happen. I think what the postmodernists did was truly bad. They’re
responsible for the trend that made it okay to be cynical about truth and facts. You’d have people
saying, 'Well, you’re one of those people who still believes in facts.'"

Is there stronger evidence for this? Something like what Robert Pennock did to show postmodernism’s role in ID theory? Actually, yes, there is.

Trolling for Trump

To understand the rise of post-truth (or Trump), we need to look at the role of alternative media.
Without websites like Breitbart, Infowars, and other alt-right media, Trump probably wouldn’t
have been able to get his message out to the people most likely to believe it. The key point here,
as we saw in chapter 5, is that news is now all over the place. People aren’t just getting “the
truth” from one or two sources anymore. In fact, they’re not even getting it just from “the
media.” A lot of Trump’s support during the election came from alt-right bloggers. One of the
biggest was Mike Cernovich.

Mike Cernovich is a pro-Trump blogger, who calls himself an “American nationalist,” and loves
conspiracy theories. He has 250,000 followers on Twitter. But he’s not just a regular blogger.
He’s been written about in The New Yorker and the Washington Post, and even interviewed by CBS
news anchor Scott Pelley because of how much influence he had in the 2016 presidential
election. Some people dismiss him as just another creator of “fake news.” He’s the person behind
the #HillarysHealth tweets, claiming Hillary Clinton was dying. Do you remember the
#pizzagate story that said Bill and Hillary Clinton were running a child sex ring out of a pizza
restaurant in DC, which even led to someone being shot? Cernovich was one of the people
pushing that story. He’s also accused the Clinton campaign of being part of a satanic sex cult. In
his interview with The New Yorker, Cernovich shared some of his other controversial ideas, like
saying date rape doesn’t exist and that his first marriage was ruined by “feminist ideas.”

Cernovich caught the attention of the Trump administration. In April 2017, Donald Trump Jr.
tweeted that Cernovich should “win the Pulitzer” for exposing the story about Susan Rice
supposedly unmasking intelligence reports about Trump campaign officials. When Kellyanne
Conway heard Cernovich was going to be interviewed by Scott Pelley, she told her followers to
watch it or read the whole interview and even linked them to Cernovich’s website. One of
Cernovich’s critics said, “I think Conway and Trump Jr. trying to lift up Cernovich shows a lot
about Trump’s White House and how they will turn to conspiracy theorists if it helps distract
from things that hurt them.”

Cernovich clearly has a lot of influence. But what does this have to do with postmodernism? In
The New Yorker article, there's an interesting point he makes:

"Let’s say, for the sake of argument, that Walter Cronkite lied about everything. Before Twitter,
how would you have known? Look, I read postmodernist theory in college. If everything is a
narrative, then we need alternatives to the dominant narrative. I don’t look like a guy who reads
Lacan, do I?"

Cernovich might not seem like someone who reads critical theory, but he’s actually pretty well educated. He has a law degree from Pepperdine, and it seems he paid attention in
college. He’s making a familiar point: If there’s no single truth, and everything is just a matter of
perspective, how can we really know anything? Why not question the mainstream news or
believe in conspiracy theories? After all, if news is just a form of political expression, why not
create your own version of it? Who gets to decide what facts are important or whose perspective
is correct? This is how postmodernism leads to the idea of post-truth.

FIGHTING POST-TRUTH

We have now sunk to a depth at which restatement of the obvious is the first duty of intelligent men.
—George Orwell

On April 3, 2017, Time magazine released a cover story that asked, “Is Truth Dead?” This cover
art was striking and reminded people of another Time cover from the 1960s, which asked the
same question about God. In 1966, America was going through a tough time: President Kennedy
had been assassinated, the Vietnam War was escalating, crime was rising, and people were
losing trust in their government. It was a time for Americans to reflect on where the country
was headed. The new Time cover asking about truth came during the Trump presidency.

In the article, Time editor Nancy Gibbs asks big questions about our belief in truth, especially
when a president treats it carelessly. She writes that for Donald Trump, shamelessness isn’t just
a trait; it’s a strategy. Trump has said many things that were proven false, like exaggerating his
crowd size, claiming voter fraud, or saying he was wiretapped. But Gibbs points out a deeper
issue: What does Trump actually believe? Is it still lying if he truly believes what he says? Where
do we draw the line between lies, exaggeration, and delusion? Or, as Trump’s advisor Kellyanne
Conway put it, between facts and “alternative facts,” where the conclusions he wants people to
reach are different from the actual evidence?

The article notes that 70% of Trump’s campaign statements were considered false, and yet, he
still won the election. This makes us wonder if the threat to truth is bigger than any one person.
So, the question on Time’s cover isn’t just a dramatic one—it’s a real concern: Is truth dead?

In this book, we’ve explored the causes of post-truth, because it’s hard to fix a problem without
understanding where it came from. Now, we need to ask: Can anything be done about it? In
2008, Farhad Manjoo published a book called True Enough: Learning to Live in a Post-Fact Society.
Manjoo saw this problem coming before smartphones were everywhere and before Barack Obama became president. One of the examples Manjoo discusses is the “Swift Boat Veterans for Truth”
campaign, which was designed to attack John Kerry during the 2004 election. This was one of
the first big examples of using bias and fake stories in the media. With hindsight, it’s clear how
these ideas continued into 2016, where media fragmentation, bias, and the decline of objectivity
made it harder to know what was true or even what truth meant.

What does Manjoo suggest we do to fight post-truth? Unfortunately, not much. In his chapter
titled “Living in a World without Trust,” he doesn’t give much practical advice, except to
“choose wisely” what we believe. Maybe it’s too much to expect him to provide solutions since
he saw the problem coming. But now that we’re living in it, we need to figure out how to deal
with it. So, can we really live in a world without facts, like Manjoo asks?

For me, I don’t want to just learn to live with it. The goal is not to accept a world where facts
don’t matter, but to stand up for the truth and learn how to fight back. Here’s one lesson we
should all learn: always fight against lies. John Kerry learned this the hard way during the "Swift
Boat Veterans for Truth" campaign. Right-wing veterans spread lies to destroy Kerry’s
reputation. Even when one of the veterans, George Elliot, admitted he lied, it was too late. By
then, money was pouring in from supporters, and the lie had already spread. Kerry made the
mistake of not responding for two weeks, and the lies kept hitting him on TV. He went on to lose a close election that turned on Ohio. He didn’t realize we were entering a world of post-truth.

The lesson here is clear: don’t let lies go unchallenged. Don’t assume that people won’t believe
outrageous things. Lies are told because someone thinks there’s a chance people will buy into
them. In today’s world, with fake news and political games, we can’t rely on people to just use
common sense. The goal of challenging a lie isn’t to change the liar’s mind—they’re probably
too far gone. But we can still help others who might be confused. If we don’t speak up, those
who are still undecided could be tricked into believing lies. Calling out lies is crucial, and in the
world of post-truth, we must stand up to falsehoods before they spread further.

Even though the other side might be loud, having the facts on our side is powerful. The truth
can only be ignored for so long. For example, the media stopped giving equal time to both sides
of the vaccine debate after a measles outbreak in 2015. The truth about the fraud behind the
original vaccine-autism claim became clear. Suddenly, there were no more debates between
experts and skeptics on TV. False balance didn’t seem so great once people started getting hurt.
Can this happen with other topics, like climate change? In some ways, it already has. For
example, in July 2014, the BBC decided to stop giving equal time to climate change deniers. The
Huffington Post did the same thing in April 2012. Its founder, Arianna Huffington, explained:

"In all our stories, especially those on controversial topics, we try to include the strongest
arguments from all sides, aiming for both clarity and depth. Our goal is not to please the people
we report on or create stories that seem balanced, but to find the truth. If the evidence strongly
supports one side, we make sure to reflect that in our reporting. We want to give our audience
confidence that all sides have been considered fairly."

But will this actually help? If we’re really in a post-truth world, it’s unclear if anything the media
does will make a difference. If our opinions about something like climate change are already
shaped by our biases and politics, how can we change our view? Why wouldn’t we just switch to
something else? Even if we hear the truth, won’t we just reject it?

Actually, no. Not always. Even though things like confirmation bias and other influences we’ve
talked about are strong, research shows that repeating true facts can eventually have an impact.
For example, in a study by David Redlawsk and others, they asked the question, “Do people with
strong biases ever get it?” They know that people who are biased often reject information that
goes against what they believe, which can even make them hold onto false ideas more strongly.
But can this go on forever? Redlawsk and his team found that there’s a point where people do
start to change their views when they keep hearing information that goes against their beliefs.

They found that there is an “affective tipping point,” meaning that people aren’t totally immune
to information that challenges their beliefs. Another study by James Kuklinski and his team
showed that even though people can be really stubborn about their false beliefs, it’s still possible
to change their minds if you keep showing them the correct facts over and over. It’s not easy, but
it’s possible.
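
Redlawsk’s experiments were survey-based, and nothing in them is Bayesian, but a toy Bayesian update (my own illustrative analogy, with made-up numbers) shows why a tipping point is exactly what you’d expect: each correction barely moves a heavily biased prior at first, yet the effect compounds until belief flips. A minimal Python sketch:

```python
# Toy illustration (not Redlawsk's method): repeated Bayesian updates show
# why corrective evidence can eventually flip even a strongly biased prior.

def update(prior, p_evidence_if_true=0.6, p_evidence_if_false=0.4):
    """One round of Bayes' rule after seeing evidence favoring the claim."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

belief = 0.02  # heavily biased prior: only 2% credence in the true claim
for round_num in range(1, 13):
    belief = update(belief)
    print(f"after correction {round_num:2d}: credence = {belief:.2f}")
# Early corrections barely move the needle; around the tenth repetition the
# credence crosses 0.5 -- a crude analogue of an "affective tipping point."
```

The analogy is loose, of course: real people aren’t Bayesian calculators, and motivated reasoning can make early corrections backfire. The point is only that persistence is not obviously futile.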

This makes sense, right? We’ve all heard stories about people who “won the Darwin Award” by
ignoring reality until it led to their downfall. It doesn’t seem possible that evolution would let us
resist the truth forever. Eventually, when it becomes important to us, we can choose to give up
our beliefs instead of rejecting the facts. There’s strong evidence that this can happen, not just
in experiments, but in real life too.

Take the city of Coral Gables, Florida, for example. It’s only nine feet above sea level, and
scientists predict it will be underwater in a few decades. After new mayor James Cason, a
Republican, was elected, he heard a lecture about climate change and how it would affect South
Florida. He was shocked. “I’d read some articles here and there, but I didn’t realize how much it
would impact the city I’m now leading.” Since then, he’s tried to raise awareness, but it’s been
tough.

Some people say, “I don’t believe it.” Some say, “Tell me what I can do about it, and I’ll care.”
Others say, “I have other things to worry about right now; I’ll deal with it later.” And some just
say, “I’ll leave that to my grandkids to figure out.”

Cason is starting to look into the legal responsibility for climate change. He’s still raising
awareness, hoping that other Republicans on a national level will take global warming seriously
before it’s too late. Right before one of the 2016 Republican debates, Cason wrote an opinion
piece in the Miami Herald with his Republican colleague, Mayor Tomás Regalado of Miami. They
wrote:

“As strong Republicans, we understand our party’s concern about government overreach and
unnecessary regulations. But for us and most other leaders in South Florida, climate change isn’t
a political issue. It’s a serious crisis we need to address—and quickly.”

If the word “schadenfreude” didn’t already exist, progressives might have had to create it right
now… except for the fact that we’re all in this together—and we can’t afford to feel too smug.
Even if people deny the facts, the truth has a way of showing up. When the water starts to rise
around their $5 million homes or their businesses are affected, they will have to listen. But does
this mean we should just wait for that to happen? No. We can still support critical thinking,
investigative reporting, and call out liars. Even before the water rises, we should figure out how
to "hit people between the eyes" with the truth.

However, this strategy needs to be used carefully. Research shows that when people feel
threatened or insecure, they are less likely to listen. In a study by Brendan Nyhan and Jason
Reifler, people were given a self-affirmation exercise, then shown new information. The idea
was that people who felt better about themselves might be more open to accepting new
information that corrected their wrong beliefs. While this worked a little for some topics, it
wasn’t consistent. Another finding from the study was stronger: information presented in
graphs was more convincing than stories. So, what should we take from this? It’s probably not
helpful to shout at someone who is misinformed, but it’s probably more effective to show them a
graph instead.

It's hard to keep facts out of politics, especially when we think the "other side" is being
unreasonable or stubborn. It can help to remember that we have the same tendencies. One
important way to fight post-truth is by fighting it within ourselves. Whether we're liberals or
conservatives, we all have cognitive biases that can lead to post-truth thinking. We shouldn’t
think that post-truth is only caused by other people, or that it's someone else’s problem. It's easy
to point out when others don’t see the truth, but how many of us are willing to question our own
beliefs? How many of us are willing to doubt something we want to believe, even when a small
voice inside us says we might not have all the facts?

One big problem with critical thinking is getting stuck in a constant loop of confirmation bias.
If you mostly get your information from just one source or feel strongly about what you're
hearing from one place, it might be time to mix it up. Think about the people in the “2, 4, 6”
experiment who never tried to prove their beliefs wrong. We shouldn’t make that mistake. This
doesn’t mean we should start listening to fake news or treat Fox and CNN as equally
trustworthy. But it does mean we need to be smart about where we get our news from and ask
ourselves why we think something is fake. Is it just because it makes us angry, or do we have
good reasons? Especially when we hear things we want to believe, we need to be more skeptical.
This is a lesson science teaches us.
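
If you want to feel the “2, 4, 6” result rather than just read about it, here is a minimal, hypothetical Python sketch of the task. The hidden rule and the two testing strategies are my own illustrative choices, not a reconstruction of Wason’s original protocol.

```python
# Toy version of Wason's "2, 4, 6" task (illustrative assumptions only).
# The hidden rule is "any strictly increasing triple" -- much broader than
# the pet hypothesis "counts up by 2" that the numbers 2, 4, 6 suggest.

def hidden_rule(triple):
    a, b, c = triple
    return a < b < c

def pet_hypothesis(triple):
    a, b, c = triple
    return b == a + 2 and c == b + 2

# Strategy 1: confirmation -- only test triples the hypothesis says will pass.
confirming_tests = [(2, 4, 6), (10, 12, 14), (1, 3, 5)]

# Strategy 2: disconfirmation -- test triples the hypothesis says will FAIL.
disconfirming_tests = [(1, 2, 3), (2, 4, 100), (6, 4, 2)]

print("Confirming tests (every answer is 'yes', so the wrong guess survives):")
for t in confirming_tests:
    print(f"  {t}: rule says {hidden_rule(t)}, hypothesis says {pet_hypothesis(t)}")

print("Disconfirming tests (unexpected 'yes' answers falsify the guess):")
for t in disconfirming_tests:
    print(f"  {t}: rule says {hidden_rule(t)}, hypothesis says {pet_hypothesis(t)}")
```

The asymmetry is the whole lesson: the confirmer’s tests can never surprise her, while the disconfirmer learns something from the very first “yes” she didn’t expect.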

There’s no such thing as "liberal science" or "conservative science." When we’re asking a factual
question, the most important thing should be the evidence. As Senator Daniel Patrick Moynihan
said a long time ago (about something else): “You are entitled to your own opinion, but not your
own facts.” The power of science is that it always checks its beliefs against the facts and changes
those beliefs when new evidence comes in. Can we try to bring that same attitude to other facts
in our lives? If not, we might face a bigger problem than post-truth.

Are We Entering the Pre-Truth Era?

In a recent article in the Washington Post, Ruth Marcus was especially upset by Trump’s
interview with Time magazine. In the interview, Trump said a lot of things that made fact
checkers angry. He got a lot of “Pinocchios” from the Washington Post and criticism from the
New York Times and other news sources for saying things that weren’t true (or were outright
lies). But Marcus was more concerned about something beyond Trump’s dishonesty.

In the interview, Trump said “I’m a very instinctual person, but my instinct turns out to be
right.” What he seemed to mean was that even if some of the things he said didn’t match the
facts, they were still true. He didn’t mean that the evidence was hidden or that only he knew it.
He acted like just believing something made it true. It wasn’t just that he thought he could
predict things correctly; he seemed to believe he could actually change reality. As Marcus said:
“If an assertion isn’t true, no worry. President Trump will find a way to make it so, or at least
claim it is.”

For example, at a rally on February 11, 2017, Trump talked about “what’s happened last night in
Sweden.” The people in Sweden were confused because, to them, nothing had happened. It
turned out Trump was referring to a story he saw on Fox News about immigrants in Sweden, but
nothing had actually happened. Then, two days later, after Trump had talked about it, riots
broke out in an immigrant neighborhood in Stockholm. In his interview with Time, Trump
claimed he was right:

“Sweden. I make the statement, everyone goes crazy. The next day they have a massive riot, and
death, and problems... A day later they had a horrible, horrible riot in Sweden, and you saw what
happened.”

Did that mean Trump was "right"? Of course not. The riot wasn’t the night before, it wasn’t
massive, and no one died. But in Trump’s mind, it proved he was right.

Here’s another example: On March 4, 2017, Trump tweeted that President Obama had “wires
tapped” at Trump Tower during the presidential campaign. He was probably reacting to a story
he saw on Fox News, but he couldn’t provide any proof. After checking with the FBI, NSA, and
other reliable sources, no evidence of this claim was found. Then, on March 24, Rep. Devin
Nunes (a Republican who was the chairman of the House Intelligence Committee) held a press
conference. He said he had some serious new facts about Trump’s surveillance that he had
learned from a confidential source. It turned out that those “facts” had actually come from two
of Trump’s aides the night before. Later, it was discovered that Trump’s aides had been caught
up in a regular intelligence-gathering operation involving Russian officials. (We still don’t know
what Trump’s aides were talking about with those Russian officials.) But Trump saw this as
proof that he was right. He said, “So that means I’m right,” and he felt “vindicated.” Even though
he couldn’t have known about it at the time, and it’s unclear whether this kind of surveillance on
his aides even counts as “wires being tapped,” especially by Obama, Trump took credit for it.

So, what’s going on here?

According to Marcus, “it is not simply that Trump refuses to accept reality, it is that he bends it
to his will.” Another article from the Guardian puts it this way:

In Trump’s language, truth isn’t based on facts. Truthful statements don’t always accurately
describe what really happened. Instead, they give a rough or exaggerated idea of what might
have happened. Whether a terror attack in Sweden happened the night Trump said doesn’t
matter. It doesn’t matter that the riot wasn’t massive and there were no deaths. “Close” or
“maybe” is good enough. In Trump’s way of speaking, if people believe what he says, that makes
it true. If people disagree with him, that also makes it true. Lastly, Trump’s way of speaking isn’t
about truth—it’s about results. If a statement helps him get what he wants, it’s valuable and true.
If it doesn’t help him, it’s worthless and false.

One might wonder if this is post-truth or something else. Is it just a case where “objective facts
are less influential in shaping [belief] than appeals to emotion?” Or is it more like a delusion?
When Marcus talks about “pre-truth,” she seems to mean that Trump not only believes he can
predict what will happen but that his belief can actually make it happen. This isn’t based on any
proof he can show others; instead, it’s just a feeling that he can sense or even control the
future—or the past. Psychologists call this “magical thinking.”

Is this something we should worry about, or is it just normal for someone who judges things
based on whether they flatter him? Trump has tweeted many times that “any negative polls are
fake news.” And that’s true in his mind. But people worry about this because it might mean
either a strong effort to trick people into rejecting reality, or a complete break from reality itself.

I don’t think I can predict the future. But when we disconnect from the truth, we also
disconnect from reality. Just like how the water will keep rising in Coral Gables,
Florida—whether its people believe it or not—so will the problems caused by post-truth catch
up to all of us unless we fight back. We might be able to fool others (or ourselves) for a while and
get away with it, but eventually, we’ll pay the price for thinking we can create our own reality.

On January 28, 1986, the space shuttle Challenger exploded just 73 seconds after launching from
Cape Canaveral, Florida, killing all seven astronauts on board. The science behind the shuttle
had been thorough, and it wasn’t its first mission. After the disaster, President Reagan set up a
special commission made up of top scientists and astronauts to figure out what went wrong. The
engineering of the shuttle was fine, but it turned out there had been concerns about the rubber
O-rings not being able to handle cold temperatures, which would cause them to fail. The shuttle
wasn’t supposed to be launched in freezing temperatures, but January 28 was unusually cold for
Florida. So why was it launched? It was an administrative decision, even though some NASA
engineers disagreed.

The issue with the O-rings was clearly shown by Nobel Prize-winning physicist Richard
Feynman, who was on the commission. He dropped one of the O-rings into a glass of ice water
during a public hearing. The facts were clear. No amount of lies or spin could change them.
After the shuttle disaster, no one cared about the intuition of the NASA officials who thought
they could control reality. Feynman later said, “For a successful technology, reality must take
precedence over public relations, for nature cannot be fooled.”

Whether we call it post-truth or pre-truth, ignoring reality is dangerous. That’s the problem
with post-truth—it’s not just about letting our opinions and feelings shape our understanding of
facts, but about risking disconnecting from reality itself. But there’s another way.

We don’t have to be stuck in post-truth or pre-truth unless we allow ourselves to be. Post-truth
isn’t about reality; it’s about how people react to reality. Once we’re aware of our biases, we’re in
a better position to fight against them. If we want better news, we can support good sources. If
someone lies to us, we can choose not to believe them and challenge the lies. It’s up to us how
we respond to a world where some people try to deceive us. Truth still matters, as it always has.
Whether we understand this in time is up to us.
