Houdini – the OG debunking influencer?

Author: Maya Lopez (Blog Chief Editor)

Dealing with pseudoscience propaganda and facing the limitations of “debunking” approaches is everyday business for us here at CUSAP, but I started to wonder: when did “debunking” become the thing we know today? As many of us share the sentiment, this day and age feels like a misinformation paradise, with so much internet content built on suspicious scientific evidence spreading virally. On the other hand, there is plenty of content that attempts to fact-check or “debunk” it: pointing out the errors in the information and exploring the actual science relevant to the topic. This content is often made by experts themselves, from cosmetic chemists addressing suspicious beauty science to notable science communicators addressing exaggerated negativity towards science, or even discussing the “too ambitious attempt” to reinvent basic math. And this content does pretty well, to the point where I’m sure it can be recognized as a whole genre.

According to the Wikipedia page on “debunker” (yes, that’s a thing), one of the most ancient examples goes back to Cicero, who “debunked” divination in a philosophical treatise published in 44 BC. Today, however, I want to share the story of one name that keeps appearing in the modern section and gets referenced by multiple other notable debunkers: Harry Houdini. A pop culture icon recognized to this day through references ranging from Kate Bush’s pop tune to the J-drama Trick, Erik Weisz was a magician and escape artist who is arguably the first celebrity debunker to combat pseudoscience.

Houdini the medium buster and his influencer-style feuds and drama:

In the 1920s, when he started to go full ham against spiritualists (spiritualism being a popular movement at the time, understandably so in a period of grief for the many people who had just lived through WWI), he had already been a well-known magician for nearly two decades. To be fair, stage magicians had apparently been doing this kind of mystic debunking since the late 19th century, but his celebrity status, his relentless pursuit, and his “feuds” with notable spiritualists and famous figures arguably brought the spotlight to debunking at an unprecedented level.

He was a magician by trade, which allowed him to easily identify the tricks that fraudulent mediums employed to fool even scientists. Such abilities shone when he served on the Scientific American committee which, from 1922, held an international competition to find scientific proof of ghosts, offering $5,000 to any psychic or medium (yes, really) who could stand up to the tests of its scientists. It was a legitimate attempt to pit “modern science” against the wildly popular psychic mediums. While many mediums shied away from a public test, some took the challenge. Houdini, however, exposed these cases as fraud.

Notable psychics (and their tricks) vs Houdini:

  • George Valiantine of Wilkes-Barre, Pennsylvania – Assertion: predicted that aliens would visit Earth in the 1920s; claimed he could communicate with spirits through a séance. Trick: he physically left his seat in the dark and touched the séance attendees as the “touch of the spirit”.
  • Mina Crandon, aka “Margery” – Assertion: communicating with the dead via séance sessions and materializing “teleplasmic hands”; she also claimed to produce “ectoplasm”. Trick: Houdini debunked her in two of her séances, each time trying to convince the rest of the committee that she had been physically moving things to “show the sign of ghosts” (bell ringing, etc.). She is one of the most notable figures he “combatted”, so definitely look her up if you want the whole fiasco.
  • Joaquín Argamasilla, aka the “Spaniard with X-ray Eyes” – Assertion: clairvoyance, which supposedly let him “read” numbers and dice sealed inside a box. Trick: Houdini revealed that he was peeking past the blindfold and through a small lift of the box lid.
  • Nino Pecoraro – Assertion: claimed a deceased medium guided him and that he could make instruments play without touching them. Trick: Houdini and others suspected him of slipping his rope ties and manipulating the instruments himself; this was prevented by an impromptu tightening of the ropes, which led to a later confession.

As he became a renowned medium-buster, Houdini pursued his cause further by attending séances undercover with a reporter and a police officer, exposing what he deemed nefarious activities that were “defrauding the bereaved.” While his string of successful debunkings of notable psychics of the time can be seen as a feud in its own right, perhaps the most notable feud was with a celebrity author whom you might not suspect of being a strong patron of “unscientific spiritualism”, given that his most famous work is the Sherlock Holmes series…

Sir Arthur Conan Doyle thinks Houdini is supernatural, while H. P. Lovecraft co-authors a debunking essay:

In fact, many of the notable mediums tested by the Scientific American committee were brought in at the suggestion of none other than Sir Arthur Conan Doyle himself. While he is known for his medical degree and beloved for creating the epitome of the rational-detective archetype, Sherlock Holmes, Doyle himself was deeply spiritual, with beliefs ranging from communicating with the dead through séances to the existence of fairies.

Houdini and Doyle were initially friends, but Houdini’s persistent stance against spiritualism eventually pulled them apart. Doyle even invited Houdini to a sitting where his wife, a self-proclaimed medium, performed “automatic writing” to communicate with Houdini’s dead mother, only for Houdini to sit through hours of the session and then point out that fifteen pages of English in perfect grammar could not be hers, as she was not even fluent in English. Furthermore, his debunking of Margery, who unlike other publicity-seeking mediums had Doyle’s complete trust, planted a new suspicion in Doyle: what if Houdini was himself the most powerful medium? Doyle started to believe that all of Houdini’s magic acts were, well… in fact magical, and that any medium performing in front of him failed because he was blocking their powers with his own supernatural abilities (should I write a piece on Arthur, the OG conspiracist?). Houdini hoped Doyle would realize that this was not the case, and one day invited Doyle to his home, where he performed “slate writing”, a method mediums often used to supposedly convey messages from the dead through unguided writing. He essentially performed a magic trick, hoping to disillusion his friend with the following message:

Sir Arthur, I have devoted a lot of time and thought to this illusion … I won’t tell you how it was done, but I can assure you it was pure trickery. I did it by perfectly normal means. I devised it to show you what can be done along these lines. Now, I beg of you, Sir Arthur, do not jump to the conclusion that certain things you see are necessarily “supernatural,” or the work of “spirits,” just because you cannot explain them….

From the 2006 biography The Secret Life of Houdini (Atria Books)

However, Doyle’s belief was firm, and their falling-out became only a matter of time. Nevertheless, Houdini continued his fight against spiritualism and found notable collaborators (notable after their own deaths, anyway) along the way. He hired H. P. Lovecraft, now renowned in his own right as the creator of the cosmic horror genre and the Cthulhu lore, and Lovecraft’s friend C. M. Eddy, Jr., for a book project debunking superstition and miracles, titled The Cancer of Superstition.

Debunking of his lifetime:

Perhaps most famously, his final piece of “evidence” against spiritualism came upon his own death. He had given his wife, Bess, a secret code in advance, promising that he would deliver it from beyond the grave, whatever it took, if such a spiritual realm existed. This approach is not only romantic but arguably quite scientific in its philosophy (albeit not bulletproof). His hypothesis was that there are no ghosts and no afterlife or spiritual realm that can be contacted. So he set up a risky test (even though mediums might very well attempt various fraudulent techniques): if a medium succeeded, it would disprove his theory and suggest that the spiritual realm is real and can be contacted. Perhaps this can be seen as his last glimmer of hope that somewhere out there a true medium could prove him wrong and that, to everyone’s comfort, the world of the dead actually exists. Or perhaps he intuitively understood the need for a risky test, and knew that he could only be certain of the mediums’ fraud through an active attempt to disprove his own theory.

Either way, while Bess encountered some attempts at fraud (which were later exposed), she never encountered a genuine instance where the message “Rosabelle believe” came through. In 1936, after one last unsuccessful séance on the roof of the Knickerbocker Hotel, Bess put out the candle that had been burning beside a photograph of Houdini since his death. She is later noted to have said, “Ten years is long enough to wait for any man,” ending her search for one last voice from her loved one.

Legacy of debunking and backfire effect:

Houdini, as one of the OG celebrity debunkers, clearly shows his influence: generations of magicians have followed suit, with James “The Amazing” Randi, Dorothy Dietrich, Penn & Teller, and Dick Brookz among them. His debunking certainly remains iconic to this day, inspiring modern debunkers across the world both in real life and as a fictional archetype.

However, I think Houdini’s shining “success” also illustrates the inherent limitations of debunking. His debunking of individual mediums, for example, did not put an end to the whole spiritualism trend of the time. That is perhaps inevitable, but it speaks to how easily “made-up” and pseudoscientific claims proliferate, while debunking them one by one will always be a game of catch-up. The proliferation is perhaps unstoppable so long as the interest is there as a trend. It can even be argued that the debunking fueled the trend to some extent: it made a whole spectacle out of debunking, and the harder you criticize the fraudsters, the more valuable and coveted the “true medium” somewhere out there comes to seem.

Finally, the relationship between Doyle and Houdini evokes the backfire effect warned about in The Debunking Handbook (yes, this is also a thing!): if the debunker’s message spends too much time on the negative case, if it is too complex, or if it feels threatening, it can end up strengthening the belief of the very people you are trying to debunk. The slate-writing trick Houdini performed seems emblematic. Perhaps Doyle felt “threatened” (knowingly or not) about his intelligence, since Houdini’s message essentially was that just because you cannot explain something does not make it supernatural. Or maybe it was a more direct threat to his motivated belief: Doyle’s attachment to the idea of the “spiritual realm” was much stronger than Houdini’s, to the point where he simply couldn’t let it go. Or perhaps Houdini’s refusal to explain how the trick was done (he guarded his tricks as professional intellectual property throughout his magic career) acted as a barrier, withholding the final proof required to rule out the possibility that Houdini really was supernatural.

Regardless of whether Houdini’s tactics are the best approach against the modern pseudoscience epidemic, there is an interesting lesson we can learn from the past. Perhaps there is no “perfect cure-all” tactic, but let us all embrace our inner Houdini and put on our skeptic hats when presented with phenomena unexplainable by modern science. While the supernatural and the occult have their own charm (I’ll admit it, who doesn’t love a good ghost hunt or mystery!), let us consider whether some such explanations feed into “defrauding the bereaved.” And at those moments, let us be as brave as Houdini: willing to stand our ground, say no, and propose an explanation that is as scientific as possible.

Featured Sources and recommended reading:

I hyperlinked most sources, but here are a few that might be of interest 😉

Scientific American articles:

https://www.scientificamerican.com/article/houdinis-skeptical-advice/

https://www.scientificamerican.com/article/scientific-american-vs-the-supernatural/

The Debunking Handbook:

https://skepticalscience.com/the-debunking-handbook-redirect-page.shtml

→ A fun video on the mysterious death of Houdini (and the extent of his debunking of mystics)

CUSAP After Hours: Debate with compassion while sharpening your philosophical razors

Author: Maya Lopez (Blog Chief Editor)

Following our first event of 2024, which was more of a lecture, we held a workshop on debating, but with a dash of compassion, to see if we can really change minds. Organized by our two CUSAP co-chairs, the session covered theories and practical tips for approaching conversations with misinformed individuals, plus some role-playing, all in a relaxed at-home evening at Queens’. Here are some of the takeaway messages I got from the session, a few tricks that everyone can keep up their sleeve to be a better communicator, and a handful of “philosophical razors” to further sharpen critical thinking.

So why should I even “debate”?:

A long, long time ago, in my previous life as a teenager, I used to attend academic debate events. I often hear people say you learn to love the things you’re good at, but for me, winning awards in debates was never really enjoyable. Academic debates (especially those meant to tackle genuinely debatable topics) are inherently about the art of debate and presentation. Hence the typical signs of toxic online debaters can apply here too: a play on words, a heated back-and-forth, tactfully pointing out the opponent’s logical flaws (i.e., being a nitpicking jerk aiming for the GOTCHA moment). Somehow, the better I scored, the more I felt like a… horrible person.

And really, anyone doing this in everyday life, even in the calmest manner, is far from likable and most likely won’t convince anyone holding a different opinion. (Case in point: Socrates may have many philosophical and logical frameworks credited to him, but arguably what sent him to death row was his bona fide troll-ness…) Furthermore, in an academic debate there is more or less a referee to fact-check, but who is there to do that in real time in everyday conversations?

This is why I personally shied away from debating when I hit college: it was mentally draining and even felt like it pulled the worst out of me. But naturally, there are times in life when you will encounter people with different opinions. Hopefully it is just a “debatable” opinion, but what if some of those “opinions” are harmful? Or dangerous? Or demeaning? It may be more immoral NOT to debate, or really, not to show and persuade toward other ways of thinking. So this CUSAP event was also my personal journey to re-study debate in a different light: as a way to empathize and to suggest, with compassion, different ways to look at things.

So then, what’s the trick?

We covered many theories during the session but here are a few highlights:

  • Golden minute → A technique commonly practiced by clinicians, who give about a minute at the beginning of a patient examination to just let the patient describe all of their symptoms. Being attentive during this minute through active listening, without interrupting, not only makes the patient feel heard but also creates a great opportunity for their emotions and perceptions to be shared. Similarly, in the context of a misinformation debate, the person you are talking to may very well be in some sort of unease or (emotional) pain, so give them time to let it out. …And academic-debate-wise, this is quite tactical, because you let your opponent reveal their cards first… But jokes aside, listening is a genuinely underestimated part of a conversation, because it’s easy for us to carry in our own biases too. We might assume where their misinformation is coming from and attribute it to certain personality traits. So let them speak before we make unfair assumptions.
  • Good faith principle → Speaking of unfair assumptions, this next technique is literally about avoiding them. It is apparently even a principle in law, and as the name suggests, one should not assume that the misinformation or conspiracy theory comes from bad intent. Of course, the boundary between mis- and disinformation can be murky, but in most daily contexts, what are the odds of encountering someone who profits off spreading post-truth? Rather than an evil mastermind, it’s more likely we are simply talking to someone who is genuinely confused, agitated, concerned, or scared.
  • New information over denial of info → Bombarding people with denial (i.e., screaming YOU’RE WRONG) is quite ineffective at altering beliefs. Studies suggest that people struggle to change a belief simply because they are presented with opposing facts, since the two juxtaposed accounts have to compete cognitively for the position of that belief. So rather than confronting the misinformation head-on, we can present new information that makes the listener think about (or, better yet, question) the premises behind their conclusion.

Let’s say a person takes an anti-COVID-vaccine stance, which they say is because they can’t trust evil pharma and this is just a scam to make a profit. Sure, big pharma is made up of companies, and companies are profit-driven. But I read that AZ didn’t make a profit on its COVID-19 vaccine, and there are worldwide equitable programs like COVAX, no? Hopefully, partially agreeing while also offering new facts about their premise (all big pharma projects = bad) will get them to explore other views, rather than us simply opposing their conclusion (COVID vaccine = bad). Of course, this is not bulletproof, and that can lead us to…

  • Steel-man tactic → The person you were talking to doubled down and argued that vaccines are bad because big pharma is bad. Okay then, you know what, let’s explore that together. You may know this tactic’s evil twin, the straw-man, which attacks a distorted, weaker version of the argument; here, by contrast, you purposely bolster their argument and explore what its strongest version would look like. An interesting post here details the process we can put into practice, but essentially this lets the two parties in the conversation get on the same page and explore the nuances together. Who knows, maybe you’ll end up agreeing with the big-pharma-bad part and the difficulties of capitalism in general, at which point is this about vaccines at all? And off we go to the anti-capitalism march! Or maybe some nuance will emerge: well, not all projects are bad, I guess, especially the non-profit ones. Either way, exploring the why behind the misinformed conclusion can lead to constructive exploration and, ultimately, to an updated conclusion.

Of course, we covered a lot more tips in the workshop, but I personally think it boils down to our intent. Think about why we want to persuade them. Most likely, we care about them to some degree. Then show that. TELL THEM THAT. In fact, I’d argue that if your intent in “debating” is just to show off your “smartness” (not even for knowledge’s sake, but specifically to put the other person down), perhaps you should not be engaging in the first place. Perhaps we need to first apply the good faith principle to ourselves, and if we fail that litmus test, we may do more harm than good.

Some final thoughts and sharpening your critical thinking with philosophical razors

Overall, this was a very uniquely CUSAP workshop, not just because of the theme around pseudoscience, but because it reflects our will to improve as empathic communicators. It is perhaps far too common in academic circles to get lost in the pleasure of logical precision and the vast number of facts we can present about a topic during a debate. That may be an excellent tactic in academic debates, but if we want to change hearts, our intellect should also recognize that we can’t stop at such “self-indulgence”. That said, it’s not an easy feat (I’m most definitely a beginner in the techniques mentioned above). Besides, we are people too, and when the topic is as touchy as pseudoscience, it is fair to acknowledge that having our daily efforts in the pursuit of science denied might hurt our feelings just a bit as well.

Furthermore, being a STEM student doesn’t automatically immunize us against pseudoscience either. Even before debating others in an attempt to persuade, our own logic and perceptions should be challenged (albeit, as a CUSAP member, I do recommend standing firmly on the ground we are convinced is scientific, and acknowledging that not absolutely everything is a matter of perspective). In an attempt to persuade, is our explanation getting overly convoluted? Shave it down with your trusty Occam’s razor! Did we sloppily jump from how things ARE to how we think they “ought” to be, based on our preconceived notions and expectations? Even with good intentions, these two notions should be kept philosophically separate with Hume’s guillotine!

At this rate, it feels like a Swiss Army knife’s worth of philosophical razors will sharpen your critical thinking into the perfect debate case, but don’t get too carried away. In the context of pseudoscience, it’s tempting to pull out Alder’s razor, which claims that if something cannot be settled by experiment or observation, it is not worthy of debate. For many pseudoscience topics, this may well be the knee-jerk reaction you gravitate to (you know it’s a waste of time to even contemplate a flat Earth!). But this razor has its caveat: the principle itself is not “scientifically validatable”, which puts the whole premise under strain. This is where we might dial down the blanket dismissal of all “unprovable” theories (and hence of how “our science is right”), and focus instead on theories that can be proven wrong, using Popper’s razor (also known as Popper’s falsifiability principle). While this lets us put forward the best scientific theory that has not yet been falsified, should we really be debating “theories” so fundamentally fantastical with the same priority as those at least partially grounded in scientific reality? We can use Sagan’s razor (aka Sagan’s standard) to cut down extraordinary claims that are not supported by extraordinary evidence, and ultimately pick our battles.

There are still more of these philosophical razors, but at the end of the day, neither these concepts nor the debate tactics we learned in the session can be mastered without practice. Sometimes the conversation will get emotional, and sometimes it happens at the wrong moment, when things are very personal. So while practicing these skills out in public might seem scary, I think CUSAP has cultivated a safe space for all of us, united by the goal of betterment, to explore and discuss our ideas, put this knowledge to the test, and practice our “art of compassionate debate”.

If this kind of philosophy of science is your thing, or you just want to boost your science communication skills (and trust me, we hold a lot of casual, at-home-vibe sessions!), consider joining the CUSAP mailing list or following us on Instagram. Come say hi at one of the upcoming events if you can make it 🙂

Sources and Recommended Reading:

  • Most sources are hyperlinked in the text, but here are a few key picks:

The Tuskegee Syphilis Study: How “science” betrayed us & came back to haunt us 50 years later

Author: Maya Lopez (Blog Chief Editor)

The year is 1966. A 29-year-old American epidemiologist, Peter Buxtun, filed an official protest with the Public Health Service’s Division of Venereal Diseases. He had overheard a conversation among colleagues about a man whose family traveled a long distance to see a doctor, away from their hometown. The man was diagnosed with tertiary syphilis, a later stage of infection that damages the central nervous system, and was given a shot of penicillin. However, when the Public Health Service heard of this treatment, they were enraged: the unknowing doctor had treated a “research subject” from the Tuskegee Study.

The epidemiologist found this rather… strange. Why would you NOT treat a clearly ill individual at a late stage of symptoms when an effective treatment exists? After reviewing nearly ten round-up reports, he realized the horrors of the supposed “research”.

“I didn’t want to believe it. This was the Public Health Service. We didn’t do things like that.” 1

He knew that something had to be done. He consulted the literature on the German war crime proceedings and the Nuremberg Code, and wrote a report comparing the CDC’s work to that of the Nazis. This didn’t go down well with his bosses; it was treated as merely a complaint from an “errant employee” and soon dismissed, on the basis that the “volunteers” had full freedom to leave whenever they wished and that this “serious work” was not yet complete. But isn’t the problem the fact that such a study was taking place at all? In November 1968, a few months after the assassination of Martin Luther King Jr., Buxtun filed another protest. This was no longer just an ethical critique within epidemiology; it harbored political volatility. For the first time, officials realized how severe the political repercussions of this case could be, but once again, the protest was rejected.

So in 1972, Buxtun went beyond internal protest and became a whistleblower, bringing the story to Jean Heller of the Associated Press. The exposé first ran in the Washington Star, and by the following day it had made the front page of the New York Times. And that is how one of the most notorious pieces of modern medical research finally came to an end, after 40 years of deception and harm to unknowing participants. Incidentally, this can be seen in light of the “self-rectifying” capability of science: nightmarish research conducted in the name of science was rejected and ultimately taken down by a scientist. And yet, 50 years later, this very pseudoscience that had been rejected was resurrected, only to fuel more pseudoscience…

The Tuskegee Syphilis Study

So what exactly took place in this Nazi-like research atrocity conducted in the name of medical science? Tuskegee is a university town in Macon County, Alabama, USA. The study started in 1932 and consisted of 399 Black men who had already contracted syphilis and 201 who had not. However, these “volunteers” were not told what they were signing up for. In the midst of the Great Depression, when medical care was already hard to obtain in this rural region, the Public Health Service (and later the CDC), in cooperation with local academic and medical authorities like the Tuskegee Institute (now Tuskegee University), advertised a social medical roll-out program with the following benefits: free physical examinations with rides to and from the clinics, hot meals on examination days, and treatment for some minor ailments and injuries. There was also an added guarantee that the family would receive compensation if they agreed to an autopsy of the body after the series of medical studies.

This is a particularly compelling offer in the context of the Great Depression, when male breadwinners had every incentive to help their families financially. And for those who wondered whether this could be too good to be true, there was added encouragement from already trusted local authorities within the community (pastors, Black doctors, and so on), emphasizing how lucky they would be to join such a “special medical program” that would better their lives and the field of medicine, and convincing participants that it was worth their time. So the men never agreed to any “syphilis study” at all. People participated and agreed to free check-ups and data collection based on trust: trust in the official, authoritative figures in lab coats, in doctors, and in medical practice.

However, the doctors they trusted did not provide effective treatment even once symptoms started to develop. This was despite penicillin becoming established as an effective syphilis remedy by the mid-1940s, little more than a decade into the 40-year study. In fact, participants were often not even told they had syphilis; doctors only said they had “bad blood”, a catch-all term for various everyday maladies not limited to syphilis. But surely, once your untreated syphilis is worsening, you or your family might seek help, a second opinion or treatment? Some accounts reveal that participants were told they would lose all “benefits” from the study if they sought treatment elsewhere, showing that the study had no intention of curing these men, EVER. By the way, this study, colloquially known as the “Tuskegee Syphilis Study”, comes with a full title: “Tuskegee Study of Untreated Syphilis in the Negro Male”.

From its inception, this study was structured as a funded onslaught of slow death, meant to prop up the shaky theories and hypotheses it was based on. It was, arguably, funded pseudoscience riddled with death and with harm to lives not yet born. Over the course of 40 years, 28 confirmed and possibly 100+ men died as a direct result of untreated syphilis from this study. It also led to many more infections via the untreated participants, including children born with congenital syphilis.

The Pseudoscience of Tuskegee Study and Our Critical Thinking

Naturally, once the whistleblower Buxtun brought the Tuskegee Study to the public, there was massive outrage leading to the termination of the experiment. However, we need to acknowledge that the field (including those not directly involved) didn’t all just immediately say, “Yup, this is messed up; that was bad of us.” Some scientists and parts of the medical community attempted to “still see value” in it or even outright defended it:

“There was nothing in the experiment that was unethical or unscientific.” – Dr. John Heller (assistant in charge of on-site medical operations)2

Now, the many, many unethical aspects need no elaboration, especially given the blatantly obvious premise and the clear willingness to MAKE SURE all participants suffered a slow death, thanks to the “autopsy-focused” nature of the study. But the other half of Heller’s claim is also completely wrong. This study was EXTREMELY unscientific.

Start with the premise of the experiment: the study was founded on a race-science belief rooted in pure racism. It was believed that syphilis would manifest differently depending on race. African Americans were thought to suffer more damage to the cardiovascular system, whereas whites, with their supposedly “superior” brains, were thought to suffer the damage in the central nervous system. The experimental design was also flawed, and the two study groups were not strictly maintained. For example, individuals who started off syphilis-negative were kept in that group even if they later contracted the disease and tested positive, ruining the integrity of the control group. Furthermore, we often discuss in science how new research should address a knowledge gap, i.e., contribute to some unanswered question or provide novel data to the body of knowledge. In this regard too, the Tuskegee Experiment fails to be “scientific”: it really adds nothing to the body of knowledge on syphilis. Syphilis was, even back then, a well-documented disease. There are many historical records of what untreated syphilis looks like across time and place. Multiple waves of infection surged through nations before the establishment of useful treatments, and these outbreaks left a culturally ingrained record of how nasty the disease becomes when untreated. In Europe, numerous medical paintings depict such “untreated” syphilis (Google Images at your own discretion, please). In Edo-era Japan, a poem speaks of the nightjars (then a code word for illegal prostitutes) “lacking noses”:

“鷹の名にお花お千代はきついこと”

To the nightjars, ohana-ochiyo (a pun on words sounding like a falling nose) is indeed harsh3

referring to the notorious terminal symptom of the nose falling off. It is clear that, in the eyes of those who conceived this study, it was really a sick “passion project”: a morbid curiosity, entranced by autopsies and by examining the different ways Black bodies succumb to the disease, hoping to find a difference from its manifestation in “whites”.

While this is an extreme case, with a glaring lack of scientific basis, the study leaves an important message for all modern scientific thinkers: critical thinking. This may sound obvious, but given that science (now more than ever) is a group endeavor and often institutionalized, being critical at all times requires not just checking your own bias on the research topic, but also questioning what you are told to do. In Buxtun’s days, disagreeing with one’s seniors was seen as far more unacceptable. Yet whistleblowers facing brutal punishment is a phenomenon far from over. A 2010 meta-study of 216 corporate fraud cases is said to have found that over 80% of named employees who reported the fraud faced some sort of punishment, including being fired. This kind of reaction is unfortunately not reserved for private institutions. Another account found that among nurses who reported misconduct, all had suffered some level of informal retaliation (including ostracism and pressure to quit), nearly 30% faced more formal reprimands, and 10% were referred to psychiatrists. The message is clear: whistleblowers, despite the public image of a heroic spotlight, face personal and professional risks to this day, risks that are only exacerbated in work environments where proper legal protection is lacking.

Buxtun, while not the only one who protested over the long course of the study, took the exceptionally courageous step of not letting the issue die, despite all the rejections of his protests. More broadly, this is a testament to the continuous critical thinking that should be required of all scientists to prevent a purely top-down approach to research. If anything, science should be rooted in not being afraid to point out that something is wrong; falsificationism is its field-defining trait. This constant questioning should extend beyond the factual premises underlying the hypothesis under investigation to the motives and implications of the study itself. What question will this study answer? What is it for? And at what cost? Naturally, the Tuskegee Study paved the way for numerous legally binding ethical guidelines, including the modern practice of informed consent. And while legal reforms to protect whistleblowers did arrive in later decades (the Whistleblower Protection Act of 1989, for example), laws alone cannot truly encourage people to step out of the bystander role unless we also address a research culture that still makes whistleblowing an uphill battle. Hence, researchers must be conscious of, and vigilant about, these matters across the board.

Beyond the History Textbook: When Pseudoscience Revives to Haunt Us Today

I could have finished here with a neat lesson of the day, except that in recent years this study has resurfaced with malignant intent. While this brutal history remains forever accessible to the public, from President Clinton’s apology to educational content on Black history, my personal impression is that more of the discussion should have come FROM the science community. Sure, maybe there are “enough” retellings in terms of science history, US history, politics, and so on, but this study had a catastrophic impact on trust in population-scale medical research, and it serves as a primary example in the field of the importance of informed consent and whistleblowing (though note that consent was established as a prerequisite well before the end of the Tuskegee Study). So many measures now exist to make such atrocities near-impossible. While it may not have been intentional, the lack of proactive discussion from the science community at all levels perhaps buried this away as trivia for those “in the know”. In retrospect, this may be one of our community’s failures: to understand and effectively address the underlying distrust of institutionalized science and authority among individuals from a marginalized community with such a history of exploitation.

In 2021, as COVID-19 vaccines were being rolled out, a documentary-style video production called Medical Racism: The New Apartheid was distributed online by the anti-vaccination group Children’s Health Defense, laced with conspiracy theories about the newly made vaccines. The central assertion of the video is said to be that the US wants to harm minorities with vaccines. It reportedly rehearses the classic anti-vaccine claims about a “link” between autism and the MMR vaccine, but it also alleges that the COVID-19 vaccination effort is a secret experiment on African American and Latino communities specifically. And to illustrate the point, it refers to the Tuskegee Study.

Yes, the Tuskegee experiment was real, and as Clinton put it, it was “deeply, profoundly, morally wrong”. However, what we see here is the exploitation of a truth in pursuit of a malicious motive: racially targeted propaganda, ONCE AGAIN, hurting a comparatively vulnerable minority community by actively dissuading its members from accessing health care. To me, this is what makes this particular exploitation of the Tuskegee Study evil: it tarnishes the very thing we should have taken as a lesson. The study happened because its context added up like a perfect storm: a community that was already a medical-care desert, in the midst of an extreme economic crisis, fuelled further by racism on the side of the researchers. It is only natural that such betrayal by “the science” and by healthcare providers leads to more distrust, forming a negative spiral in which real, scientific healthcare fails to reach them. Despite this recognized danger of losing trust, here we are, with the very same narrative being regurgitated not to promote access to medical care or transparency in science and research, but to push people to exclude themselves from science as a whole.

The video reportedly goes on to reassure its target audience that they will not need the “supposed” immune boost from the vaccine, by performing mental gymnastics: claiming that African Americans are “naturally immune from COVID-19” and that vitamin D will protect them from the disease. The irony of the production doesn’t end with its content, but extends to how the subject is presented. The message is clear: viewers should not trust the medical authorities (the CDC and big pharma), but they should totally trust this information, because look! Kennedy, with all his “authoritativeness”, is gracing us with serious advice. While it is hard to measure the effect of this propaganda, given that the video became unavailable on social media platforms shortly after its release, the contribution it may have made toward radicalization and vaccine hesitancy (which was apparently higher in minority communities) is easy to imagine. In a sense, it is a “clever” production: it undermines trust in mRNA vaccines while presenting no actual evidence that vaccines are unsafe, and instead taps into a larger historical trauma, fear-baiting on a memory of deception deeply ingrained in the community.

Unfortunately, there is perhaps no quick fix for trust once it is lost. After all, this is an understandable, self-protective skepticism in a population that was traumatized in such a manner. However, as new medical frontiers are explored and studied, they must come with the intention to deliver, to as many people as possible. If so, the science community must not let the brutality of historical trauma haunt us in the present, only to be exploited further and deepen healthcare inequality in the future. Rather, the past needs to be told openly to prevent such exploitation, alongside a proactive stance showing that we have evolved and will continue to evolve; an apology alone is a mere word, and it will not win back the trust that has been lost among some. These accounts of history need not be fear-mongering, either. Acknowledging what happened in the past should go hand in hand with explaining how we have changed. For example, in the US, the National Research Act, a federal law on ethical guidelines for human medical research, was passed in 1974, and many institutions have since added measures regarding the race and ethnicity of protected groups in studies. Policies and guidelines are often not the sexiest conversation science can offer. But for constructive conversation, these “follow-ups” on the old news, highlighting what we have learned and what we are doing now, should be communicated proactively, with compassion and with acknowledgment of the historical misconduct. Hopefully, such conversation (including the uncomfortable introspection into the darker chapters of history) can help demonstrate our consistency, our reliability, and our compassion (that we care), and help bring the medical frontier to as many people as possible.

We will be hosting a screening of this anti-vaccine film by Children’s Health Defense this coming Saturday (Feb. 22nd) at Queens’. Join us there to explore and analyze how such misinformation and pseudoscience spreads, and for further discussion with the CUSAP team! Thank you for reading this rather lengthy journey through history, and see you at the event!

Sources and recommended reading:

Some excellent reading on Buxtun and the whistleblowing behind this study:

  1. https://web.archive.org/web/20200519123834/http://advocatesaz.org/2012/11/15/i-didnt-want-to-believe-it-lessons-from-tuskegee-40-years-later/ ↩︎
  2. https://www.nytimes.com/1972/07/28/archives/exchief-defends-syphilis-project-says-alabama-plan-was-not.html ↩︎
  3. This is self-translated. I’m no poetry major sorry. ↩︎

More in general about the Tuskegee Syphilis Study:

https://thelancet.com/journals/laninf/article/PIIS1473-3099(05)01286-7/fulltext

https://en.wikipedia.org/wiki/Tuskegee_Syphilis_Study

https://thispodcastwillkillyou.com/2019/10/29/episode-36-shades-of-syphilis/ (amazing podcast of two STEM experts talking about disease)

https://www.history.com/news/the-infamous-40-year-tuskegee-study

The Scientific Method Is Western-Biased?

Author: Isha Harris (Co-President)1

This week we had a lecture titled Decolonising Global Health. The premise was that modern medicine is a colonial artefact, as is science as a whole. Western philosophers pioneered the ideas of objectivism and empiricism, and in doing so marginalised traditional and indigenous forms of knowledge. The lecturer explained how the coloniality of knowledge is wielded by the descendants of colonisers, and argued that we are facing an epistemicide as indigenous and alternative forms of knowledge are discredited. She described medicine as the cruellest outcome of the colonisation of knowledge, and expressed regret that medical knowledge is centred around empiricism and positivism. She then posited some solutions for medical schools, including:

  • Consider indigenous and traditional healing practices as equal or superior to western medicine
  • Question the use of hospitals, suggesting we don’t need to practise medicine in these colonial settings
  • Accept degrees in traditional medicine as valid accreditation to practise medicine in all parts of the world
  • Use dreams as a diagnostic tool

I will preface by saying that I absolutely think science has a murky, colonial past, and a lot to answer for throughout history. Exploitation and unethical experimentation have been all too common, and it is quite unsurprising that science has lost the trust of many minority groups. Particular examples include Henrietta Lacks, the namesake of HeLa cells; James Marion Sims’ gynaecology experiments on enslaved Black women; and Nazi experiments during the Holocaust. Science is not perfect, and we are right to continue questioning its practice to this day. At least, science as an institution.

But science as a method is independent of the shortcomings of the past. Science is simply the process of testing your ideas by observing the world, and updating them when the evidence conflicts. We are all scientists when we look out of the window and decide whether to bring an umbrella, or when we smell milk to check if it’s gone off.

Scientific thinking has existed, in all parts of the world, for millennia. If a caveman was told that a patch of berries was safe to eat, they’d sure as hell want some evidence before eating them. If I claim that your spouse is cheating on you, you’re going to ask me to prove it, whether you live in the West or the East, now or in 3000 BC. It just so happened that it was the Western Enlightenment philosophers who formalised it, using big words like hypothesis.

In my view, to claim that the scientific method is Western-biased is intensely patronising, and actually quite racist. It is to say that non-Westerners simply cannot comprehend the concept of testing your ideas with evidence, that they were left behind after the Enlightenment, stuck in the dark ages of folklore and conspiracy. And this is absolutely not true! The developing world does some incredible science, and is really leading the way in certain fields such as palaeontology, infectious disease, and epidemiology. Africa, for example, boasts some pretty impressive institutions, such as the KEMRI-Wellcome Trust and the Evolutionary Studies Institute. I wonder what the great scientists working in these faculties would make of my lecturer’s grand claims?

More broadly, the concept of ‘alternative forms of knowledge’ as a whole just seems vastly unserious. Epistemological relativism, where knowledge is seen as context dependent and equally valid regardless of empirical support, is ridiculous at best, and dangerous at worst. It threatens the great progress that the developing world is seeing, thanks to improved technology, agriculture and healthcare. It risks romanticising cruel practices, such as female genital mutilation, simply by virtue of them being indigenous. And to teach Cambridge medical students that these forms of knowledge are equivalent to evidence based practice is dangerous. If the students took this lecture seriously, we wouldn’t question a patient when they opted for herbal remedies over chemotherapy. Because all forms of knowledge are equivalent, right? Surely the cancer cells understand that!

All knowledge should be critically assessed, and subject to strict empirical standards. As it stands, science has the most demonstrable impact on wellbeing, so it is the knowledge-form we opt for. Of course, if I am presented with some new evidence showing that an alternative way of knowing in fact leads to a more effective discovery of the truth, then I will gladly reconsider my position, like any good scientist. But until that day, let us spread the wonders of modern science far and wide, uplift the developing world to enjoy its rich rewards, and support their great efforts in practising it themselves. 

  1. This article was originally posted on the Co-President’s personal blog and adapted for publication here for CUSAP. ↩︎

Unpacking Immigration Misinformation in The 2024 Elections: Claims, Facts, and Psychological Influence

Author: Leila Yukou Lai (Speakers and Academics Officer)

During the 2024 elections in both the UK and the US, immigration emerged as a prominent issue in political campaigns. Figures like Farage claimed that, 

“Mass immigration is making Britain poorer… half of the immigrants coming to the UK are dependents who do not work”

“We need to prepare for Channel migrant ‘invasion’ from countries ‘with terrorism, gang culture and war zones’”

“Immigrants are causing divisions in communities and have made the UK unrecognisable”

Similarly, Trump’s campaigns included assertions such as,

“We have more terrorists coming into our country now than we’ve ever had – ever in history, and this is a bad thing. We have thousands of terrorists coming into our country”

“They are eating the cats and dogs”

“They are taking away your jobs”

Some of these statements are partial truths, while others are false information. This article will fact-check the prominent immigration-related claims from the 2024 elections. We will examine how political campaigns leverage concerns like economic threats, national security fears, and cultural anxieties to create sensationalised perceptions of immigration that shape public discourse in ways often misaligned with the underlying realities of the issues. Additionally, we will examine the psychological roots and impacts of immigration narratives.

We will further discuss practical strategies for addressing and countering such narratives in everyday life in our Feb 4th Event, so please register to join if it interests you.

Fact-Checking Prominent Claims & The Psychological Roots

Economic Threats

Claim (Farage): “Mass immigration is making Britain poor… half of those that have come aren’t coming to work, they’re coming as dependants.”

Facts: The first part of the claim can be debunked by research led by Professor Dustmann from UCL, which found that immigrants to the UK, particularly those from the EEA and post-2004 EU accession states, made significant positive fiscal contributions, with EEA immigrants contributing 34% more in taxes than they received in benefits and recent EU immigrants adding £20 billion to the public purse. In contrast, UK natives’ tax payments fell 11% short of the benefits they received, resulting in a net cost of £617 billion.

The second part of the claim is partially accurate. The inaccuracy lies in the overall visa statistics: only one-third of visas issued (all types) in the most recent reporting period were for dependents. For work visas specifically, however, he is almost correct: 43% were dependents. Nevertheless, he omitted the fact that these dependents are ineligible for benefits but allowed to work, positioning them to potentially contribute to the economy rather than become the burden he falsely implies.

Claim (Trump): “Immigrants are taking away your jobs.”

Facts: This claim can be debunked with insights from economics research and experts.

Firstly, economists from the Brookings Institution suggest that immigrants often fill labor-intensive positions, such as gutting fish or working in farm fields, which are typically shunned by native-born workers. This suggests that immigrants are not necessarily competing for the same jobs as the majority of American workers.

In addition, analyses from the US National Bureau of Economic Research (2024) report that immigration does not significantly drive down wages for American workers overall.

Building on this, it is noteworthy that although immigrants represent around 15 percent of the general U.S. workforce, they account for about a quarter of the country’s entrepreneurs and inventors, according to Harvard Business Review. By creating new businesses and innovations, immigrants contribute to job creation and economic growth, further undermining the notion that they simply displace American workers.

National Security Threats

Claim (Trump): “We have more terrorists coming into our country now than we’ve ever had – ever in history, and this is a bad thing. We have thousands of terrorists coming into our country.”

Facts: This claim implies that more terrorists have entered the US under the Biden administration, which is misleading.

Data from the U.S. Department of Homeland Security indicate that the actual number of individuals on the terrorist watchlist caught at the border is in the hundreds (139 at the southern border and 283 at the northern border as of July 2023), not the thousands Trump claimed.

Furthermore, since the 2021 fiscal year (the beginning of the Biden administration), the number of individuals on the U.S. government’s terrorist watchlist apprehended at the borders has increased each year. This trend indicates that border screening measures have become more rigorous, rather than more lenient as Trump suggested.

Claim (Farage): “We need to prepare for Channel migrant ‘invasion’ from countries ‘with terrorism, gang culture and war zones’.”

Facts: While it is true that the top foreign nationals involved in UK terror-related offences from 2002 to 2021 were people from Algeria, Iraq, Pakistan, Iran, Afghanistan, Turkey, Somalia, India, and Sri Lanka, it is important to note that these offences represent a small fraction of their respective communities in the UK.

In the year ending 30 September 2024, the highest number of terrorist offences were still committed by UK nationals and those who are ethnically White, according to data from the Home Office.

Research published in the British Journal of Political Science shows there is little evidence that more migration unconditionally leads to more terrorist activity, especially in Western countries.

Cultural Anxieties

Claim (Farage): “Immigrants are causing divisions in communities and have made the UK unrecognisable.”

Facts: Concerns about cultural identity are rather subjective and difficult to address purely with data.

However, Farage’s claim can still be challenged by a review conducted by the Centre on Migration, Policy and Society at the University of Oxford, which concludes that higher ethnic diversity in UK localities does not consistently correlate with higher social tension. Instead, local economic factors (e.g., unemployment, funding for public services) are more predictive of community conflict.

Therefore, the claim that immigration undermines social cohesion lacks credibility.

Claim (Trump): “In Springfield, they are eating the cats and dogs.”

Facts: This claim was fuelled by a comment made by Trump’s running mate, JD Vance, who later admitted in an interview with CNN that he was willing to “create stories” to get his message across.

According to state officials from Ohio, including even the Republican Governor Mike DeWine, there is no credible evidence to support the rumor that immigrants in Springfield, Ohio, are stealing or eating pets. Local law enforcement and animal control records do not reflect any such incidents, and no verified reports exist.

Sensationalised Language and the Psychological Impact of Immigration Narratives

Having clarified the relevant facts, let’s now examine the linguistic choices employed by conservative leaders in their claims about immigration. Even where some of their claims are partially correct, it is undeniable that the statements are highly sensationalised and crafted to elicit strong emotional responses. This dynamic was evident in Ohio, where baseless allegations about the Haitian community in Springfield eating pets triggered public panic and a wave of hoax bomb threats. Similarly, in England, false narratives claiming that an asylum seeker was the perpetrator of a knife attack in Southport, though not directly linked to Farage’s claims, led to widespread riots spanning from Plymouth to Sunderland.

One reason such rhetoric remains effective is its reliance on several psychological phenomena, including in-group/out-group biases and the negativity bias. For instance, using language like “invasion”, Farage portrays migrants as an external force poised to disrupt national order, framing the situation in a way that elicits anxiety and heightens threat perception. This framing aligns with Social Identity Theory, whereby the in-group (domestic population) feels compelled to defend itself against the out-group (immigrants). Similarly, when Trump claims immigrants are “taking away your jobs” or there are “thousands of terrorists coming into our country”, he is tapping into the negativity bias which refers to the human tendency to pay more attention to, and be more influenced by, negative or threatening information than by neutral or positive details. These emotional depictions overshadow data indicating, for instance, the benefits that immigration brings to local economies or that instances of immigrant-linked terrorism are statistically rare.

In addition, repeated exposure to a single narrative can increase people’s belief in its accuracy, even when that information is demonstrably false (sometimes called the illusory truth effect). Therefore, simply by virtue of repetition, political campaigns can embed the same message into public consciousness without necessarily adhering to factual accuracy. As a result, it is challenging for data-driven clarifications about immigration to break through the emotional impact of sensational rhetoric. Nonetheless, recognising these psychological levers is a crucial step toward fostering more nuanced, evidence-based discussions on immigration, rather than allowing panic and misinformation to drive policy and public sentiment.

Susceptibility to Immigration Misinformation

Research suggests that individuals with higher levels of ethnic moral disengagement are more likely to believe in racial hoaxes. Moral disengagement occurs when an individual justifies or rationalises harmful beliefs or behaviours, often by dehumanising out-groups or reframing actions as morally acceptable. This cognitive process allows individuals to convince themselves that commonly accepted ethical standards don’t apply to them, hindering their empathetic capacity, especially toward marginalised groups. Such tendencies are often linked to authoritarian worldviews, which favor strict hierarchies and resist social change, making these individuals particularly susceptible to immigration misinformation. 

Our speakers for the upcoming event, Dr. Tessa Buchanan and Malia Marks, have both conducted research on the relationship between authoritarian tendencies and susceptibility to immigration misinformation, and they will share their findings with us at the event. Their insights will not only shed light on the psychological dynamics of misinformation but also equip us with tools to critically assess narratives surrounding immigration. We invite you to join us on Feb 5th at Queens’ College, Cambridge for a fruitful discussion.

CUSAP After Hours: Pseudoscience – the Science through the Looking Glass

Author: Maya Lopez (Blog Chief Editor)

One way to describe pseudoscience is perhaps as distorted science. In one way or another, it presents features that feel scientific, sometimes to an exaggerated degree. However, just as the mirror separates reality from its reflection, a distinction can be drawn between science and its reflection – pseudoscience. Today we will explore such philosophical grounds, to keep pseudoscience from infiltrating reality in this post-truth day and age. To this end, we will look into the original philosophical definitions and the key takeaways I got from a special lecture – our first official event of this academic year – by Prof. Hasok Chang, who has also been supporting us as Senior Treasurer since the inception of our society. The lecture posed a deceptively simple but surprising philosophical question: What is Pseudoscience? And the clue to this question lies about 100 years back, on May 29, 1919.

Arthur Eddington, the director of the Cambridge Observatory, led an expedition, organised together with the Astronomer Royal Frank Dyson, to observe a total solar eclipse and test the theory of relativity. Eddington’s party positioned itself on the island of Príncipe, while another expedition team with Crommelin and Davidson observed from Brazil. The two teams were putting Einstein’s Theory of Relativity on trial: if it was correct and light is indeed bent by gravity, the stars of the Taurus constellation visible near the Sun during the eclipse would appear in positions different from those calculated with Newtonian physics. Together, they made the observation that marks history: the stars aligned with Einstein’s theory.
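For a sense of the quantitative stakes (these are the standard textbook figures, added here for context rather than anything quoted in the lecture), the two competing predictions for the deflection of starlight grazing the Sun’s limb can be written as:

\[
\delta_{\text{GR}} \approx \frac{4GM_{\odot}}{c^{2}R_{\odot}} \approx 1.75'' ,
\qquad
\delta_{\text{Newtonian}} \approx \frac{2GM_{\odot}}{c^{2}R_{\odot}} \approx 0.87'' .
\]

The eclipse plates favoured the larger, Einsteinian value.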

However, this was not the only piece of history being made that day. The story influenced a young 17-year-old in Vienna who, to quote the man himself, found that it “had a lasting influence on my intellectual development.” Karl Popper would eventually become the philosopher who explored the distinction between knowledge based on science and what he labelled pseudoscience.

But first, what is science?

To unravel the question of pseudoscience, we first had to test our own perception of science itself. This may seem straightforward, based on what we learned about the scientific method in middle school. But what is science, really? More specifically, in Popper’s words:

“When should a theory be ranked as scientific?”

Surprisingly, this was a question that modern philosophers had never fully tackled until Popper. While characterising something as vast and wide as “science” itself may be difficult, we can at least lay out the constituents of the things we deem scientific.

So let’s lay some on the table. Many aspects of what we often perceive as scientific might be a little something like: quantitative, explanatory, and empirical. However, all of these aspects are in fact well conserved in other domains that we could also file under – in Popper’s terms – pseudoscience (or at least non-hard/natural science). For example, consider a “social science” domain like Economics. Modern Economics uses models that can be expressed in equations, presenting itself as more heavily mathematical than perhaps some of the medical sciences. Now let’s consider the next aspect on the list: explanatory. Conspiracy theories are second to none when it comes to explaining things – the icebergs can get so deep, and they are bulletproof against all counterarguments, because see? Interpret it this way and it’s just another proof that absolutely everything ties back to the Deep State! But then, at least hard science is the most empirical approach to knowledge… right?

Objective and empirical data gathered from replicable experiments… but words like “objective” and “replicable” are not as clear-cut as one may hope. After all, to those who believe in pseudoscientific theories, such as Flat Earthers, their experiences are indeed genuine, replicable “experiments” that can be tallied and quantified objectively. So these typical “qualities of science”, like empirical-ness, are unfortunately a matter of perspective and not a uniquely defining factor. To further complicate the matter, the struggle with replicability also exists in the world of actual science. Major natural science publications have been noting the “replicability crisis” for years, and it is important to acknowledge that even with genuinely scientific intentions, small differences in interpretation and methods can pave the way to this crisis.

…So if by now you are feeling more confused about what science is than before we started, fret not, because this is very confusing. According to Prof. Chang, many undergrads facing this question in the HPS course fall into the same pitfalls of presumed scientific qualities as those listed above. When pseudoscience at its core seems to mirror – and sometimes “outperform” in – so many key qualities of science, and when some “pseudoscientific” theories have in fact turned out to be the basis of the next science and vice versa, you can’t help but wonder: is there no way to tell science from pseudoscience? While there is no clear-cut line separating the two, luckily there is a philosophical distinction to be made.

Falsifiability and Popper’s observation on “predicting” the present vs. predicting the future

Back to Popper in Vienna: the early 1900s were an unprecedented epoch in the field of knowledge. A new realm of physics was being proposed by Einstein, Karl Marx’s Historical Materialism offered a new view of history, and observed human behaviour now seemed explainable by Sigmund Freud’s psychoanalysis. At the time, these frontiers of knowledge were arguably treated with equal weight as new theories pushing the boundaries of the scientific mainstream. However, Popper concluded that not all of these “scientific achievements” were created equal, and the difference lay in whether the method of obtaining the knowledge was to confirm an existing belief or to actively attempt to refute it.

Let’s unpack that concept by following in the footsteps of Popper himself. Popper attended lectures by Einstein and was studying the psychoanalytic theories of Freud. One day, he asked Alfred Adler himself (an ex-colleague of Freud who eventually formed his own theory of individual psychology) about a case report that was seemingly not… Adlerian. Yet when asked, Adler easily explained the case using his theory, leaving no discrepancy. Given that Adler had never analysed the individual in question, Popper asked how he could be so sure, to which Adler replied that his thousandfold previous experience supported the theory.

And this was ultimately what distinguished Einstein’s theory as the only truly scientific theory on the list above. Adlerian theory, in this case, was used to “predict” why behaviour had already been taking place, and such a retrospective interpretation is very hard to “disprove”. Similar criticism can be applied to (in?)famous Freudian psychology. While still regarded as a foundation of modern psychology, it hasn’t aged too well when it comes to how “scientific” the theories are, where every feminine intimacy issue is seemingly rooted in male genitalia envy and daddy issues. The theories explain why certain psychological phenomena happened remarkably well, by encapsulating them into logically constructed frameworks (although even then, disagreements arose, leading colleagues like Adler and Jung to depart). But while such Freudian theories give us the satisfaction of feeling like we understand why things are the way they are, this is still a retrospective explanation of the past – an explanation built behind a known result. They didn’t do as well when it came to predicting unseen future behaviour, which is what a scientific hypothesis often aims to do. The importance of how you verify scientific claims is perhaps akin to association versus causation in modern medical science. Even if substance X is correlated with, say, cancer, you can only scientifically show that it causes cancer by setting up an experiment – exposing mice or cell cultures to the substance – and testing your prediction about the “unseen future”: does the cancer form or not? While you can always reinterpret past events (or interpret dreams, in Freud’s case) to support your theory, if the cancer doesn’t form, or the stars don’t shift as predicted in 1919, the scientific theory is simply not working (a toy sketch of this prediction-testing logic follows below).
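To make that prediction-testing logic a bit more concrete, here is a minimal, purely illustrative sketch in Python. Everything in it (the function name, the threshold, the tumour rates) is a hypothetical assumption invented for this example, not data from any real study or anything from the lecture; the only point it encodes is Popper’s: a scientific hypothesis must forbid an outcome in advance, so that a future experiment can come out against it.

```python
# Toy illustration of a "risky prediction" vs a retrospective explanation.
# All names, rates, and thresholds below are hypothetical, made up for this sketch.

def risky_prediction_holds(exposed_rate: float, control_rate: float,
                           predicted_excess: float = 0.20) -> bool:
    """State 'substance X causes cancer' as a risky prediction about a future
    experiment: exposed mice should develop tumours at a rate at least
    `predicted_excess` above the control group. Returning False means the
    forbidden outcome occurred and this simple hypothesis is falsified."""
    return (exposed_rate - control_rate) >= predicted_excess

# Imagined future experiment: 40% tumours in exposed mice vs 35% in controls.
if risky_prediction_holds(exposed_rate=0.40, control_rate=0.35):
    print("Hypothesis survives this test (for now).")
else:
    print("Forbidden outcome observed: hypothesis falsified by this test.")

# A retrospective explanation, by contrast, is only ever applied after the
# result is known, so there is no outcome it forbids and nothing it can fail.
```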

Furthermore, Popper brings a new approach to this philosophical limitation of verificationism: falsificationism. In brief, Popper’s realisation about what made Einstein’s approach to a theory distinctively “scientific” led to a philosophical conclusion: science approaches truth by actively trying to rule out future possibilities, while pseudoscience (in the sense of the nature of the knowledge, rather than the modern conspiratorial connotation per se) focuses on proving and explaining past phenomena. The following is an excerpt of his writing from 1962 outlining this concept in detail:

  1. It is easy to obtain confirmations, or verifications, for nearly every theory—if we look for confirmations.
  2. Confirmations should count only if they are the result of risky predictions; that is to say, if, unenlightened by the theory in question, we should have expected an event which was incompatible with the theory—an event which would have refuted the theory.
  3. Every ‘good’ scientific theory is a prohibition: it forbids certain things to happen. The more a theory forbids, the better it is.
  4. A theory which is not refutable by any conceivable event is nonscientific. Irrefutability is not a virtue of a theory (as people often think) but a vice.
  5. Every genuine test of a theory is an attempt to falsify it, or to refute it. Testability is falsifiability; but there are degrees of testability: some theories are more testable, more exposed to refutation, than others; they take, as it were, greater risks.
  6. Confirming evidence should not count except when it is the result of a genuine test of the theory; and this means that it can be presented as a serious but unsuccessful attempt to falsify the theory. …
  7. Some genuinely testable theories, when found to be false, are still upheld by their admirers—for example by introducing ad hoc some auxiliary assumption, or by re-interpreting the theory ad hoc in such a way that it escapes refutation. Such a procedure is always possible, but it rescues the theory from refutation only at the price of destroying, or at least lowering, its scientific status. (I later [after 1920 –NS] described such a rescuing operation as a ‘conventionalist twist’ or a ‘conventionalist stratagem’.) 

This was a revolutionary approach to thinking: only by interrogating your hypothesis and exploring the opposing possibilities can you be confident about your theory. It also carries the implication that one day, when we do get an observation that contradicts our model, our standing theory will be proven wrong. Popper, however, saw every falsified theory as “good”, because ruling out a possibility puts us closer to the truth. This is perhaps the beauty and the complexity of the scientific method: it’s a long, iterative process of elimination, but we get closer to understanding the world, one step at a time.

Being comfortable with uncertainty

While we now have some philosophical grounds to determine which knowledge is more scientific than another, this inevitably opens up another eerie reality: uncertainty. Scientifically, you can be confident that something is most likely a correct representation of the truth – but this is far from the solid explanatory fact – THE truth! – we all crave. After all, being confident about a theory still leaves room for the possibility of exceptions. Science is inherently uncertain.

Nevertheless, the volatility of “truth” is arguably the very appeal for aspiring scientists. You are off on an adventure into the unknown, using whatever tools best capture the mysterious nature lying in front of you. While this is a romanticised view from a reader of science like myself, I can easily imagine that, to anyone outside the adventure crew, we can appear a bumbling mess at times or, worse, untrustworthy contrarians and hypocrites.

Many modern scientific achievements progressed through the process of falsification. This ultimately means denying at least part of the former discoveries, and thus individual reports across a timeline may appear contradictory. Even if this is inevitable, and the reality is often more nuanced than a black-and-white “contradiction”, we must admit: flip-flopping on conclusions appears unprincipled and possibly dangerous for fields that directly affect lives, like medicine. Imagine you asked a friend for advice. This person says one thing, but then next month does a complete 180 and even stacks up evidence on why you were not supposed to do that. The explanation may be perfectly factual and logical, but emotionally the question of whether you can trust this person takes centre stage – because it ultimately makes YOU responsible, requiring you to put in the work with your thinking cap on.

In some circumstances, this might feel liberating. You are in charge of your own thoughts, guided by a knowledgeable friend with references to back up different claims. But what if you’re at your lows, or in a desperate situation? This is too much – you might even find this friend irresponsible, because you wanted advice you could simply follow. In such a scenario, another friend – a more charismatic, authoritative figure who TELLS you exactly what to think, with absolute certainty – starts to look attractive, and taking that red pill is all of a sudden more appealing.

So… is science doomed to lose the battle for trust against pseudoscience?

Through the lecture, we explored further case studies of pseudoscience, attempts to change people’s minds, and personal experiences shared by Prof. Chang. But for today, I would like to round off with one of our key ongoing questions at CUSAP: What can science as a community do to be more trusted?

There is unfortunately no simple or singular answer (and yes, this is a cheat code academics use, to the frustration of the rest of society, sorry). However, I think scientists themselves have a lot more we can do to change how society perceives science. I will even call it mandatory for scientists to care about this question and about public communication of our work. Combatting this distrust should be of paramount importance to our profession, because we didn’t go through the painstaking process of all these degrees to be branded as fraudsters, right? Are we not searching for truth to better understand the world? To better the world as we know it?

Partly, I declare this because I understand that it is not easy. Despite how Hollywood may portray “academic figures” like Nazi-fighting Indiana Jones or the tech superman epitomised by Iron Man, science IS NOT THAT GLAMOROUS! Spending 4-6 hours (or more….) tending your cells (or counting them, god forbid), running around the field, and giving up weekends to feed your cultures – you might be surprised at how much “tedious” stuff takes place under the hood. Of course, for some, every moment of this work might be pure joy – because they LOVE what they do. But for many, like all jobs, there can be bits that we don’t particularly love.

But then what keeps us going is the sheer belief that this search into reality will mean something. For medical researchers, this may be our health. And how ironic would it be if we were stuck in our ivory tower, preaching our findings to better society, and no one trusted what we say? Maybe it’s time to say goodbye to the genius-scientist archetype and the heroic tech-bro myths, take off this facade of “logical (and thereby superior), non-emotional intellectuals”, and embrace our emotional drive. Because we care about facts AND emotion, for we love and care about the world of wonders – including everyone that inhabits it 🙂

If this kind of science philosophy is your thing, or you just want to learn some cool new misconceptions surrounding science (and trust me, just being a STEMM student does not make you immune to these!), consider joining the CUSAP mailing list or following us on Instagram. Come say hi at one of the upcoming events if you can make it.

Campaign Catch-22: Why Election Campaigns Fail at Changing Minds

Author: Mohith M. Varma (Social Secretary)

U.S. election campaigns rank among the most expensive in the world. Yet even with vast investment in advertisements, rallies, and canvassing, election outcomes often depend on just a few battleground states, while most American states hold firmly to a given political direction. Why does this happen? Why is it that, despite billions of dollars spent to induce a shift in how people think, voters often appear to have already made up their minds and refuse to budge?

At the heart of this mystery lies a possible contributor: an evolutionarily acquired aspect of human nature that may once have served us well. Consider our ancestors, in a time when survival hinged on swift choices. If you were part of a small tribe and a sudden threat approached (say, a wild animal or a rival group), quickly committing to a decision could be the difference between life and death. Doubt, hesitation, or second-guessing could have catastrophic consequences. The inclination to accept initial judgments and avoid uncertainty proved so practically adaptive that it became an evolutionary imperative for survival.

In modern life, this psychological wiring still affects us. Once we have made a decision about a candidate or political position, we are predisposed to defend it. Changing views can feel like admitting to a prior mistake, and being presented with evidence that contradicts our beliefs is deeply uncomfortable, especially when those beliefs are tied to our identity and social ties. This feeds confirmation bias, which is essentially the tendency to seek out only evidence that backs previously held views rather than evidence that may contradict them. If you have already made up your mind that a candidate is not a good fit, you are likely to disregard any evidence pointing in the opposite direction.

In a polarised political climate, people often surround themselves with like-minded others and maintain their beliefs instead of questioning them. This is magnified even further in the age of social media. We select which information to see in our news feeds (an effect further amplified by social media algorithms), we add friends with whom we agree, and we join groups that confirm what we already hold to be true. This creates an echo chamber, in which alternative views are not only ignored but often actively discredited. Hence, despite vast sums of money spent on campaign ads, the impact may be small. In fact, exposure to information that challenges existing opinions can sometimes cause those opinions to harden further, a phenomenon known as the backfire effect. In other words, the more forcefully a campaign seeks to change a person’s opinion, the more entrenched that person may become.

Moreover, identity politics is another aspect that cannot be sidestepped. Associating with a particular party is increasingly part of people’s identities today. The political gap is not just one where the two sides have different policies, but one where each belongs to a bigger cultural movement. For many, changing sides is seen as abandoning some greater group within society, and that can be quite uncomfortable. In fact, the billions of dollars spent to run presidential campaigns are not just about swaying minds but about galvanising the base. In states that are reliably red or blue, campaigns may basically be about holding each front in turn, sustaining levels of support, and ensuring turnout.

This brings us to the question: what can be done to help people break free from these entrenched positions? How can we encourage more thoughtful decision-making among voters?

Fostering open-mindedness is an important first step. When people feel appreciated and understood, they are more likely to change their beliefs. We need to establish forums for genuine dialogue rather than merely criticising the other side or denigrating those who hold different opinions. Listening to others without judgment and understanding their concerns can help bridge the divide, making it easier for people to reassess their views. This reflective form of empathy can result in better, more nuanced decisions. By contrast, interactions that devolve into personal attacks, common in polarised debates broadcast on news channels and even in social media posts, tend to reinforce divisions rather than promote reflection or change.

Misinformation is one of the main factors that strengthens deeply entrenched views. News and social media can form echo chambers where users opt to see only content confirming their beliefs. Campaigns could do more to directly address misinformation by promoting fact-checking initiatives, partnering with trusted sources, and encouraging voters to seek reliable, evidence-based information. Another way to prevent the spread of misleading or deceptive narratives is to be transparent about campaign messaging, for example by clearly stating the source and backing for policy proposals.

Lastly, we need to invest in an informed electorate. Voter fatigue resulting from the sheer volume of political commercials and messages can lead some people to disengage or withdraw from participating altogether. Campaigns would be better served by materials focused on the concrete issues in question, rather than on negative advertising or politically driven narratives.


Although these solutions may appear idealistic, they are by no means impossible. The key to implementing any of them successfully will be sustained effort. In that sense, these approaches are not sensation-grabbing; rather, they require a generation-long, systemic shift in our culture toward making informed, participatory, and more reflective voters. It is going to take time, resources, and effort, but if the right strategies are put in place, change is not only feasible, it is critical for a healthier, more effective democracy.