Diving into Milk Myths

Shining a light through the murky waters of milk

Joris Driepinter, a fictional character from the Dutch milk campaign in the 1960s and 1970s

Author: Meira van Schaik (CUSAP Blog Chief Editor)

If you grew up in the Netherlands, you probably remember the famous slogan “Melk is goed voor elk” (“Milk is good for everyone”). Dutch parents – mine included – accepted it as gospel, thanks in part to decades of government-supported dairy propaganda. The U.S. has its own version of the same story: federal dietary guidelines promoting three servings of dairy a day, government-funded “Got Milk?” campaigns, and even literal cheese vaults where the USDA stores over a billion pounds of surplus cheese because the government buys up excess dairy to stabilize the industry.

Milk is not just a beverage. It is also a government-backed project. But is it actually “good for everyone”? Or have we been swallowing a convenient myth?

The Nutrient Argument: True but Incomplete

Milk undeniably contains beneficial nutrients: protein, phosphorus, calcium, and vitamins such as B2, D, B12, and, if full-fat, vitamin A. Some studies show that dairy intake is associated with a slightly reduced risk of colon cancer, though calcium supplements show a similar effect, suggesting that the benefit comes from the calcium rather than from dairy itself.

But the story gets more complicated when we look at one of milk’s most persistent claims: Is dairy essential for strong bones?

Despite decades of such messaging, studies comparing fracture risk between high and low milk consumers have produced inconsistent results. One large Swedish study even reported higher hip fracture risk and mortality in women with high milk consumption. International comparisons show the same: countries with the highest dairy consumption do not have lower bone fracture rates than countries where milk drinking is minimal.

Calcium is indeed important, but it can be found in many foods. And while the vitamin D and phosphorus found in milk help calcium absorption, the notion that calcium cannot be absorbed from plant-based sources is untrue. Although some plant foods contain absorption inhibitors (like oxalates and phytates), there are several plant sources with calcium bioavailabilities comparable to or exceeding that of milk.

The Sour Side of the Dairy Story

A major, often overlooked reality is that around 70% of the global population is lactose intolerant. For these individuals, milk is not a nutrient-rich “superfood” but a common trigger for bloating, cramps, gas, and diarrhea. Promoting milk as universally healthy ignores the biological fact that the ability to digest lactose into adulthood is a relatively uncommon genetic adaptation rather than the global norm.

Beyond lactose intolerance, dairy carries additional health considerations. Milk allergy, especially in infants and young children, remains one of the most common food allergies worldwide. Full-fat dairy also contains saturated fat and naturally occurring trans fats that may contribute to elevated cardiovascular risk when consumed excessively. Environmental contaminants such as dioxins, heavy metals, and pesticide residues can enter the milk supply through feed and agricultural conditions. Hormonal exposure adds another layer of complexity: while the European Union bans hormone use in dairy cows, the United States permits certain hormone treatments, fueling ongoing debate about potential long-term effects.

A worrying new trend promoted by some “wellness” influencers is the resurgence of raw milk consumption. Proponents claim it’s more natural and therefore healthier, but this is misleading at best and dangerous at worst. Raw milk can harbor harmful pathogens, including Listeria, E. coli, Campylobacter, and Salmonella, which can cause serious illness. Pasteurization, invented by Louis Pasteur in the 1860s and refined for milk in the early 20th century, was a major breakthrough precisely because it made milk safe to drink. There is no reason to romanticize pre-pasteurization practices; drinking raw milk today is a step backward for public health.

There is also an ongoing scientific discussion about dairy and cancer risk. Some observational studies report that high intakes of cheese or high-fat dairy are associated with increased breast cancer mortality, and a large meta-analysis has found that higher dairy consumption correlates with an increased risk of prostate cancer. These results do not prove that dairy causes cancer, but they do complicate the simplistic idea of milk and dairy products as universally beneficial foods.

The Animal Welfare Reality Behind the Carton

Most consumers never see the system that produces their milk, and the reality behind it is rarely discussed. To produce milk continuously, cows must give birth roughly once a year, and nearly all of these pregnancies are achieved through artificial insemination. Shortly after birth, calves are usually separated from their mothers (often within hours), a practice that causes significant distress to both animals. Female calves are raised to become the next generation of dairy cows, while male calves are typically slaughtered for veal or beef. Numerous reports describe mother cows calling out for their calves for days after separation.

Although a cow could naturally live for around twenty years, her life in the dairy industry is far shorter. Most are slaughtered at four to six years of age, when declining milk production makes them less profitable. Throughout their lives, many cows spend most or all of their time indoors; some remain tethered in place, and space or bedding can be minimal. The physical demands of producing such high volumes of milk contribute to common health problems, including lameness, reproductive complications, and mastitis, a painful udder infection. Routine procedures such as dehorning, which in some settings may still be performed without adequate anesthesia, add further stress.

In the United States, the use of rBST (recombinant Bovine Somatotropin, a synthetic growth hormone banned in the EU and Canada) remains legal and is associated with elevated rates of mastitis and lameness. All of this occurs within an industry where modern US dairy cows now produce more than four times the amount of milk they did in 1945, the result of selective breeding and a system of exploitation designed to maximize output.

In the EU, overall animal-welfare legislation is among the world's strictest, and individual countries and industry groups have developed welfare criteria for dairy cattle, though EU-wide standards for dairy production are limited and often rely on voluntary certification schemes rather than binding regulation. While existing rules set basic welfare requirements (such as adequate feeding and space), common industry practices, including early separation of calves from their mothers and slaughter of cows once milk production declines, remain legally permitted.

Environmental Costs: A Global Footprint Hidden in Plain Sight

Dairy farming carries a significant ecological footprint across the world. In major dairy-producing regions, such as the Netherlands, cows generate enormous amounts of manure, often far beyond what the local land can naturally absorb. When this waste builds up, the excess nitrogen can damage soil health and aquatic ecosystems.

The dairy industry is also a major contributor to methane emissions. With more than 270 million dairy cows worldwide, the sector releases large quantities of this potent greenhouse gas, and countries that are serious about fighting climate change will have to find ways to reduce these emissions. Dairy methane has begun to feature in climate policy discussions: several major dairy companies pledged at COP28 (the 28th UN Climate Change Conference) to publicly disclose their methane emissions, and the EU has approved feed additives aimed at reducing enteric methane from cows. Nevertheless, these companies are not bound by specific reduction targets, and such voluntary measures remain far too limited to cut the sector's methane emissions at the scale and speed needed to meet global climate goals.

Land use further amplifies dairy's environmental impact. Millions of hectares are devoted to growing feed crops such as corn and soy, relying heavily on fertilizers, irrigation, and pesticides. In tropical regions, large-scale soy cultivation tied to global livestock demand has contributed to deforestation in places like the Amazon. Meanwhile, in dairy-producing countries themselves, natural grasslands and diverse ecosystems have been replaced by large dairy operations, accelerating habitat loss.

These issues illustrate how the modern dairy industry affects ecosystems far beyond the farm itself, harming the environment on a global scale.

Plant Milks: A Good Alternative?

Critics often claim that plant-based milks lack nutrients and therefore cannot replace cow’s milk. This overlooks the fact that the nutritional profiles of plant milks vary widely, and each offers different benefits depending on someone’s dietary needs and preferences.

Oat milk, for example, is frequently criticized for being high in carbohydrates, yet those carbs come with soluble fiber that lowers the glycemic index and supports digestion. Most commercial oat milks are also fortified and provide minerals such as calcium, potassium, and iron, making them more nutritionally robust than critics suggest. 

For those seeking a low-calorie option, almond milk is typically the most suitable. It is also low in saturated fat, high in vitamin E, and often calcium-fortified. However, it is important to keep in mind that almond milk contains much less protein than other types of milk.

Soy milk remains the closest nutritional match to cow’s milk, offering complete protein and, in fortified versions, similar levels of calcium and B12. Despite this, soy is one of the most heavily demonized plant-based substitutes. The long-standing myth that soy increases breast cancer risk is outdated; the phytoestrogens found in soy are not the same as human estrogen and do not have the same effects on our bodies. These plant estrogens (or isoflavones) are very weak compared with human estrogen. They bind much less strongly to estrogen receptors and tend to interact with a receptor type (ER-beta) that is linked to prevention of excessive cell proliferation. In fact, one North American cohort study found that replacing dairy milk with soy milk was associated with a lower breast cancer risk. Similarly, a study in Japan showed that consumption of soy products and isoflavones was linked with a decreased risk of prostate cancer in men.

Environmental criticisms of plant milks also tend to be misplaced. For example, almond milk has been criticised for its large water requirement, but at 371 litres of water per litre of almond milk, this is still far less than the global average of 628 litres needed for one litre of cow's milk. Furthermore, while soy cultivation is linked to deforestation, nearly all of that soy is grown to feed livestock, not to produce soy milk. Consuming soy directly is far more land- and resource-efficient than first feeding it to animals and then consuming their milk and meat.

So… Is Melk Really “Goed voor Elk”?

Milk is undeniably nutrient-rich and can be part of a healthy diet for certain people. But the blanket claim that milk is essential and universally beneficial is outdated. Milk is not good for the majority of the world's population, who are lactose intolerant. It's not good for the cows, whose exploitation fuels the industry. It's not good for the environment, given its substantial methane emissions. And its health benefits are more nuanced than the dairy lobby suggests.

The real question isn’t whether milk is “bad”: it’s why we’ve been told for generations that it is universally good. Perhaps the answer has less to do with human health, and more to do with politics, economics, and a very powerful dairy industry.

War on Paracetamol

Author¹: Isha Harris (Co-President)

Paracetamol doesn’t get nearly enough credit as a wonder drug. While not as acutely lifesaving as penicillin, the quality of life improvement multiplied by the billions of people who use it means that paracetamol offers a pretty insane contribution to human wellbeing.

At any hint of a headache, I pop a couple pills, and am sorted out in 20 minutes. This saves me a day of pain, and the accompanying physiological stress – the blood pressure spikes, heart rate increases, and general bodily strain that prolonged pain can cause. It’s possible I go overboard with the paracetamol: before an exam, I usually take a few just in case a headache strikes. There’s probably a <1% chance of this happening, but given the huge stakes of remaining headache-free for the exam, I figure it’s worth it. I’ve also carefully optimised my coffee regimen, balancing the optimal buzz with avoiding bathroom breaks. So I arrive at every exam drugged up, ready to lock in. Maybe it’s just the placebo effect of feeling like I’m doping, but if it works it works.
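(For the decision-theory nerds: the pre-exam trade-off above is just a tiny expected-value calculation. The sketch below makes it explicit in Python; every number in it, from the 1% probability to the hours lost, is an illustrative assumption of mine, not data from any study.)

```python
# Back-of-envelope expected-value sketch for the pre-exam paracetamol habit.
# Every number below is an illustrative assumption, not a measured quantity.

p_headache = 0.01              # assumed ~1% chance a headache strikes mid-exam
hours_lost_if_headache = 3.0   # assumed exam time effectively ruined by a headache
cost_of_one_dose_hours = 0.01  # assumed near-zero downside of one occasional dose

expected_loss_skip = p_headache * hours_lost_if_headache  # skip the preemptive dose
expected_loss_take = cost_of_one_dose_hours               # take the preemptive dose

print(f"Expected loss without preemptive dose: {expected_loss_skip:.2f} exam-hours")
print(f"Expected loss with preemptive dose:    {expected_loss_take:.2f} exam-hours")
# A small probability times a large cost can still outweigh a near-zero-cost dose.
```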

This habit has been received extremely badly by friends and peers. Most people have a much higher threshold for taking paracetamol than me. They gasp at my willingness to take it for ‘minor’ discomfort, and if I suggest they do the same, I’m met with various justifications: toxicity, tolerance, making the headache worse. Or the classic ‘just drink water’, as if hydration and medication are mutually exclusive. Instead of resolving their discomfort quickly and safely, they’ll endure hours of decreased productivity or outright misery.

I think this is quite bizarre, and have always just assumed they were wrong and continued to sing paracetamol’s praises. But this is admittedly quite vibes-based of me, and as a good empiricist, I figured it was time to look into the data before I continue to assert that I’m right. Here’s what I found.

On paracetamol toxicity:

  1. For patients without prior health risks or sensitivities, paracetamol causes few to no significant side effects at recommended doses; the immediate effects of a single dose are slight. For example:
    • 4 mmHg BP increase in already hypertensive patients. Ref
    • ALT (a liver enzyme) levels rise slightly, but this is comparable to the effect of exercise. Ref
  2. Prolonged, daily use at maximum dosage *might* pose risks. Long-term use has been linked to possible increases in blood pressure and cardiovascular events, though findings are inconsistent. For example:
    • Using paracetamol for more than 22 days per month was associated with a relative risk of cardiovascular events of 1.35 in smokers, but showed no increased risk in non-smokers. Ref
    • Some studies suggest a potential association with cancers like kidney and blood, but again, evidence is limited.
  3. Medication overuse headache, or ‘rebound headache’, is a genuine risk for very frequent users. With time, regular overuse can lower your baseline pain threshold, leading to persistent, often severe headaches that don’t respond well to analgesics. It can be seriously disabling. But in the case of paracetamol and ibuprofen, MOH typically only develops after taking the drug on 15 or more days per month for months or years, which is significantly more frequent than the occasional use I describe.
  4. Paracetamol is safer than other painkillers. Ibuprofen, while still extremely safe, has higher risks of stomach irritation and other adverse effects. Ref
  5. Overdosing is very dangerous. Paracetamol has a narrow therapeutic window, meaning the difference between an effective dose and a toxic one is small. Excessive intake can cause severe liver damage. Ref

Some other common myths:

“It interferes with your fever, which we’ve evolved for a reason.”

  • The data suggests paracetamol might only slightly prolong the duration of an illness (a few hours), if at all. Ref

“You’ll build a tolerance, and it won’t work anymore.”

  • I couldn’t find any studies at all that suggest paracetamol tolerance.
  • Paracetamol works via COX enzyme inhibition, not via receptors the way opioids or caffeine do, so tolerance couldn’t develop by the same mechanisms anyway.

“Pain is natural, and good for you! It’s better to let your body build resilience.”

  • While much is said about the risks of taking paracetamol, few people talk about the cost of untreated pain.
  • Pain isn’t just unpleasant – it’s physiologically damaging. Ref It triggers the stress response, engaging the sympathetic nervous system and releasing adrenaline, which raises your heart rate and blood pressure. And it makes us miserable – mental state is a huge, and overlooked, predictor of human health.

In conclusion, paracetamol is incredibly safe when used correctly. Occasional, moderate use – like my once-a-fortnight headache relief – is nowhere near the thresholds associated with risk.

Purity culture

I think that the aversion to paracetamol is a symptom of modern purity culture. There’s a growing tendency to glorify ‘natural living’, and to believe that struggling through life without help from modernity is something we should strive for. I disagree – enduring pain unnecessarily doesn’t make you virtuous; it’s just bad for you.

There are plenty of other examples.

  • Reluctance to use epidurals during childbirth. And the rise of home births. Epidurals are safe; home births are not. But people have got it the wrong way round, because they assume natural = good.
  • Washing your hair less is good for it. I too was taken in by this as a teenager, enduring greasy hair and being miserable for days. But one day I remembered I have free will, and didn’t actually have to live like this. And I have seen no difference in my hair whatsoever.
  • The ChatGPT backlash. Camfess is currently embroiled in AI debate, with Cantabs coming up with all kinds of bizarre reasons to be against it (water/energy use, Big Tech and capitalism is bad, sanctity of art, weird claims about training data being exploitative).

The obsession with preserving ‘sanctity’ is maddening. Clinging to tradition for its own sake; suffering through inefficiency for strange abstract reasons of nobility. I hear this depressingly often from my fellow medical students, who claim that a future of AI in medicine threatens the sanctity of the patient-doctor interaction. But if AI can deliver zero wait times, more accurate diagnoses, and better outcomes (as the evidence suggests it can), doctors are Hippocratically obligated to endorse its rollout.

I have a hunch that this purity culture is a legacy of religion, which has a habit of resisting perfectly benign pleasures, like masturbation, for no reason. A lot of people around me are turning to Buddhism (Ref), whose whole shtick, as far as I can tell, is arguably the endurance of suffering. Each to their own, but it doesn’t seem like a very pleasant life, or really that necessary.

Humans have always resisted change, clinging to the familiar even when it doesn’t serve them. It’s why progress, whether in technology or social norms, is so often met with opposition. This is even true amongst many progressives, who are bizarrely circling back to conservatism on many fronts. The vast majority of the AI luddites I have encountered are leftists.

It’s such an exciting time to be alive. Technology and medicine make our lives easier, freeing up time and energy for productivity – or simply pleasure. So embrace it! Life is for living, not enduring. This means using the tools available to us, and supporting innovation to make even more.

The moral of the story: don’t lose an entire day to a headache. Pop that paracetamol.


  1. This article was originally posted on the Co-President’s personal blog and adapted for publication here on the CUSAP blog. ↩︎

Plague doctors were onto something?? (albeit for the wrong reason)

Author: Maya Lopez (Blog Chief Editor)

On June 13th, 1645, George Rae was appointed as the second plague doctor in Edinburgh, following the first doctor, John Paulitious, who had died of, well, plague. While plague was already endemic in 17th-century Britain, this outbreak was one of the worst: the eleventh major outbreak in Scotland. (South of the border, this era of recurring plague would culminate two decades later in the Great Plague of London, so named because it was the last outbreak of that scale, not because its death toll exceeded earlier iterations.) With death tolls in the city of Edinburgh rising (they would ultimately climb into the thousands by the end of the outbreak), it was not particularly surprising that the doctors themselves would die from contracting the plague. Such an (increasingly) high-risk job naturally saw a salary raise, culminating in a whopping 100 pounds Scots a month by the time Dr. Rae was appointed. However, Dr. Rae survived his term, and so he was paid his promised salary only slowly, after negotiation, over the decade following the end of the epidemic. This is not to say that the city council provided a generous pension for his civil service; rather, the council simply did not have the cash to pay him on the spot because, well, they didn’t expect him to come out of the epidemic alive! (It is believed Dr. Rae never received his full share in the end.)

So was he just a lucky soul with a super immune system? When I heard the tale of this man who once walked the narrow streets of Mary King’s Close, Edinburgh, I was fascinated by his secret of survival against a disease that, in its bubonic form, gave you roughly a 50:50 chance of survival, and in its pneumonic form, well… was nearly always lethal with the treatment options available at the time. To me, this said one thing: he evaded infection itself. But how? He was actively going out of his way to inspect the sick, and those ultra-narrow, multi-storeyed alley-houses are not what I would call a well-ventilated environment. His most likely secret (though of course he may simply have had excellent health and a strong immune system) was none other than the iconic symbol of the plague doctor: the outfit.

How they thought you could catch the plague in the 17th century

Let’s go back a step to the 17th-century body of knowledge about the plague. By this point, plague had been endemic for centuries, with repeated outbreaks, so it was not a completely foreign disease in Europe. And while the Renaissance and early Enlightenment were slowly making up for the knowledge lost and delayed during the European Middle Ages, a lot of medical knowledge was still based on classical antiquity and medieval teaching, which naturally framed how the mechanism of the plague was perceived. Plague was thought to spread via miasma, an abandoned medical theory in which “poisonous air” (often foul-smelling) carries the disease. This theory was deeply rooted throughout the Middle Ages and was the predominant explanation for outbreaks of various contagious diseases (like cholera, chlamydia, or the Black Death) before the advent of germ theory. For plague specifically, miasma theory was further combined (?) with astrology in 14th-century France to elaborate on its mechanism: a 1345 conjunction of “hot planets” (apparently Mars, Saturn, and Jupiter… don’t ask me why) in the zodiac sign of Aquarius (a wet sign!… whatever that means) supposedly caused unnaturally hot and moist air to blow across Asia toward Europe, leading to the catastrophic Black Death. While I’m not sure whether such a cosmos-level mechanism was described for EVERY plague outbreak, the general view was that the disease came from something bad carried on pestilent air, and this naturally shaped how prevention was approached.

When it comes to how people thought the plague manifested in the body, the explanation was usually based on humorism. This is yet another abandoned medical system, one that originated in ancient Greece and was upheld across Europe and the Middle East fairly consistently for nearly 2,000 years, until, again, cellular pathology explained things otherwise. It is a fairly complex system (and I am NOT going to explain the full details today), but essentially the plague, like many diseases, was thought to be the bodily result of imbalances in the four humors that constituted our bodies. In particular, doctors noted that bubonic plague results in bubo formation (the stereotypical pus-filled swellings), especially around the groin, armpits, and neck, and saw this as evidence of the body attempting to expel bad humors from the nearest major organs. This led to historical treatments focused on “expelling” these bad humors through bloodletting, or on diets and lifestyle coaching meant to rebalance the humors (like cold baths plus avoiding “hot foods” such as garlic and onions (???), apparently). Some doctors (and religious services?) were also said to provide additional services for a fee, which might include potions and pastes, but as far as I can see, by the 17th century the more “out of the box” remedies, like the “Vicary Method” (look it up at your own discretion, but it essentially involves somehow transferring the disease to a chicken in a rather graphic way, until either the person or the chicken died), seem to have fallen out of popularity. However, when these measures weren’t enough and bodies were piling up (which unfortunately was often the case with outbreaks), the effort generally focused on prevention rather than treatment. Traditional approaches included household-level quarantine, routine searches and removal of the deceased by council-appointed services, the burning of “sweet-smelling” herbs to combat the evil scent, bans on public gatherings, and the killing of cats and dogs (which, as we will see, was not just horrible but may have made the situation even worse).

How to catch a plague (according to science)

But okay, what REALLY causes the plague, and what do we know about this disease? You might have some vague idea that it has something to do with rats, which is not completely wrong, but the real mechanism is essentially a blood-borne, vector-transmitted disease, which is pathology lingo for a germ-caused illness transmitted through blood. Blood? Well, not necessarily just human blood, so let me draw you a picture, as I heard it on one of my favorite podcasts. One hungry flea jumps onto a rat for a blood meal. But oh no, this rat has Yersinia pestis (the real culprit bacterium behind the whole massacre) in it! The bacterium gets into the flea and multiplies in its tiny stomach. Within 3-9 days, this poor little flea, now hungry again but super queasy from the bacteria overflowing in its tummy, tries to take another blood meal from a new rat it has landed on and ends up throwing up rat blood and bacteria, now in quantities of 11,000-24,000 Y. pestis. Once back in a mammal, the pathogen enters a different phase of its life cycle: it invades the lymphatic system and multiplies until the infection eventually spreads to the bloodstream and on to the liver, spleen, and other organs. This bacterium can infect over 200 species, but the flea’s primary hosts, such as black rats (Rattus rattus), tend to have mild resistance. This may allow for asymptomatic carriers (i.e. the immune system keeps bacterial replication and symptoms at bay), and given rats’ relatively high replacement rate, natural infection seems to be less of a problem for them. (And see? This is yet another reason why we should have kept the cats around to keep the rats at bay!) When the infection reaches humans, however, the story is different.

In Homo sapiens, the disease can manifest in three ways (depending in part on how you contract it): bubonic, septicemic, and pneumonic. In bubonic plague, after an incubation period of 1-7 days, the infection spreads to the lymph nodes, leading to the infamous buboes: the swellings we discussed earlier, which doctors observed and which are essentially incubators full of bacteria and pus. (And yes, this is the form most people probably picture when they imagine a plague patient.) With this type, you actually had roughly a 30-60% chance of survival despite the horrendous visuals (more on this later). These patients often also experience other symptoms like fever, chills, head and body aches, vomiting, and nausea. Septicemic plague is the version where the bacteria (say, those overflowing from the swollen lymph nodes, or introduced by a flea bite directly into the bloodstream) enter the blood, resulting in sepsis. Like most sepsis, left untreated it is almost certainly lethal, with a mortality of 80-90%. At this stage, the infection, as well as the buboes themselves, can cause localized necrosis, where tissue, usually in the extremities such as the fingers, feet, and nose, dies and turns black (hence the name “Black Death”).

This is nasty enough, but the scariest variant is probably the pneumonic plague, which, unlike bubonic plague, does not form the characteristic swellings. Fundamentally, to contract the two earlier variants, infected blood needs to get into you, either via a flea bite or through extensive contact with buboes. Pneumonic plague, however, can also be contracted as an airborne disease. The infection takes place in the lungs, producing infectious respiratory droplets that can be transmitted directly from human to human. Furthermore, while pneumonic plague patients are said to be most infectious in the end stage of their illness, the incubation period is really short (around 24 hours), and without modern medical intervention (i.e., antibiotics!) the mortality approaches 100%.

Time to call the plague doctor in their OG hazmat suit

So let’s say you’re a poor soul who, after hearing this story, is sent back in time to the 17th century. You notice the early symptoms of chills and fever, and the buboes are starting to form (which, according to some horrific accounts, even gurgled!). Time to call the doctor; but if they don’t know the actual cause and have no antibiotics at hand, what CAN they do for you? Besides, it’s not as if you need a diagnosis when it’s pretty clear what you’ve contracted, and you have such a high chance of dying at this rate. As described in the first section, what doctors could do to effectively treat an individual was limited; plague doctors were sometimes even seen as little more than callers of death, because by the time they came around, there was a good chance you would be declared too far gone and left waiting to die. However, for the neighbors and for public record-keeping, it was still a useful service for you to be identified and your house to be marked with a white flag showing that the household had succumbed to the plague. In other words, while these plague doctors were called “doctors”, they functioned more like public health workers (not surprising in this pre-medical-school era, when the credentials behind the beaked mask varied widely). While you suffer with fever, you hear the lucky news that Dr. Rae may in fact be able to offer a treatment (given that it appears to be bubonic plague), aside from all the humor-restoring bloodletting: lancing the buboes. This allows the “poison” to run out, after which the cleared wound is cauterized shut, sealing and disinfecting it. It was a high-risk treatment in itself, but you manage to survive.

But then you start to wonder: this guy literally just let the biohazard run out all over the place, so how does he manage to survive facing patient after patient? Despite all my debunking of plague treatment tactics in the previous section, this is where the plague doctors, and especially their attire, might have been on to something. Of his attire, the mask may be the most iconic piece, but it is also the one whose historical origin is most uncertain. If it was worn as depicted in mid-1600s drawings, however, a crow-like beak extending far from the face was filled with “sweet-smelling herbs” intended to fight off the “bad air”. Of course, this doesn’t quite work as they presumed, given that miasma theory was not true. A mask of this sort may have been better than no mask at all, simply as a physical filter, but honestly, the herb-based filtering system is probably not enough to filter out the bacteria in aerosol droplets coming from pneumonic plague patients (i.e., NOT the same standard as modern respirators and clinical masks). The cane used to inspect you without direct touch may also have given Rae a social-distancing measure to keep people away (presumably other sick-ish people in the streets… the ethics of that are dubious too, but it was a tough time, I guess?).

But the real deal is arguably the REST of the garment. In fact, Dr. Rae may have been pretty up to date in his PPE game for his era, given that the first description fully resembling what we think of as the plague doctor costume shows up in the writings of Charles de Lorme, physician to King Louis XIII of France, during the 1619 plague outbreak in Paris. He announced his development of a full outfit of Moroccan goat leather from head to toe, including boots, breeches, a long coat, a hat, and gloves. The garment was infused with herbs, just like the mask (because, of course, miasmas!). Whether full credit for this now-iconic costume should go to Charles de Lorme seems to be a subject of debate. However, this leathery suit did one thing right: it prevented flea bites pretty well. So long as you were extra careful with how you took off this OG PPE (and didn’t breathe in droplets from a pneumonic plague patient), you had pretty functional protection at hand.

A broken clock is right twice a day: nothing more, nothing less

So it just so happens that Dr. Rae was unknowingly (though he may have had plenty of faith in his sweet herbs and leather suit) geared up to protect himself against the actual culprit behind the plague. Naturally, I found this to be an emblematic tale highlighting the importance of the correctness of a theory’s supporting facts and logic, which is indeed a crux of modern science and academia. This may sound obvious, but it’s an important reminder for anyone who ends up in a pseudoscientific line of thinking (which could be any of us!): just because some specific outcome of a belief system happens to work, the supposed mechanism behind it is not automatically correct. Clearly, with germ theory having falsified miasma theory, the leather hazmat suit cannot be used as evidence that miasma theory is correct: it simply stopped the fleas from biting. Conflating a partial truth with the correctness of the whole theory is perhaps a philosophical problem as well, given that human nature makes it easy to confuse what is happening with what ought to happen.

But there is also a lesson here for skeptical thinkers: just because something was established within, or mixed into, pseudoscientific rhetoric, the individual practices, claims, or results are not automatically entirely false. This is a moment for all of us to be honest with ourselves: have we previously dismissed a practice or an idea purely because of the way it was presented? Of course, this is not to say that we should actively praise every little kernel of truth mixed into pseudoscientific rhetoric, which could end up lending it undeserved credibility; heck, mixing in kernels of truth is a tactic that “sciencey” writers can employ as well. However, if we decide something is pseudoscientific based on when, who, where, or the context rather than the content, isn’t that attitude itself in the very nature of pseudoscience, letting our preexisting notions and biases determine the lens through which we view “truth”? So instead of praising individual kernels of truth, let’s acknowledge them for what they are (correct), while being able to say in the same breath: that doesn’t mean the rest is correct, because of such-and-such, or because it hasn’t been tested. This is an intentional style of communication that requires more effort, and if done badly it may still come across as dismissive debunking, which could spiral pseudoscientific believers into more pseudoscience. Therefore, let us practice this fine-resolution distinction between science and pseudoscience and use it to PIVOT conversations, so that we can invite everyone into a factual exploration driven by intellectual curiosity, instead of saying things like “medieval doctors had no clue about bacteria (indeed), so they did everything wrong” (see the issue there?).

And after all, it is important to acknowledge the intention behind some of this pseudoscience and outdated knowledge. It does not always come from malicious intent, unlike some disinformation spread by people who could, or DO, actually know better, which deserves to be tackled with far more fury than these plague scenarios. Miasma theory, for example, can in a broad sense still be seen as an attempt to conceptualize contagious disease: it was a protective survival instinct justified with the logic available at the time, and a rotting smell is probably a bad sign anyway. Humorism (which is bona fide pseudoscience by the standards of modern medicine) was also wrong and largely unscientific, but it was perhaps an attempt to think about nutrition and hygiene practices. So they were wrong, but people were trying to survive, and especially when modern scientific tools and methods of investigation were unavailable, I find something beautiful in humanity still managing to land on “tried and true” methods containing some kernel of truth that did protect lives, alongside the many missteps that cost lives. It is a history of H. sapiens grappling for truth in order to survive. Acknowledge, and then explore further: now we know more about these pesky diseases, and we even know why some parts were wrong and why some parts were right! So keep thinking, keep asking, and keep talking, and don’t be too scared of correcting or being corrected; let us all appreciate our inner scientists and our desire to approach the truth. And of course, don’t forget to wear adequate PPE (maybe not a leather mask and suit in this day and age) when you are a bit under the weather and want to keep your friends safe. Let the fresh air in and ventilate: maybe not to clear out miasma, but to circulate the air and keep virulent particles at bay. And as my favorite podcast always says, “Wash your hands, ya filthy animals!” 😉

Recommended Listen/Watch:

Amazing podcast series by two scientists: Erin and Erin.  This episode is a major source of the historical and biological information in this article:

https://thispodcastwillkillyou.com/2018/02/10/episode-5-plague-part-1-the-gmoat/

Something shorter and eye-catching? This video will probably give you a big appreciation of all the illnesses our ancestors were often combating, and of how lucky we are not to have to face them as much, or at all! (It can get visually horrific, so please watch with caution.)

https://www.youtube.com/watch?v=6WL5jy2Qa8I