Can we buy clean air?

Air Pollution in Mumbai, photo taken by Kartik Chandramouli

Author: Arjun Kamdar

Over the last few weeks, air pollution in India has reached extremely hazardous levels. On several days, seventy-four of the hundred most polluted cities in the world were in India. This is a serious crisis that is impacting everything that breathes, and it is rightly being considered a public health emergency. Among the many marketed solutions, one caught my attention and is deeply alarming: wearable air purifiers.

Searching for solutions

Understandably, this crisis has people scrambling for solutions. The idea of a small, high-tech device that one can carry around and that promises to purify the air has intuitive appeal. This little device costs about £30, comes via a glossy website, uses the word ‘scientific’ copiously, and is available in pastel colours. All that seems to be missing is a man in a lab coat smilingly recommending it as the ultimate solution. Fundamentally, it sells the idea that air can be privatised – that this little rock around one’s neck can create a portable halo, emitting negatively charged ions (anions) that ‘attack’ the bad particles to ‘purify’ the air. There is one major problem: it does not work.

Air as a public bad

For decades, India’s urban elite have shielded themselves from the failures of public systems. Healthcare, education, security, transport – most of these have been informally privatised. Those who can afford it buy their way out of poor public infrastructure.  

Air, however, is different. It is defined as a public good – or, in this case, a public bad. This means that it is (1) non-excludable (no one can be prevented from breathing it) and (2) non-rivalrous (one person’s use does not reduce availability for another). The textbook example of a public good is, ironically, a fireworks show: no one can be excluded from enjoying it, and one person’s enjoyment does not diminish the experience for anyone else. The same logic applies to air: we all share the same air, and it is impossible to contain it in one place or prevent someone from breathing it. There are no neat delineations between indoor and outdoor air.

Smog blankets buildings in Gurugram. Photo by Niranjan B.

The pseudoscience of wearable air purifiers

This is why air pollution demands collective action. No technological innovation can bypass this. While wearing masks or creating ‘clean air bubbles’ with indoor filtration systems based on robust technology like HEPA filters can help to some extent, eventually one must step outside or open a window. Wearable devices are marketed as the silver-bullet solution, despite there being no real evidence for their efficacy – neither in practice nor for what they market as “Advanced Variable Anion Technology”.

The scientific claims behind many of these products crumble when looked at closely. Companies cite ‘certifications’ and ‘lab tests’ from prestigious institutions like the Indian Institute of Technology (IIT), one of India’s top engineering and science universities – a well-chosen appeal for the target audience of India’s urban elite and upper-middle class. However, the referenced tests have a fundamentally flawed study design, with too few repetitions to carry any scientific weight. In some of these tests, an incense stick is burned in a sealed chamber, suggested to be representative of outdoor air pollution in India, and the reduction in ultrafine particulate matter is measured over time, without any control condition. Such designs fail at internal validity (establishing the mechanism of action) as well as external validity, since they ignore the complexity of outdoor pollution, which depends on wind, humidity, temperature inversions, particle composition, emission sources, and dozens of other factors – and, most importantly, on a seemingly endless supply of fresh pollution. These wearable devices may as well be a bunch of flashing lights.
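To make the missing-control point concrete, here is a minimal, purely illustrative Python sketch (the decay constants and readings are made-up assumptions, not measurements from any of the cited tests): in a sealed chamber, particulate readings fall on their own as particles settle and stick to surfaces, so a single ‘device on’ run that shows declining numbers is indistinguishable from doing nothing at all.

```python
import math

# Illustrative-only constants (assumed, not measured): fractional PM decay per minute.
NATURAL_SETTLING = 0.03   # particles settling and sticking to chamber walls
DEVICE_EFFECT = 0.00      # assume the wearable contributes nothing

def pm_level(initial_pm: float, minutes: float, extra_decay: float = 0.0) -> float:
    """Exponential decay of a PM reading in a sealed chamber."""
    return initial_pm * math.exp(-(NATURAL_SETTLING + extra_decay) * minutes)

initial = 500.0  # arbitrary starting PM2.5 reading after burning an incense stick

for t in (0, 30, 60, 120):
    device_on = pm_level(initial, t, DEVICE_EFFECT)
    control = pm_level(initial, t)  # identical chamber, device switched off
    print(f"t={t:3d} min   device on: {device_on:6.1f}   control: {control:6.1f}")

# Both columns fall identically: a reading that "drops over time" with the device
# switched on proves nothing unless it beats an otherwise identical control run.
```

The point is not the numbers but the comparison: only a matched control run, repeated enough times, can separate any effect of the device from natural settling.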

Some of these devices verge on the dystopian. One widely advertised model resembles a potted plant with plastic leaves, and its makers claim the technology will purify the surrounding air. The irony is stark. Some companies even offer these devices for corporate gifting.

Implications of misinformation

If these devices genuinely worked, or even showed promise, they would already be the focus of research and public health practice. Air pollution is not a novel challenge for humanity, and neither is the knowledge of ions. We understand these technologies well, and the reason they have not advanced further is simple: science has already shown that this is a dead end.

There are two critical implications of this misinformation. One, it is unethical and exploitative, and two, it can crowd out motivations for the systemic change that is needed to tackle this large challenge.

These devices exploit people’s vulnerabilities – the legitimate fears that people have for themselves and their loved ones are used to turn a quick buck. Selling untested gadgets during a public health crisis is a dangerous manipulation of public fear for personal gain. The burden of proof of the efficacy of these devices lies with the manufacturers; it is not the job of citizens or scientists to test them and gather evidence that they don’t work. This is recognised in sections 2(47) and 18-22 of the Consumer Protection Act, 2019, covering ‘unfair trade practices’ and the penalties for misleading advertisements.

Customers of these products I spoke with mentioned that while they are sceptical, “it might at least do something, if not as much as these companies promise”. While the sentiment is understandable, this is a very dangerous narrative. A proliferation of the flawed idea that clean air can be acquired through a quick, personal fix could weaken the pressure on the government to take action and enact the systemic reforms that are needed. This crowding out of motivation is a real threat to movements that require long-term action. Air pollution is a public and collective challenge, impacting everyone across all classes, and could therefore be a catalysing factor for demanding structural change. The misconception that a private, individual solution is possible hinders this, leading to a continuation of the status quo.

A member of parliament wore such a device, made by a company called Atovio, a few months ago – this explicit validation by a public figure, even if unwitting, only amplifies misinformation and lends these dishonest claims a misleading legitimacy.

Air pollution shrouds the streets of Mumbai. Photo by Shaunak Modi.

Can air pollution be solved?

It is not an intractable problem; Beijing faced similar challenges in 2013 as did Bogota in 2018. Both cities ramped up their efforts and managed to tackle the seemingly insurmountable challenge of air pollution through the evidence-backed combination of strict emission controls and regulatory enforcement, and transformative shifts in urban mobility and energy use. India can too. 

This potential is evident to India’s citizens; people from all walks of life and all classes have mobilised, organising protests and legal challenges across the country to confront this serious threat.

There is no technology yet that can privatise air. Structural changes in how cities and societies function are the only real solution, and until then air will remain a public bad. Some problems cannot be bought away.

Ig Nobel: The whimsy and the magic of science

Author: Maya Lopez (Co-President)

When the 2025 Nobel Prizes were announced last month, Cambridge’s science enthusiasts and news junkies alike were buzzing with excitement, discussing the laureates, dissecting the research, and tallying college wins. However, I noticed much less talk about the Ig Nobels, announced around a month earlier. Maybe because no Cambridge members were awarded this year? Or perhaps because it’s not serious enough? … Whatever the reason, today we will take a break from all the rigidity of science and the recent serious concerns around politics contesting science. Instead, let’s take a look at the whimsical research that is also… seriously science. As Nature once put it, “The Ig Nobel awards are arguably the highlight of the scientific calendar”.

Are Ig Nobel Prizes a real award?

This is one of the top Google searches for the keywords “Ig Nobel prize”. The answer? YES*. It is a very real award, ceremony and all, that has now been going for 35 years. But the “*” was not a typo: it is also, yes, a parody of the all-too-famous Nobel Prize, which probably needs no introduction (hence the name and the pun on “ignoble”). For those of you who are unfamiliar, the Ig Nobel Prizes have been awarded annually since 1991 by an organization called Improbable Research, with the motto “research that makes people LAUGH, then THINK”. This organization also publishes a “scientific humor magazine” (who knew that was a thing?) called the Annals of Improbable Research (AIR), so they can, in a sense, be seen as specialists in promoting public engagement with scientific research through fun. The Ig Nobel Prizes are often presented by Nobel laureates in a ceremony held at MIT or other universities in the Boston area. Much like the “real” Nobel prizes, there are different award disciplines: physics, chemistry, physiology/medicine, literature, economics, and peace, plus a few extra categories such as public health, engineering, biology, and interdisciplinary research. (The award categories do vary from year to year, though.) The winners receive a banknote worth 10 trillion Zimbabwean dollars (a currency no longer in use; roughly worth US$0.40), so it’s not really about the monetary value. They also get the opportunity to give a public lecture, but researchers do face the risk of being interrupted by an 8-year-old girl (or, in the case of 2025, a researcher dressed up as one) crying “Please stop: I’m bored”, if the talk dares go on for too long. The ceremony, as you can imagine, has a number of running jokes, and if you are interested, you can watch the whole 2025 ceremony on YouTube.

Bringing “in” science to the everyday curiosity:

So it’s a parody, yes, but the award does exist and is given to actual researchers. The quickest way to get a sense of the Ig Nobel might be to simply browse the list of research that was awarded prizes. This year, we’ve got:

  • Aviation: studying whether ingesting alcohol can impair bats’ ability to fly and also their ability to echolocate (doi.org/10.1016/j.beproc.2010.02.006)
  • Biology: experiments to learn whether cows painted with zebra-like striping can avoid being bitten by flies (doi.org/10.1371/journal.pone.0223447)
  • Chemistry: experiments to test whether eating Teflon is a good way to increase food volume and hence satiety without increasing calorie content (doi.org/10.1177%2F1932296815626726; patents.google.com/patent/US9924736B2/en)
  • Engineering design: analyzing, from an engineering design perspective, how foul-smelling shoes affect the good experience of using a shoe-rack (doi.org/10.1007/978-981-16-2229-8_33)
  • Literature: persistently recording and analyzing the rate of growth of one of his fingernails over a period of 35 years (doi.org/10.1038/jid.1953.5; pmc.ncbi.nlm.nih.gov/articles/PMC2249062; doi.org/10.1001/archinte.1968.00300090069016; doi.org/10.1001/archinte.1974.00320210107015; doi.org/10.1111/j.1365-4362.1976.tb00696.x; doi.org/10.1001/archinte.1980.00330130075019)
  • Nutrition: studying the extent to which a certain kind of lizard chooses to eat certain kinds of pizza (doi.org/10.1111/aje.13100)
  • Peace: showing that drinking alcohol sometimes improves a person’s ability to speak in a foreign language (doi.org/10.1177/0269881117735687)
  • Pediatrics: studying what a nursing baby experiences when the baby’s mother eats garlic (pubmed.ncbi.nlm.nih.gov/1896276)
  • Physics: discoveries about the physics of pasta sauce, especially the phase transition that can lead to clumping, which can be a cause of unpleasantness (doi.org/10.1063/5.0255841)
  • Psychology: investigating what happens when you tell narcissists (or anyone else) that they are intelligent (doi.org/10.1016/j.intell.2021.101595)

I think the goal of “laugh and think” is clearly achieved in all of this research. But speaking of thinking, some of these research topics made me wonder (and maybe you are wondering too): “Why would you investigate that?” (What adult would?) or “Is this real, funded/published research?” What I want to highlight (and what may not be clear from the brief list on the Wikipedia page) is that they all have proper references attached to them. So yes, though their published titles might sound a bit more academic or “stuffy” (often by not much), they are actual peer-reviewed papers!

Are you ridiculing science?

This question on the official FAQ page caught my attention, because I, as an Ig Nobel enthusiast, hadn’t imagined any serious criticism of these awards. Digging a bit deeper, I found that decades ago, the UK’s then-chief scientific adviser – Sir Robert May – made a formal request that “no British scientists (should) be considered for an IgNobel, for fear of harming their career prospects”. (Note that the UK, alongside Japan and the USA (no wonder I’m acquainted with this prize), is a regular at this prize, winning awards nearly every year.) Furthermore, the article reads, “He was particularly concerned when ground-breaking research into the reasons why breakfast cereal becomes soggy (by the University of East Anglia) won a prize,” essentially hinting at a concern that the public would ridicule science (as a whole?). If you think about it, such a general attitude of “it’s not worth scientific investigation unless it’s clearly applicable/translatable/important” is perhaps far too typical, especially in basic sciences.

However, I think the founder of the prize, Marc Abrahams, had the best defence against the charge of “rewarding silly science”.

“Most of the great technological and scientific breakthroughs were laughed at when they first appeared. People laughed at someone staring at the mould on a piece of bread, but without that there would be no antibiotics… A lot of people are frightened of science or think it is evil, because they had a teacher when they were 12 years old who put them off. If we can get people curious and make them laugh, maybe they will pick up a book one day. We really want more people involved in science and I think the webcast will help do that.”

Slightly on a tangent, but “Maths Anxiety” is a recognized experience that many develop during childhood. While no research might exist on this (yet), I suspect a similar phenomenon exists with STEM at large. Sometimes I get comments from students taking humanities subjects (even in Cambridge!) like “wow, you’re doing a real/serious degree”, or “science sounds so difficult”. For some people, “being put off” by science might trace back to a negative experience during their first formal introduction to science as a subject in school. In that case, bringing their interest back to science with an all-serious demeanor and stuffy topics might be quite a high barrier to cross. However, looking at the Ig award list from earlier, and how quickly the entries make you go “huh” after the laugh, I can’t help but think that these funny, curious studies might be the push some people need to ignite their curiosity and welcome them back to scientific inquiry without any pressure.

The satire (and controversy?) of the Ig Nobel

That being said, not all Ig Nobel Prizes have been awarded to quirky “research that cannot (or should not) be reproduced”. The prize has also sometimes been awarded as satire. In the recent case of 2020, the Ig Nobel Prize for Medical Education was awarded to Jair Bolsonaro of Brazil, Boris Johnson of the United Kingdom, Narendra Modi of India, Andrés Manuel López Obrador of Mexico, Alexander Lukashenko of Belarus, Donald Trump of the USA, Recep Tayyip Erdogan of Turkey, Vladimir Putin of Russia, and Gurbanguly Berdimuhamedow of Turkmenistan. Now, before you start typing away your complaints and protests (or throwing paper airplanes), hear the reason why: they were awarded the prize for “using the Covid-19 viral pandemic to teach the world that politicians can have a more immediate effect on life and death than scientists and doctors can”. I’d say that makes you think quite a bit, especially as a person in the scientific community.

If you consider these instances in isolation, perhaps there is some point to what the former chief scientific adviser was saying, and a serious researcher might not want to be associated with this prize (kind of like the Golden Raspberry Awards, I guess?). However, this was apparently not a popular opinion, at least in the UK scientific community, which pushed back against the comment at the time. To this day, UK researchers continue to win Ig Nobel Prizes.

Legacy beyond the funny and curious:

Parody and satire, yes, but in case you think this is still a long post of much ado about nothing, all still in the realm of a joke, I want to present one final case in which these jokes led to “actual” science (not that they weren’t real science to begin with, but…). Take Andre Geim, for instance, who shared the 2000 Ig Nobel in Physics with Michael Berry for levitating a frog – yes, a real frog – using magnets. Ten years later, he went on to win the actual Nobel Prize in Physics for his groundbreaking research on graphene. This itself may sound like a lucky coincidence, but it is also worth mentioning that the frog experiment was reported in 2022 to be the inspiration (at least partially) behind China’s lunar gravity research facility.

These are not the only examples where such “silly research” actually ended up having real-world impact and use. In 2006, the Ig Nobel Prize in Biology was awarded to a study showing that a species of malaria-carrying mosquito (Anopheles gambiae) is attracted equally to the smell of Limburger cheese and to human foot odor. The initial study was published in 1996, and the results suggested the strategic placement of traps baited with Limburger cheese to combat malaria in Africa. While such applications might not be immediate, I think what allows for this translation (aside from the study being oddly specific) is partly its cost-effectiveness. The more typical “scientific” solution one might envision for disease control might involve genomics, vaccines, or pharmaceuticals. While these are all state-of-the-art and highly effective (and certainly have the sci-fi appeal), the cost, in both money and time, can be high. Compared to that… cheese? I’m guessing it’s more budget-friendly and easier to implement. This research, as well as this year’s biology award for painting zebra-like stripes on cows to deter biting flies, makes me re-appreciate that sometimes the viable solution might be something unexpectedly simple and close at hand. These studies show how science, even in its quirkiest forms, can point to practical and effective solutions that improve everyday lives.

Diversification of sci-comm tactics

Whether you admire the nobleness of the Ig Nobel, think it’s all fun and whimsy sci-comm, or avoid it altogether as an aspiring “serious” researcher, I think it still stands as a rare gem in the diversity of what science communication can look like. In recent years, “debunking-style” science communication seems to be (back) on the rise, as are various independent video-based science communicators (such as the guest speaker we had last week). In an age where science itself and its institutions are increasingly seen through a critical eye or outright contested, I do understand the urge to fact-bomb or even isolate myself in all the “seriousness”. This is especially tempting when we know that some of the fruits of scientific research, like vaccines, can save lives, and we desperately want people to protect themselves. I personally don’t consider myself especially witty, but I celebrate those who can masterfully blend research and humor to entice audiences and reignite their interest in science. Of course, no single sci-comm tactic is bulletproof – some, like Sir Robert, may find these things distasteful, while others simply prefer something “serious,” and that’s ok. But science as a community might just benefit from having such a quirky tactic up its sleeve, and diversity in science communication approaches might very well be the best shot we’ve got in this day and age of increasing division. Who knows, maybe some researchers will look into the efficacy of Ig Nobel Prize headlines against science anxiety.

Legality of curses and the “impossible crime”

Author: Maya Lopez (Co-President)

Now that we’re back for another academic year in full swing, summer feels distant. I was back in my home country (Japan) during the break, which was a time for me to indulge in countless horror features and reruns, even before the spooky month of October. This might seem strange, but traditionally in Japan, the season most associated with “horror” as a genre is arguably the summer, probably due to seasonal events like Obon – a custom of visiting familial graves to honor ancestors and the recently deceased, rooted in the belief that ancestral spirits return to the world of the living. And of course, the traditional performing arts that cover the topic of ghosts and horror cement this notion. However, not all horror stories invoke the spook via ghosts; some rely instead on curses and grudges held by the living. This brought me straight down the rabbit hole of curses – what many (of us, probably) deem superstition, their unexpected verdict in the eyes of the law, and a further, surprisingly complicated view into what the law perceives as possible and impossible. Spoiler alert: the world of law seems to have already given an answer to whether (at least some of) the occult practices are considered real or unreal (at least in some countries). Thus, here’s the “horror edition (?)” from CUSAP – join me for the wild ride of curses and the impossible crime.

Can there be a murder via a curse?

There’s a story I saw as an anecdotal opening in one of my favorite J-drama reruns, and it goes something like this (apologies for the self-translation):

Soon after WWII, a farmer’s wife in Akita prefecture was caught by the police after she hammered a gosun nail (about 6 inches long) through a straw figurine bearing the name of the “other woman” her husband was cheating with, onto a sacred tree at a shrine. The prosecution argued that she should be found guilty of attempted murder for conducting the Ushi-no-koku-mairi ritual with intent to kill, but the court could not prove causality between the curse and any death. Thus she was found guilty only of intimidation. This was, inevitably, a moment in which the law admitted that a person cannot be found guilty of murder via curse.

Ushi no koku mairi (丑の刻参り) is a prescribed method of cursing, traditional to Japan. The name comes from the fact that the ritual is to be practiced during the hours of the Ox (1–3 AM). During the ritual, a straw figurine is pierced or hammered with a nail (often onto a sacred tree at a shrine), with specified equipment and outfit. Notably, the ritual is supposed to be conducted with the intent to harm or kill the individual represented by the figurine. However, it is often said that if the ritual is witnessed, the curse is inflicted on the person doing the cursing instead, often resulting in death. While there are probably countless cursing rituals across different corners of our planet, I would say this is one of the most iconic “styles” in Japan, and it is heavily referenced and used as a motif in pop culture.

Anyway, back to the anecdote from the drama: watching it this time round, I couldn’t help but wonder… is such a court case real? I searched across the internet to see if I could find a credible source on this incident, but there was no hit in the prefecture’s case-law database or newspaper archive. (But strangely enough, a query with a very similar story and question to mine was made to the prefecture’s library through a collaborative reference database in 2022… more on this later.) It is therefore probable that this anecdote is more of a fictional case written to illustrate the central themes explored in that episode of the drama. However, I also noticed something interesting: there were a number of “similar” stories referenced in various TV shows, blogs, and books, with ever so slight variations, but none could apparently be traced to an existing record of precedent available online. This led me to believe that there must be something at the root of all these stories that caused such a scenario to be continuously referenced, so I kept digging.

Personally, I came to believe that these stories most likely did not stem from an actual court case, but rather from a “thought experiment” outlined in an old piece of legal writing. A Japanese law study published back in 1934 (I’m no expert in law or old texts, and it wasn’t an easy read) outlines how, in criminal law, cause and effect are seen as key to establishing whether a given act is… criminal. It runs through several iterations of scenarios, but specifically uses Ushi no koku mairi as an example of a ritual that usually cannot be seen as a direct cause of the cursed victim’s death (bingo!). It further outlines, however, that the ritual WOULD be considered a cause of death if, for example, the following conditions are met: 1) the person being cursed also believes in the superstition, 2) they find out that they are being cursed, and 3) they become so paranoid with fear of the curse that it drives them to suicide. It also states, however, that without these conditions being met, one is essentially considered to “perform” such rituals at one’s own discretion, as an individual freedom. The author argues, with particular reference to the Declaration of the Rights of Man and of the Citizen (note: this was a period in which Japanese law and society at large were undergoing “modernization”, hence this French ideology serving as something of a case study), that in modern law there is a case to be made for protecting the right of such individuals to “perform” superstitious acts.

… So I guess in that sense, trashing your pillows in your own room in full rage post break-up and conducting this curse in complete privacy (with no blackmailing and vandalism… more on this later) is criminally not too different?

Impossibility defense and “superstitious crime”

Interestingly, it turned out that Ushi no koku mairi is, in fact, seen as a textbook example of the “superstitious crime” variation of the impossibility defense (yes, this is a WHOLE GENRE). What is considered superstition (needless to say), and law itself, change over time and place. Unfortunately, I’m not an expert in either sociology or law, but I understood that in Japanese law, superstitious crime is a subcategory of the “impossibility defense”, in which the crime itself is seen as “impossible” because, well, a superstition causing the desired outcome is seen as impossible. There are, as outlined above, a few premises that could be met for a superstitious ritual to be deemed a murderous crime, but on top of that, the causality needs to be fundamentally possible, and this is a surprisingly complicated matter. A classic non-superstitious example (as described in this short lecture) is an assassin who shoots what they think is their target lying in bed, when in actuality the target is not in the bed (maybe it’s a pile of pillows). Then there is no way that this shooting killed the target, even if the target is later found dead (i.e., impossible). I will not go into the details of distinguishing legal from factual impossibility (because it’s actually hard to draw the line, apparently), but let’s just explore the idea in itself that superstition is a subcategory of actions that operate on the premise of impossibility. There are apparently a few “theories” on why this is justified (again, apologies for the self-translation).

  1. Conceptual (/subjective)-danger theory: Superstitious criminals pose no risk objectively speaking.
  2. Causality negation (/objectivity) theory: A crime cannot be committed because there is no causal relationship (ie, in line with the general notion of impossibility defense)
  3. Theory of Intent: This theory is used to point out that the intent behind a criminal action matters, thereby acknowledging murderous intent, for example, in a failed attempt at murder. However, for “superstitious crime”, this theory is believed not to be directly applicable, because one is relying on the power of the supernatural to actually realize the crime, rather than acting on the intent oneself.

Regardless of the debates in law about the nuances and interpretation, I find the focus on causality to be almost philosophically scientific, or extremely logical at the very least. Such causal links are exactly what court cases attempt to establish, mounting evidence to approach a consensus truth. If rituals and acts cannot function as a direct cause of, say, death (in a consistent, replicable manner), the court will proceed on the premise that such causation is not possible and seek a more plausible explanation.

But are curses illegal (at all) if they’re ineffective?

I was quite impressed to see that by 1934, the study implied (and in some parts, clearly states) that superstition-based rituals do not directly cause outcomes like death – i.e., NOT REAL in the eyes of the law. So there’s the answer for you: curses are seen as ineffective. But then, are they legal? There are occasionally cases of Ushi no koku mairi that still make news headlines. This is because, even if you’ve not successfully cursed someone to death, you entered a shrine (i.e., not your property): that could be criminal trespass. And you hammered a straw figure, damaging a tree that doesn’t belong to you: that could be vandalism or property damage. In fact, the latter was exactly what a 72-year-old man was charged with in 2022, after he hammered a straw figure with a printout of Putin’s face stuck on it, in protest at the Russian invasion of Ukraine. So while you can’t (be charged with) murder via this curse, there are elements of illegal acts. Just to be clear, I therefore do not recommend going out and hammering straw figures, and I’d suggest sticking to the solo pillow fight within the confines of your own room. Nevertheless, “murder via curse” is not a charge that can be brought under current Japanese law.

However, as I have hinted before, what is and isn’t considered superstition changes over time and place. In fact, even in Japan, an earlier rendition of “modern law” in 1870 seems to have stated that cursing rituals could be prosecuted. This suggests to me that, at least at that time, it was recognized as more of a… “canon” threat? If the whole argument in the 1934 study is that the curse ritual itself is not illegal because it is fake, then its being illegal suggests it’s… not fake? As I explored the place of superstition in law more widely, I found a handy Wikipedia page that outlines the state of laws against “witchcraft” (which I’m sure curses can be at least part of… right?) across history and countries. Interestingly, this trend of repealing the criminal prosecution of witchcraft (i.e., once illegal but no longer) also exists in other countries, including Canada (in 2018) and the UK (1951)! At the other extreme, several countries currently prohibit witchcraft and magic altogether, or make conducting witchcraft against a person illegal. There are also places where a certain subset of magic or practice (e.g., black magic or fortune-telling) is specifically illegal. Then, there are places where pretending to be a witch, or accusing someone of being a witch, is illegal. What I find fascinating is that a lot of these laws against witchcraft, magic, and fortune-telling have followed a surprisingly (at least to me) non-linear and diverse history within each region. In some instances, it appears (anyway) that prohibitory law was put in place in an attempt to extinguish a malpractice that can be exploitative (in the UK, for example, it was initially covered by the Fraudulent Mediums Act 1951 but eventually folded into the Consumer Protection Regulations). In other instances, it may derive from the authorities’ fear or desire for control (case in point: the Nazis apparently outlawed fortune-telling in 1934). So, I don’t have a neat one-size-fits-all conclusion here, but it was interesting to realize how the state of laws against witchcraft can deeply reflect the history and current relationship between superstition and the local society.

Why do curses still persist?

Naturally, being a CUSAP member (and just to remind you, pseudoscience is arguably just sciencey superstition), I don’t think you can curse people to death through fairy power or divination. But I can appreciate that curses did (and in some cases still do) hold a strong presence in society, and in that sense they are very real, to the point of being discussed as an example scenario in law. And personally, I don’t think this is just an artifact of historical beliefs pre-dating scientific methods or modern legal procedure. I (unfortunately?) think curses, magic, and witchcraft persist because there is (and possibly always will be) a demand.

Of course, some people might curse because they are… well, mean. The classical evil witch. The Grinch type who wishes the worst for everyone. But let’s go back to the initial anecdotal case of the farmer’s wife. Something I didn’t mention is that Ushi no koku mairi is somewhat associated with women – not so much because there’s anything feminine about hammering a nail, but because the typical reasons behind it, like a lover’s betrayal and jealousy, were often not met with justice. Particularly in recent Japanese history, a wife committing adultery was in itself punishable as a crime (until 1947, when the provision was repealed because of its gender inequality), while the same often did not apply to men. When you face betrayal but there is no place in law where you will be compensated or meet justice, doesn’t justice by supernatural means suddenly seem alluring? Somewhat unintuitively, with access to the internet, some say the market for witchcraft services is bigger than ever (I’ll go down this rabbit hole some other day), and unfortunately, it seems to be filled with “wishes” from people who had their hearts broken or were, one way or another, hurt. I’ve also seen online threads recommending different curse methods, and they’re often filled with comments of pain and grudges arising from things like bullying.

Again, I’m not saying that grabbing a hammer or drawing summoning circles the next time someone cheats on you is a good idea, but I can also see how this can perhaps help people let off steam in a relatively safe way, perhaps in part because it’s most likely not going to do anything. To me, the bigger concern is how this emotion of vengeance might consume someone (mental-well-being-wise), and how it can make someone ripe for exploitation (i.e., how do you know that a one-time purchase of a curse service or a fortune-telling session will not snowball into spending a fortune on magical devices and expensive charms?). Ultimately, the world that CUSAP envisions and a world without curses may have quite a bit in common: a world of compassion and open dialogue, which is fundamental to justice based on truth. Realistically, this is not an easy feat, so magic and curses might persist or even prevail, especially in this ever-online world where injustice and disparity seem to be increasing. So next time you see a broken heart, or someone being a bit too serious about their fortune-telling reads, remember the position of curses in the eyes of the law, and take them out for a coffee. Who knows, maybe that will be the perfect (legal) white magic to make their day better 🙂

Science and extreme agendas

Author: Raf Kliber (Social Media Officer)

Original feature image art specially drawn by: TallCreepyGuy

While I work myself to boredom at a local retail store, I listen to some podcasts in the background. Something to cheer me up. Among my favourites are the Nature Podcast and the Climate Denier’s Playbook. But on that specific Wednesday, the episode was anything but cheering. I landed on the Nature Podcast’s “Trump team removes senior NIH chiefs in shock move” episode, which provided me with a bleak look into the current US administration’s proceedings. The bit that shocked me the most was how closely the move clung to Project 2025’s agenda. One of the moves discussed was the defunding of ‘gender ideology’-driven research (read: anything that includes the word trans, even though such research is useful for everyone). Furthermore, instead of such ‘unimportant’ research, the administration wanted to conduct studies into ‘child mutilation’ (read: trans conversion therapy) at hospitals. Eight hours later, while soaking in a mandatory after-work bath, I began pondering “what is the interplay between extreme agendas and the ‘fall’ of science?” and “what could I, a STEM person, do about it?”. As a Polish person, my first bubbles of ideas started with fascism and the Third Reich.

Jews, fascism, and ‘directed’ science

I moved to the UK when I was twelve years old. This spared me the traditional trip to Auschwitz one takes when in high school. It spared me from the walls scratched by the nails of the people trapped in gas chambers. It spared me from a place so horrible yet so pristinely preserved that visiting it is as close to time travel as one can get. About a fifth of the population of Poland was wiped out in World War II. On average, every family lost someone. Not on average, many families were gone entirely. Due to the gravity of the topic at hand, I reached out to Dr. Martin A. Ruehl, lecturer in German Intellectual History at the Faculty of Modern and Medieval Languages at the University of Cambridge, for some guidance. He also gave a talk on “What is fascism?” during the Cambridge Festival, which I recommend. Another reason is that I am, by education, a physicist, and just as physicists have their own set of rigorous habits that make their field solid, historians and philosophers have theirs.

Fascism as an idea is fuzzy, or at least has fuzzy borders. One knows definitively that after Hitler took power in Germany, the country took on fascist ideology. It is also abundantly clear that the current UK is not a fascist regime. Trying to nail down the border between the most nearly fascistic regime and the one that is just about not fascistic is futile, complicated further by each regime having its own unique elements. The process of how fascism festers and develops in a country is left for others to explain, and I encourage the reader to watch this video essay by Tom Nicholas on how to spot a (potential) fascist. I will go with the conclusion of Dr Ruehl’s talk: fascism is a racist, nationalistic, extreme and violent idea that often casts the core group as being under an imagined attack from an out-group (e.g. Jews were portrayed as a threat to the German state, even though they were not). I have put off the subject matter this long to highlight two important points: fascism is a complex topic that could be studied for lifetimes, and consequently, I am not an expert. I have made my best attempt at giving it the due diligence it deserves.

Disclaimers aside, what was the state of science during Hitler’s reign? Let us set the scene. The role I’d like us to play is that of a scientist at the time. Let us imagine ourselves in 1933 Germany, right at the beginning of Nazi rule. The Nazi party made it rather clear: either you, as the scientist, are ready to conduct research that aligns with the party’s agenda, or you’re out of academia. And if you are Jewish, or known to be on the left of the political spectrum (the historical pre-Nazi left, though it would still include things like early transgender care, as advocated by Magnus Hirschfeld), you don’t get a choice at all. Physics Today has a nice article that traces the migration of selected physicists out of Nazi Germany, which I recommend having a look at. The same goes for other branches of science. The crux of the situation is that if you are studying race or ballistics, you are more than welcome to stay. Hitler did recognise that only the most modern military equipment would allow the Third Reich to wage war on everyone. Similarly, he wanted to put his ideals onto the firm foundation of “cold and logical” science, even when that compromised the scientific process – for example, the creation of Deutsche Physik (which denied relativity) and the burning of the books of the above-mentioned Magnus Hirschfeld. (As much as my past self would thoroughly disagree, trans people are a cold and logical conclusion of how messy biology can be. More so than arbitrarily dividing the entire population into two buckets.)

The adoption of the idea of Social Darwinism (that the fittest social groups survive) and the knowledge of what genes do (albeit well before the discovery of the structure of DNA and the ability to compare genomes) created the foundation of ignorance for ‘scientific racism’ and eugenics. That being said, there was more to it than the state of not-knowing at the time. According to the introduction of “Nazi Germany and the Humanities”, edited by Wolfgang Bialas and Anson Rabinbach, “Creation of the hated Weimar Republic created a deep sense of malaise and resentment among the mandarins, who, for all their differences, had in common the belief that a “profound ‘crisis of culture’ was at hand””. To draw a conclusion, the loss of the war and a tense national atmosphere led to the development of such völkisch ideals well before Hitler’s regime took hold. To quote further, “many retained the illusion of intellectual independence”. The general sense of superiority also gave rise to works like Deutsche Physik, which directly opposed Albert Einstein’s work.

(Note from the author: Googling “Social Darwinism” will lead you to creationist videos by Discovery Science (a YouTube channel run by the Discovery Institute, a fundamentalist creationist think tank). They seem hooked on using the aforementioned atrocities to try to link Darwin, and his early understanding of evolution, to Satan, and hence to the idea that his theory leads us away from God. It is worth mentioning that although it bears his name, Darwin played no role in coining or using the term.)

To summarise this section: the corrupt ideals that spread into science and politics in Nazi Germany arose from discontent and false hope. It was more of a fork situation: both academia and politics took up the story of national threat and superiority out of the high levels of discontent originating in the Weimar era, largely independently of each other, and while the two were intertwined, I think their cross-influence only amplified the process, with each supporting the other in downward spirals such as antisemitism.

USSR, Russia, and limiting scientific cooperation

A nice cup of tea on the following day led to some more thinking about other regimes. Like a true ‘Brit’, I took out my teapot and, with a cup of Earl Grey in fancy Whittard porcelain in my hand, drifted off into another rabbit hole. This time, instead of west, I dug the tunnel east.

An interesting tidbit from my past concerns my primary school. The changing rooms there had an interesting design: if one paid enough attention, one would see a system of grooves in the floors meant to act as drainage. Why drain something from an indoor location? The changing room was meant to serve as an emergency field hospital in case of another war. The school turns out to be old enough to carry some of the old Soviet practices in its design. For those unaware, Poland was part of the Soviet bloc up until 1991 – just 12 years before my birth, and 13 before Poland joined the EU. So let us journey east and see what history has to teach us.

Stalin was a dictator, just like his Austrian-German counterpart. What is slightly different is the ideology that shaped the persecution of scientists at the time – a different flavour of extremism. I could go on a rant about what the Stalinist flavour of Marxism is, but just as with fascism, there are scholars who spend their lives studying it. I am not one of them.

Nevertheless, the parallels between the corruption of science in fascist Germany and the Stalinist USSR are rather staggering for such different ideologies. In Germany, anything considered Jewish or running against the greatness of the Aryan race was immediately cut out, while the rest was bent towards the leading party’s view. Here it was much the same. The humanities took the largest hit to their independence, as they did in Germany. Lysenkoism played a role in slowing down genetics research in the USSR; what followed instead was a rise in Lamarckism (the idea that acquired characteristics are passed on, rather than typical natural selection). This then possibly contributed to agricultural decline, creating another subject of memes for edgy Gen Z.

This also led to further isolation of Soviet scientists. While every now and then they would invite foreign scientists (as Feynman wrote in his letters – and, let us be honest, this might have been because of his involvement in Los Alamos), the mingling of Russian scientists with the rest of the world was minimal. Did I forget to mention that geneticists were often executed for not agreeing with Lysenkoism? Science is a global endeavour for a reason: it needs far more manpower than any country alone has. A country can never be a fully independent branch; isolation simply leads to a slow withering of progress.

To give this section a nice circular structure and bring it back to my home: attitudes can also persist after occupation. The Polish government made some unpopular moves in academia during the time of the PiS party. Polish academia uses a scoring system in which each publication in a journal grants you points, with each point trying to quantify your contribution to a field – so, technically, a biochemistry paper would give you points in both biology and chemistry. The government started awarding more points for papers in Polish journals rather than international ones, alongside some odd mixing, such as awarding political science points for publishing theology papers. This can be seen as a slight resurrection of the national pride in science that I so despise (Springer Nature’s journals are always going to be my favourite to skim through).

So what?

My Eurocentric summary of history is probably boring you to death. Let us talk about the US. Trump! The name that makes the hair on the back of my neck stand up. The similarity to what is currently happening in the USA really makes me think that history does indeed repeat itself.

Firstly, just like Lysenko and his anti-genetics, Trump decided to appoint RFK Jr as Secretary of HHS. A well-known opponent of vaccines is now in a position to hire and fire researchers. The MAHA (Make America Healthy Again) report included a lot of less-than-optimal healthcare research directions. RFK really believes in a mix of terrain theory (that the terrain of your body, i.e. fitness and nutrition, plays THE most important role in your immune system) and miasma theory (covered in a previous article here, but basically a medieval theory that bad air makes you sick). There is also a whole host of reasons to point out that a recovering drug addict and brain-tapeworm survivor does not make for a great leader of a health agency. To play devil’s advocate, though, he did make his name as an environmental lawyer. Additionally, RFK supports the removal of fluoride from water and has helped spread misinformation about vaccines in Africa. He has a very tangible body count and actively harms populations.

Secondly, there are the topics from the headlines in the first section. It is clear that the current administration’s aim is not simply doing science to explore x, but rather confirming x under the guise of science. This is why 75% of scientists who answered Nature’s poll said that they are looking to move out of the USA. Additionally, in a piece by the New York Times, experts in fascism are also reported to be moving away from the USA. This is now causing a ‘brain drain’ in the USA and, ironically for an administration that is anti-China, handing the scientific majority over to China. (Whether you think that is good or bad is up to you. I personally am neutral.) Additionally, the administration has already tried to block Harvard’s ability to admit international students, who contribute heavily to its income stream – all in retaliation for Harvard allowing students to exercise their right to free speech and protest in favour of Palestine. This is slightly sneakier than executions and imprisonments. Nevertheless, in a capitalist society, it might be somewhat equivalent when the funding we all depend on dries up.

Lastly, there is a difference I would like to point out. Regimes like the ones above often arose from a dire need for a radical leader and major changes. The current administration is exercising what I would like to call stealth authoritarianism (as coined by Spectacles here). Gone are the days of posters on every street with long-nosed depictions of minorities that eat children (although the ‘they eat the dogs’ moment was close enough for many). The current US president uses rather specialised and closed-off social media to reserve his opinions for his most dedicated followers rather than the general public. We live in an age where the algorithm separates us. It is becoming ever less likely that we encounter an opinion we disagree with out in the wild without searching for it. Executions are no longer needed to silence critics: as long as you have a devoted fanbase, the infectiousness of the internet can create a group potent and numerous enough to win an election.

The fact that someone can be so overtly against reality, so blatantly corrupt, yet at the same time can feed a mirage to the right people to get elected is the true curse of the modern information landscape. For me personally, it is the main reason why CUSAP and similar societies are more important now than ever before.

What can you do?

Every good opinion piece should end with a call to action. I also don’t want this entire blog post to be a long way of saying “AAAA WE ARE ALL GOING TO DIE!!!”, because we most likely won’t.

  • If you are in the USA and courageous enough, protest. It should be easy enough to find one nearby. This is not the main recommendation. Police brutality has already made itself visible in the past month.
  • What you can do more safely is support local lobbying. Be prepared that democracy is not as accessible as it seems. Genetically Modified Skeptic has posted about his experiences trying to vote down the requirement for schools in Texas to display the Ten Commandments in classrooms. It was not a pleasant experience, but organisation and support for lobbying individuals can go a long way – even if it means bringing them food and supplies, or sitting in to notify them when it is their turn to speak at meetings.
  • Vaccinate your family against misinformation. Emotions can run high when politics are involved, but perhaps you can connect one bit of their viewpoint to a kernel of truth that may help. My personal jab with right-wing oil enthusiasts is to connect climate change with their dislike of migration, as increased migration is a likely result of climate change. (To be clear, I don’t believe migration is bad, but they do. Sometimes you have to engage one topic at a time.)
  • Join a group to lobby and promote critical thinking. Here at CUSAP we try to go beyond Cambridge; thus we welcome articles written by non-members. You can get in touch with us at https://cusap.org/action/. Youth Against Misinformation is another such group, and plenty more can be found online.
  • Most importantly, do not shut up. Speak up when you see fake news. Don’t get distracted by trivial problems. Call your local political representatives, meet with them, email them. This goes regardless of which party they are associated with. Make sure they know that the truth is what you support. (It goes without saying: only as long as you feel safe to do so.)
  • Lastly, for my own sanity: do not be nihilistic about how little significance one action or vote has. One vote can make a lot of difference when it is surrounded by a couple thousand more singular votes.

Alpha (??) Male (???)

Author: Maya Lopez (Blog Chief Editor)

When I was video-calling my parents recently, I noticed that a wildlife documentary was playing in the background. The documentary followed a pack of wolves and told the tale of a dominant leader who got injured and left the pack, after which the next-oldest sister stepped up and led the pack. “Leader”. I was actually impressed by the up-to-date wording; it reminded me of a story I saw a few years back on the term “alpha wolves” – and how such outdated terminology remains ingrained in our society. But also, did the documentary just say a sister stepped up as leader? This sent me straight back down the memory hole, and into some reading between my deadlines, where I rediscovered the tale of a scientific finding that was embraced by culture, while culture and society then refused to evolve with the scientific updates. Given that the modern (and possibly unsustainable) rise of the “manosphere” and the loneliness epidemic, especially amongst young men (though of course not exclusive to men), are believed to be linked to the current political climate and radicalization, we’ll explore where we got this “alpha male” myth, often claimed to be backed by “evolutionary science”. It turned out to be an emblematic case in which culture arguably sought the label of “scientific” to affirm and add prestige to a social construct that some people desperately wanted to believe – and a demonstration of how much harder such a construct is to falsify and update than actual scientific facts.

The “alpha wolves” finding and its correction

So, where did this all begin? It is no coincidence that the alpha male is, to this day, often represented by a wolf emoji, as seen on Wikipedia. In 1971, L. David Mech, a zoologist specializing in wolves, observed that a strong, dominant wolf seemed to lead a pack, and he published his findings in a book called “The Wolf: The Ecology and Behavior of an Endangered Species”. I couldn’t find an exact record of how many copies were sold, but it went through numerous reprints and digital releases until it was taken out of print in 2022. This is perhaps a testament to its influence over all these years in the competitive world of publishing; the book essentially popularized the term “alpha wolf”, so I think it is fair to say it was extremely well received by the public. Personally, I think this level of success in science communication is in itself remarkable. Perhaps, this being the 1970s, when eco-consciousness was on the rise and even “Earth Day” was born from a public movement, the conversation about ecology and endangered species came at just the right time. However, the cultural impact (unfortunately?) extended well beyond the realm of ecology, and over the following years the term came to characterise a specific image of wild, dominant, aggressive (?) masculinity.

The book outlines various facets of wolf research across different chapters – from their distribution in the wild to pack structure. The term “alpha” was introduced to describe an apparent leader of a pack, one that seemed to have achieved its status by dominating the others. Interestingly, the term had been used in a similar sense in a report published in Germany in 1947, so Mech’s book arguably stemmed from a long lineage of academic writing that held this prevailing theory of wolf pack hierarchy. Also fascinatingly, the “beta wolf” in this context is the de facto #2 in the group (quite a different nuance from the modern internet slang, but I’ll get back to this in a sec). But there was a big caveat to all of these studies: they were based on wolves in captivity – an artificial setting, often with unrelated individuals boxed into the same environment. So while “alpha wolves” (and the corresponding female pair) emerged in captivity, when researchers expanded their observations to see whether this also applied in the wild, things went awry.

Like many findings in the natural sciences, the alpha wolf idea was corrected and updated in a later decade. In fact, the interesting thing is that this falsification came from Mech himself (in what I call true scientist fashion)! Upon further investigation of wolves, he discovered in the ’90s that the natural wolf hierarchy is, in fact, just a family. In this context of kinship, bloodshed and battles for the dominant position were rare. In an interview piece in The New Yorker, Kira Cassidy, an associate research scientist with a National Park Service research program in Yellowstone, sums up the current notion of the “wolf hierarchy”:

“It’s not some battle to get to the top position. They’re just the oldest, or the parents. Or, in the case of same-sex siblings, it’s a matter of personality.” 

It’s easy to imagine how parents, naturally older and more experienced, lead the pack, and their offspring follow their lead. Mech himself became one of the most vocal proponents of retiring the term “alpha wolves”, because “it implies that they fought to get to the top of a group, when in most natural packs they just reproduced and automatically became dominant.” In ’99, he tried describing these parents as having “alpha status”, and eventually the field stopped using the term altogether. If you check the International Wolf Center webpage today, you’ll see it described as “outdated terminology”. Modern research also finds that in natural reserves where packs occasionally fight over territory, researchers can observe rather extensive packs, including aunts and uncles and multiple “breeding pairs”, making the structure more flexible and less hierarchical. Furthermore, even these leadership positions are essentially not about aggression but about responsibility, and submission is more of a chain-reaction mannerism than an all-hail-and-serve-the-dictator attitude. To quote the Scientific American article: “The youngest pups also submit to their older siblings, though when food is scarce, parents feed the young first, much as human parents might tend to a fragile infant.”

When culture decides not to update based on new findings

So okay, alpha wolves weren’t really a thing unless you split up families and smush them into the same room, and natural leaders aren’t really about aggression and bloodshed. If this tale were as famous as the concept of the alpha male, it would’ve been a great example of scientific falsification updating a societal norm – but that was not the case. What started the application of the concept/term “alpha” to humans is arguably NOT the wolf book I mentioned earlier, but a book published in 1982 called Chimpanzee Politics: Power and Sex Among Apes, in which the author implies that his observations of a chimpanzee colony could possibly be applied to human interactions. But the thing is, the term still lived mostly in ecological contexts (and was not applied to human interaction) until around the late ’90s, when, on top of the wolf example described above (which gives the pack-leadership imagery), it was also being applied to other, non-social animals – particularly to refer to a male’s mating privileges, gained through holding territory, winning access to food, and so on.

Then, who popularized this chimp/wolf term to describe a human male? I couldn’t access the actual source article, but Wikipedia mentions that it was around the early ’90s that “alpha” began to be applied to humans, specifically to “manly” men who excelled in business. But the most pivotal recorded moment in (pop?) culture is perhaps the 1999 run-up to the American presidential election – incidentally the same year Mech denounced the alpha wolves concept. According to journalist Jesse Singal of New York magazine, the word entered the public consciousness on a mass scale that year when a Time magazine article published an opinion held by Naomi Wolf, then an advisor to presidential candidate Al Gore. The article describes Wolf as having “argued internally that Gore is a ‘Beta male’ who needs to take on the ‘Alpha male’ in the Oval Office before the public will see him as the top dog.” Naomi Wolf herself, for context, was a prominent figure in the third wave of the feminist movement, with publications like The Beauty Myth in 1991. But from around 2014, journalists began describing her reporting on ISIS beheadings, the Western African Ebola virus epidemic, and Edward Snowden as conspiratorial and containing misinformation, and in 2021 her Twitter (…okay, “formerly-known-as-Twitter”) account was suspended for posting anti-vaccine misinformation. Her Wikipedia page now includes the label: conspiracy theorist.

Singal also credits Neil Strauss’s 2005 book on pickup artistry for popularizing “alpha male” and cementing its aspirational tone as a status, but I think the pattern is clear: a Frankenstein mish-mash of an outdated scientific concept (literally revived from the dead, if you consider how the term was dying out in wolf research) and some vague sense of an aspirational male figure that encapsulated the “cool” of the era entered the lexicon, carrying the prestige of a “science word” (not entirely untrue, but leaving out the many big caveats mentioned above). And once something becomes culture, it is hard to change – even though culture, if you think about it, has been in constant flux throughout the history of Homo sapiens. I’m not saying all cultures are bad; certainly not: they are collective behaviors that have adapted throughout history. However, we often use “well, that’s the culture” to defend practices even after we, as a society, have gained the means and the knowledge to see how wrong or even harmful some of them can be.

The correction status of alpha males (?) in other species

But wait, did you notice how this conversation about dominant status eventually became specifically about dominant “male” status? Where in the world did our social image of the alpha male even come from? Ultimately, it seems we didn’t want to let go of this idea of the almighty dominant male. Even today, if you Google “myth of alpha male”, you can find Reddit threads with comments that “acknowledge” the concept is outdated and untrue in wolves, but then insist that people are ignoring the male dominance found in great apes. Sure, male dominance CAN be a thing in great apes, like gorilla silverbacks, I guess (though note they get their own fancy title), but the implication seems to be that “wolves don’t matter because wildlife closer to humans shows alpha males, so we human males should have an alpha nature too, errrr”. But what if the underlying assumption about the domineering male in our related species… does not hold as well as you think?

So let’s go back to our closely related species, the chimpanzee, and see if the assertion from the ’80s holds up. Long story short: once again, as with the wolves, the scientific reality turned out to be more complex than the earlier rendition of it. Chimps are social creatures, like wolves and humans, and, indeed, there is often an alpha male in a group with mating privileges. But dominating other males through power and bloodshed turned out not to be the only way to achieve top status – one can groom one’s way to the top. A 2009 study found an interesting correlation between different males and the “styles” they used to achieve their status. Essentially, smaller chimps with perhaps less intimidation power compensated by grooming other members more frequently and more equally. This also speaks to the complex nature of alpha status: alphas are judged by the other members of the group – in effect a popularity contest rather than a pure dictatorship. So while the alpha male is a thing in chimps, he does not have to be the purely aggressive type we typically imagine, and stereotypical “beta moves” might well be his strategic winning move.

Let’s also interrogate the other half of the phrase: does it even have to be alpha “males”? Our other equally close relatives, bonobos, will tell you otherwise. They are in fact often described as a matriarchal society, being frequently led in the wild by an experienced senior female (or females). In such an environment, a rowdy and aggressive male who gets too excited by the presence of a fertile female can get his butt kicked – or, in the extreme case, his toe bitten off by the experienced females who guard such a young female. This was the case for a group of bonobos in the Wamba forest in the Democratic Republic of Congo, and the male’s social position in the group plummeted afterwards. While a toe-biting level of fight-back is unusual, Dr. Tokuyama notes that “Being hated by females … is a big matter for male bonobos,” as the alpha-male attitude of unwanted and violent sexual provocation is often met with strong resistance from the females, who band together to fend off such behavior. As a Homo sapiens, I can’t say that these eye-for-an-eye tactics, lending themselves to violence, are ideal, but it is interesting to see an entire species’ dynamic in which the male aggression that evokes “alphaness” is arguably seen as reckless, and is met with a strong, resounding: NO.

Can we finally update our alpha-male myth?

During my teenage years, I almost got the impression that the alpha/beta categorization was increasingly becoming… cringe – a hype, a target of satire that was no longer cool after oversaturating internet lingo. But the modern narrative around the manosphere, while not mainstream (…I hope), hints otherwise. For some people, the very definition of masculinity somehow seems to be becoming more aggressive, dominating, and hierarchical. While such views may have always existed to some degree, today’s highly visual trends seeping into youth culture are perhaps accelerating the issue in a possibly dangerous way. Perhaps “alpha male” is too catchy, too photogenic, too trendy at this point to go out of fashion overnight (and in fact, during my research, I found it immortalized and perpetuated in courses, coaching programmes, and AI characters!). And you know, as a story archetype (and possibly some people’s… let’s say “romantic type”), I can see the appeal – but maybe we can leave that to the realm of Wattpad’s Twilight spin-offs. Still, I find something inherently sad about reducing the complex social behaviors and multidimensional personalities we have as REAL individuals to a simple slogan and a law-of-the-jungle mindset, all with an undertone of violence and a dog-eat-dog world view.

A simple slogan perpetuates a simple view of the world; an easy pill to swallow compared to the mentally demanding task of critically assessing social constructs. After all, we are all facing historic levels of exhaustion and work demands. However, the next time a trendy catchphrase from a view “supported by (evolutionary) biology” creeps into your feed, let us ask ourselves: what complexity are we removing, and at what cost? Constantly refraining from critical reassessment of the culture around us can quickly spiral into true subordination of the mind, ripe for exploitation (…thus very un-alpha, if you ask me). So let us practice our critical thinking and be wary of narratives that sound too… black-and-white. Maybe science can help you update and think more flexibly, because hey, science is ultimately unafraid to evolve and update, and so can we.


War on Paracetamol

Author1: Isha Harris (Co-President)

Paracetamol doesn’t get nearly enough credit as a wonder drug. While not as acutely lifesaving as penicillin, the quality of life improvement multiplied by the billions of people who use it means that paracetamol offers a pretty insane contribution to human wellbeing.

At any hint of a headache, I pop a couple pills, and am sorted out in 20 minutes. This saves me a day of pain, and the accompanying physiological stress – the blood pressure spikes, heart rate increases, and general bodily strain that prolonged pain can cause. It’s possible I go overboard with the paracetamol: before an exam, I usually take a few just in case a headache strikes. There’s probably a <1% chance of this happening, but given the huge stakes of remaining headache-free for the exam, I figure it’s worth it. I’ve also carefully optimised my coffee regimen, balancing the optimal buzz with avoiding bathroom breaks. So I arrive at every exam drugged up, ready to lock in. Maybe it’s just the placebo effect of feeling like I’m doping, but if it works it works.
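For what it’s worth, that exam-day habit can be framed as a rough expected-cost comparison. Below is a minimal sketch in LaTeX notation; the probability and cost symbols are purely illustrative assumptions on my part, not measured values from any study.

\[
\mathbb{E}[\text{cost} \mid \text{take}] \approx C_{\text{dose}} \approx 0,
\qquad
\mathbb{E}[\text{cost} \mid \text{skip}] \approx p_{\text{headache}} \cdot C_{\text{ruined exam}} \approx 0.01 \times (\text{very large}).
\]

Even with a tiny \(p_{\text{headache}}\), a large enough \(C_{\text{ruined exam}}\) makes the prophylactic dose the lower-expected-cost option – provided \(C_{\text{dose}}\) really is negligible, which is what the safety data reviewed below are meant to check.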

This habit has been received extremely badly by friends and peers. Most people have a much higher threshold for taking paracetamol than me. They gasp at my willingness to take it for ‘minor’ discomfort, and if I suggest they do the same, I’m met with various justifications: toxicity, tolerance, making the headache worse. Or the classic ‘just drink water’, as if hydration and medication are mutually exclusive. Instead of resolving their discomfort quickly and safely, they’ll endure hours of decreased productivity or outright misery.

I think this is quite bizarre, and have always just assumed they were wrong and continued to sing paracetamol’s praises. But this is admittedly quite vibes-based of me, and as a good empiricist, I figured it was time to look into the data before I continue to assert that I’m right. Here’s what I found.

On paracetamol toxicity:

  1. For patients without prior health risks or sensitivities, paracetamol causes few to no side effects at recommended doses. The immediate effects of a single dose are slight. For example:
    • 4 mmHg BP increase in already hypertensive patients. Ref
    • ALT (a liver enzyme) levels rise slightly, but this is comparable to the effect of exercise. Ref
  2. Prolonged, daily use at maximum dosage *might* pose risks. Long-term use has been linked to possible increases in blood pressure and cardiovascular events, though findings are inconsistent. For example:
    • Using paracetamol for more than 22 days per month was associated with a relative risk of cardiovascular events of 1.35 in smokers, but showed no increased risk in non-smokers. Ref
    • Some studies suggest a potential association with cancers like kidney and blood, but again, evidence is limited.
  3. Medication overuse headache, or ‘rebound headache’, is a genuine risk for very frequent users. With time, regular overuse can lower your baseline pain threshold, leading to persistent, often severe headaches that don’t respond well to analgesics. It can be seriously disabling. But in the case of paracetamol and ibuprofen, MOH typically only develops after taking it on 15 or more days per month for months or years. Significantly higher than the occasional use I describe.
  4. Paracetamol is safer than other painkillers. Ibuprofen, while still extremely safe, has higher risks of stomach irritation and other adverse effects. Ref
  5. Overdosing is very dangerous. Paracetamol has a narrow therapeutic window, meaning the difference between an effective dose and a toxic one is small. Excessive intake can cause severe liver damage. Ref

Some other common myths:

“It interferes with your fever, which we’ve evolved for a reason.”

  • The data suggests paracetamol might only slightly prolong the duration of an illness (a few hours), if at all. Ref

“You’ll build a tolerance, and it won’t work anymore.”

  • I couldn’t find any studies at all that suggest paracetamol tolerance.
  • Paracetamol works via COX enzyme inhibition, not receptors like opioids or caffeine, so tolerance couldn’t develop by the same mechanisms anyway.

“Pain is natural, and good for you! It’s better to let your body build resilience.”

  • While much is said about the risks of taking paracetamol, few people talk about the cost of untreated pain.
  • Pain isn’t just unpleasant – it’s physiologically damaging. Ref It triggers the stress response, engaging the sympathetic nervous system and releasing adrenaline, which raises your heart rate and blood pressure. And it makes us miserable – mental state is a huge, and overlooked, predictor of human health.

In conclusion, paracetamol is incredibly safe when used correctly. Occasional, moderate use – like my once-a-fortnight headache relief – is nowhere near the thresholds associated with risk.

Purity culture

I think that the aversion to paracetamol is a symptom of modern purity culture. There’s a growing tendency to glorify ‘natural living’, and to believe that struggling through life without help from modernity is something we should strive for. I disagree – enduring pain unnecessarily doesn’t make you virtuous; it’s just bad for you.

There are plenty of other examples.

  • Reluctance to use epidurals during childbirth. And the rise of home births. Epidurals are safe; home births are not. But people have got it the wrong way round, because they assume natural = good.
  • The belief that washing your hair less is good for it. I too was taken in by this as a teenager, enduring greasy hair and being miserable for days. But one day I remembered I have free will and didn’t actually have to live like this. And I have seen no difference in my hair whatsoever.
  • The ChatGPT backlash. Camfess is currently embroiled in AI debate, with Cantabs coming up with all kinds of bizarre reasons to be against it (water/energy use, Big Tech and capitalism is bad, sanctity of art, weird claims about training data being exploitative).

The obsession with preserving ‘sanctity’ is maddening. Clinging to tradition for its own sake; suffering through inefficiency for strange, abstract reasons of nobility. I hear this depressingly often from my fellow medical students, who claim that a future of AI in medicine threatens the sanctity of the patient-doctor interaction. But if AI can deliver zero wait times, more accurate diagnoses, and better outcomes (as the evidence suggests it can), doctors are Hippocratically obligated to endorse its rollout.

I have a hunch that this purity culture is a legacy of religion, which has a habit of resisting perfectly benign pleasures, like masturbation, for no reason. A lot of people around me are turning to Buddhism (Ref), whose whole shtick, as far as I can tell, is arguably the endurance of suffering. Each to their own, but it doesn’t seem like a very pleasant life, or really that necessary.

Humans have always resisted change, clinging to the familiar even when it doesn’t serve them. It’s why progress, whether in technology or social norms, is so often met with opposition. This is even true amongst many progressives, who are bizarrely circling back to conservatism on many fronts. The vast majority of the AI luddites I have encountered are leftists.

It’s such an exciting time to be alive. Technology and medicine make our lives easier, freeing up time and energy for productivity – or simply pleasure. So embrace it! Life is for living, not enduring. This means using the tools available to us, and supporting innovation to make even more.

The moral of the story: don’t lose an entire day to a headache. Pop that paracetamol.


  1. This article was originally posted on the Co-President’s personal blog and adapted for publication here for CUSAP. ↩︎

Plague doctors were onto something?? (albeit for the wrong reason)

Author: Maya Lopez (Blog Chief Editor)

On June 13th, 1645, George Rae was appointed as the second plague doctor in Edinburgh, following the first doctor, John Paulitious, who had died of, well, plague. While plague was already an endemic disease in the 17th-century UK, this outbreak was one of the worst. It was the 11th major outbreak in Scotland; over in London, this period of plague would later culminate in what is known as the Great Plague of London (the last outbreak of this scale – hence the name, rather than it having the highest death toll of all iterations). With death tolls rising in the city of Edinburgh (ultimately reaching the thousands by the end of this outbreak), it was not particularly surprising that the doctors themselves would die from contracting the plague. Such an (increasingly) high-risk job naturally saw its pay rise, culminating in a whopping 100 pounds Scots a month by the time Dr. Rae was appointed. Dr. Rae, however, survived his term, and so his promised salary was paid out only slowly, after negotiation, over the decade following the end of the epidemic. This is not to say that the city council provided a generous pension for his civil service; rather, the council simply did not have the cash to pay him on the spot because, well, they hadn’t expected him to come out of the epidemic alive! (It is believed Dr. Rae never received his full share in the end.) Is this to say that he was just a lucky soul with a super immune system? When I heard this fascinating tale of the man who once walked the narrow streets of Mary King’s Close, Edinburgh, I was gripped by the question of how he survived a disease where, with bubonic plague, you had roughly a 50:50 chance of survival, and with pneumonic plague, well… it was nearly always lethal with the treatment options available at the time. To me, this suggested he evaded infection itself – but how? He was actively going out of his way to inspect the sick, and those ultra-narrow, multi-storeyed alley houses are not what I would call well-ventilated environments. His most likely secret (though of course he may simply have had excellent health and a strong immune system) was none other than the iconic symbol of the plague doctor – the outfit.

How they thought you could catch the plague in the 17th century

Let’s take a step back into the 17th-century body of knowledge about plague. By this point, it was already an endemic disease with centuries of repeated outbreaks, so it was not a completely foreign disease in Europe. While the Renaissance and Enlightenment were by this time slowly recovering from the knowledge lost and delayed throughout the Middle Ages in Europe, a lot of medical knowledge was still based mostly on classical antiquity and medieval sources, which naturally framed how people perceived the mechanism of the plague. Plague was thought to spread via miasma – an abandoned medical theory in which “poisonous air” (often foul-smelling) carries the disease. This theory was deeply rooted throughout the Middle Ages and was the predominant explanation for outbreaks of various contagious diseases (like cholera, chlamydia, or the Black Death) prior to the advent of germ theory. For plague specifically, the miasma theory was further combined (?) with astrology in 14th-century France to elaborate on its mechanism: in 1345, a conjunction of “hot planets” (apparently Mars, Saturn, and Jupiter… don’t ask me why) took place in the zodiac sign of Aquarius (a wet sign!… whatever that means). This supposedly caused unnaturally hot and moist air to blow across Asia toward Europe, leading to the catastrophic Black Death. While I’m not sure whether such a cosmos-level mechanism was described for EVERY plague outbreak, the idea of linking it to something bad arriving in pestilent air was the general view of how the disease came to be, and this naturally shaped how prevention was approached.

When it comes to how people thought the plague manifested in our bodies, the explanation was often based on humorism. This is yet another abandoned medical system, originating in ancient Greece and upheld throughout Europe and the Middle East almost continuously for 2,000 years, until, again, cellular pathology explained things otherwise. It is a fairly complex system (and I am NOT going to explain the full details today), but essentially the plague, like many diseases, was thought to be the bodily result of imbalances in the four humors that constituted our bodies. In particular, doctors noted that bubonic plague results in bubo formation (the stereotypical pus-filled swellings), especially around the groin, armpits, and neck, and saw this as evidence of the body attempting to expel humors from the nearest major organs. This led to historical treatments focused on “expelling” these bad humors through bloodletting, or diets and lifestyle coaching meant to rebalance the humors (like cold baths + avoiding “hot foods” such as garlic and onions (???), apparently). It is also said that some doctors (and religious services?) provided additional care for a fee, which might include potions and pastes, though as far as I can tell, by the 17th century the more “out of the box” remedies like the “Vicary Method” (look it up at your own discretion, but it essentially involves somehow transferring the disease to a chicken in a rather graphic way, until the person OR the chicken died) seem to have fallen out of popularity. However, when these measures weren’t enough and bodies were piling up (unfortunately often the case in outbreaks), the effort generally focused on prevention rather than treatment. Traditional approaches included household-level quarantine, routine searches and removal of the deceased by council-appointed services, the smoking of “sweet-smelling” herbs to combat the evil scent, bans on public gatherings, and the killing of cats and dogs (which, as we will learn, may not only have been horrible but may have actively worsened the situation).

How to catch a plague (according to science)

But okay, what REALLY causes the plague, and what do we know about this disease? You might have some vague idea that it has something to do with rats, which is not completely wrong, but the real mechanism is essentially a vector-borne, blood-transmitted disease – pathology lingo for a germ-caused illness spread through blood by a biting vector. Blood? Well, not necessarily just human blood, so let me draw you a picture, as I heard it on one of my favorite podcasts. One hungry flea jumps onto a rat for a blood meal. But oh no, this rat has Yersinia pestis (the real culprit bacterium behind the whole massacre) in it! So the bacterium gets into the flea and multiplies in its tiny stomach. Within 3-9 days, this poor little flea, now hungry again but super queasy from the bacteria overflowing in its tummy, tries to take another blood meal from a new rat it has landed on and ends up throwing up – rat blood and bacteria – now in quantities of 11,000-24,000 Y. pestis. Once back in a mammal, the bacterium enters a different phase of its life cycle: it moves into the lymphatic system and multiplies until the infection eventually spreads to the bloodstream, and on to the liver, spleen, and other organs. The bacterium can infect over 200 species, but the primary hosts of its flea vector, such as Rattus rattus (the black rat), tend to have mild resistance. This may allow for asymptomatic carriers (i.e., the immune system keeps bacterial replication and symptoms at bay), and with rats’ relatively high population turnover, natural infections seem to be less of a problem for them. (And see? This is yet another reason why we should’ve kept the cats around to keep the rats at bay!) However, when the infection reaches humans, the story is different.

In Homo sapiens, the disease can manifest (depending in part on which type you contract) in three ways: bubonic, septicemic, and pneumonic. In bubonic plague, following an incubation period of 1-7 days, the infection spreads to the lymph nodes, leading to the infamous buboes – the swellings we discussed earlier that doctors observed, essentially incubators full of bacteria and pus. (And yes, this is the form most people probably imagine when they picture a plague patient.) With this type, you actually had roughly a 30-60% chance of survival despite the horrendous visuals (more on this later). These patients often also experience other symptoms like fever, chills, head and body aches, vomiting, and nausea. Septicemic plague is the version where the bacteria (say, those that overflow from the swollen lymph nodes, or from a flea bite directly into the bloodstream) enter the blood, resulting in sepsis. Like most sepsis, left untreated it is almost certainly lethal, with a mortality of 80-90%. At this stage, this – as well as the buboes themselves – can result in localized necrosis, where body tissues, usually in the extremities like the fingers, feet, and nose, die locally and turn black (hence the name “Black Death”). This is nasty enough, but the scariest variation is probably the pneumonic plague, which, unlike bubonic plague, does not form the characteristic swellings. Fundamentally, to contract the two earlier variants, infected blood needs to get into you, either via a flea bite or through extensive contact with buboes. Pneumonic plague, however, can also be contracted as an airborne disease. The infection takes place in the lungs, producing infectious respiratory droplets that can be transmitted directly from human to human. Furthermore, while pneumonic plague patients are said to be most infectious at the end stage of their symptoms, the incubation period is really short – around 24 hours – and without modern medical intervention (i.e., antibiotics!), the mortality is essentially 100%.

Time to call the plague doctor in their OG hazmat suit

So let’s say you’re a poor soul who, after hearing this story, is sent back in time to the 17th century. You notice the early symptoms of chills and fever, and the buboes are starting to form (which, according to some horrific accounts, even gurgled!). Time to call the doctor – but if they don’t know the actual cause and have no antibiotics at hand, what CAN they do for you? Besides, it’s not as if you need a diagnosis when it’s pretty clear what you’ve contracted, and your chances of dying at this rate are high. As described in the first section, what doctors could do to effectively treat an individual was genuinely limited; hence plague doctors were sometimes seen as almost synonymous with heralds of death, because by the time they came around, there was a good chance you would be declared too far gone and left waiting to die. However, for the neighbors and for public record-keeping, it was still a useful service for you to be identified and your house marked with a white flag signalling that the household had succumbed to the plague. In other words, while plague doctors were called “doctors”, they functioned more like public health workers (not surprising in this “pre-med school era”, when the credentials behind the beaked mask varied widely). While you suffer with fever, you hear the lucky news that Dr. Rae may just be able to offer a treatment (given that it appears to be bubonic plague), aside from all the humor-restoring bloodletting: lancing the buboes. This lets the “poison” run out, after which the cleared wound is cauterized shut, sealing and disinfecting it. It was a high-risk treatment in itself, but you manage to survive.

But then you start to wonder: this guy literally just let the biohazard spill out everywhere, so how does he manage to survive facing patient after patient? Despite all my debunking of plague treatment tactics in the previous section, this is where the plague doctors, especially their attire, might have been onto something. Of the whole outfit, the mask may be the most iconic piece, but it is also the one whose historical origin is most uncertain. However, if it was worn as depicted in mid-1600s drawings, the crow-like beak extending far from the face was filled with “sweet-smelling herbs” intended to fight off the “bad air”. Of course, this doesn’t quite work as they presumed, given that miasma theory was not true. A mask of this sort may have been better than no mask, simply as a physical filter, but honestly, a herb-based filtering system is probably not enough to catch the bacteria in aerosol droplets coming from pneumonic plague patients (i.e., NOT the same standard as modern respirators and clinical masks). The cane used to inspect you without direct touching may also have given Rae a social-distancing tool to “keep people away” (presumably other sick-ish people in the streets… the ethics of that are also dubious, but it was tough times, I guess?). But the real deal is arguably the REST of the garment. In fact, Dr. Rae may have been pretty up to date in his PPE game, given that the first description fully resembling what we think of as the plague doctor costume shows up in the writing of Charles de Lorme, physician to King Louis XIII of France, during the 1619 plague outbreak in Paris. De Lorme announced his development of a full outfit of Moroccan goat leather from head to toe, including boots, breeches, a long coat, hat, and gloves. The garment was infused with herbs, just like the mask (because, of course, miasmas!). Whether full credit for this now-iconic costume should go to Charles de Lorme seems to be a subject of debate. However, the leathery suit did one thing right: it prevented flea bites pretty well. So long as you were extra careful about how you took off this OG PPE (and didn’t breathe in droplets from a pneumonic plague patient), you had pretty functional protection at hand.

A broken clock is right twice a day – nothing more, nothing less –

So it just so happens that Dr. Rae, unknowingly (though he may have had plenty of faith in his sweet herbs and leather suit), was geared up to protect himself from the actual culprit behind the plague. Naturally, I found this an emblematic tale highlighting the importance of the correctness of both the supporting facts and the logic of a theory, which is indeed a crux of modern science and academia. This may sound obvious, but it’s an important reminder for anyone who ends up inside a pseudoscientific line of knowledge (which could be any of us!): just because some specific outcome of a belief system happens to work, the supposed mechanism behind it is not automatically correct. Clearly, with germ theory falsifying miasma theory, the leather hazmat suit cannot be used as evidence that miasma theory is correct: it simply stops the fleas from biting. Conflating a partial truth with the correctness of the whole theory is perhaps a philosophical problem as well, given how easy it is, by human nature, to conflate what is happening with what ought to happen.

But this is also a lesson for pseudoscience-skeptical thinkers: just because something was established within, or mixed into, pseudoscientific rhetoric, the individual practices, claims, or results are not automatically entirely false. And this is a moment for us all to be honest with ourselves – have we previously dismissed practices or ideas purely because of the way they were presented? Of course, this is not to say that we should actively praise every little kernel of truth mixed into pseudoscientific rhetoric, which may end up being assigned undue credibility. Heck, mixing in kernels of truth is precisely a tactic that “sciencey” writers can employ. However, if we decide that everything is pseudoscientific based on when/who/where/the context rather than the content, isn’t that attitude itself in the very nature of pseudoscience, where we let our preexisting notions and biases determine the lens through which we view “truth”? So instead of praising individual kernels of truth, let’s acknowledge them for what they are – “that part is correct” – but in the same breath be able to say: “that doesn’t mean the rest is correct, because of X, or because it hasn’t been tested.” This is intentional communication that genuinely requires more effort, and if done badly it may still come across as dismissive debunking, which could spiral pseudoscience believers further into pseudoscience. Therefore, let us practice this fine-resolution distinction between science and pseudoscience and use it to PIVOT conversations, so that we can invite everyone into a factual exploration driven by intellectual curiosity (instead of saying something like “medieval doctors had no clue about bacteria (indeed), so they did everything wrong” (see the issue here?)).

And after all, it is important to acknowledge the intention behind some pseudoscience and outdated knowledge. It does not always come from malicious intent, unlike some disinformation where one can – or DOES – actually know better, which deserves to be tackled with more fury than these plague scenarios. For example, miasma theory can in a broad sense still be seen as an attempt to conceptualize contagious disease – it was a protective, survival-driven instinct justified with the logic available at the time, and a rotting smell is probably a bad sign anyway. Humorism (which is bona fide pseudoscience by the standards of modern medicine) was also wrong and largely unscientific, but it was perhaps an attempt to think about nutrition and hygiene practices. So they were wrong, but people were trying to survive, and especially when modern scientific investigative methods and tools were unavailable, I find something beautiful in humanity still managing to land on “tried and true” methods containing some kernel of truth that did protect lives, alongside many missteps that cost lives. It is a history of H. sapiens grappling for truth in order to survive. Acknowledge, and then explore further: now we know much more about these pesky diseases, and we even know why some parts were wrong and why some parts were right! So keep thinking, keep asking, and keep talking, and don’t be too scared of correcting or being corrected; let us all appreciate our inner scientists and our desire to approach the truth. And of course, don’t forget to wear adequate PPE (maybe not a leather mask and suit in this day and age) when you are a bit under the weather and want to keep your friends safe. Let the fresh air in and ventilate – maybe not to clear out miasma, but to circulate air and keep virulent particles away. And as my favorite podcast always says, “Wash your hands; ya filthy animals!” 😉

Recommended Listen/Watch:

Amazing podcast series by two scientists: Erin and Erin.  This episode is a major source of the historical and biological information in this article:

https://thispodcastwillkillyou.com/2018/02/10/episode-5-plague-part-1-the-gmoat/

Want something shorter and more eye-catching? This video will probably give you a big appreciation of all the illnesses our ancestors were combating – we’re pretty lucky not to have to face them as much, or at all! (It can get visually horrific, so please watch with caution.)

https://www.youtube.com/watch?v=6WL5jy2Qa8I

Sound of Science

Author: Maya Lopez (Blog Chief Editor)

Some of you, watching a Sci-Fi film, may hear dialogue (perhaps especially the poorly written kind?) and feel like “yeah, that’s Sci-Fi jargon”. These terms may describe some far-future technology that you are certain doesn’t exist, or perhaps they’re just Latin portmanteaus that sound “science-y”. But what feeling do you get when you read this:

Introductory paragraph found in the entry of SCP-1158. Citation: “SCP-1158” by NotoriousMDG, from the SCP Wiki. Source: https://scpwiki.com/scp-1158. Licensed under CC-BY-SA.

It may read as a technical instruction, or a heavily descriptive excerpt from something like Wikipedia (except, wait a minute, this plant thing feeds off of… mammals?!). One might say it has an “academic tone”, and that is definitely what this writing was aiming for. However, the excerpt is not from an actual scientific source, but from a report of the SCP Foundation: “a fictional organization featured in stories created by contributors on the SCP Wiki, a wiki-based collaborative writing project.” This is ultimately shared-universe fiction, to which many writers submit tales ranging from strange to straight-up creepypasta, all in this scientific tone. These works are considered to “contain elements” of science fiction and often horror, but they are not pseudoscience because, well, they are published as fiction. Hence, this writing style is instead considered “quasi-scientific and academic”. But today I decided to overthink what it is about these writings that we register as “scientific”, in an attempt to learn how science itself is perceived. Furthermore, if fictional writing can sound scientific, what happens if someone masters this iconic “sound of science” with malicious intent – and what does a modern scientific report even sound like?

Science-y writing features as seen in SCP


A shared universe is essentially a writing setup in which multiple writers take a common world setting and explore different stories within it. I guess it’s kind of like one big fandom and all their fanfics, except they are all canon in a sense. The OG example, which regained popularity during the COVID-19 pandemic, is perhaps H. P. Lovecraft’s Cthulhu Mythos, or Lovecraftian horror (Lovecraft, incidentally, took an anti-occultism stance alongside Houdini back in the day). In terms of the lore, the SCP universe explores the “findings and activities” of a fictional international organization called the SCP Foundation. It is portrayed as a sort of private, international scientific research institution/secret society, functioning as the research body for anomalies while also acting as a paramilitary intelligence agency. Despite being a private initiative, the Foundation aims to protect the world by capturing and containing “anomalies” that defy the laws of nature, referred to as “SCP objects” or “SCPs”. In actuality, these SCPs often stem from some photo or concept found on the internet (anything from an empty IKEA floor to a coffee vending machine), and the writers employ their full Sci-Fi creativity to transform it into living creatures, artifacts, locations, abstract concepts, or incomprehensible entities with supernatural or unusual properties. Depending on said properties, an SCP could be dangerous to its surroundings or possibly the entire world; hence the motto of the Foundation: Secure, Contain, Protect.

Aside from the shared setting, SCP is exceptional in how extensive its writing guidance is for the “reports” to be submitted. Most of its articles are stand-alone entries in a report format called the “Special Containment Procedures” for a specific SCP object. Typically, SCP objects are assigned a unique ID followed by a code referred to as the “Object Class”. According to the lore, this classification system is supposed to reflect how difficult the object is to contain, but stylistically there is a similarity to taxonomic categorization systems (i.e., Linnaean taxonomy) or even the chemical hazard classifications found on SDS sheets (which are, for you non-lab dwellers, detailed handling procedures for individual chemicals and reagents). In the latter in particular, different hazards are not only identified through pictograms for various categories but can also carry further indications of danger level. In labs, we use these sheets to construct the overall risk assessments for any wet-lab (i.e., non-computational) experiment. The structural mirroring of “Special Containment Procedures” on scientific handling documents like SDS sheets thereby adds to the “sciencey” realism.

Additionally, these containment procedures often come with “Addenda” (which can be images, research data, interviews, history, or status updates). While you might expect the research data to be the bulk of the body of an actual scientific report, extensive “supplementary material/information” is nearly unavoidable in modern science. In fact, if you look at older research publications (for example, even the Nobel Prize-winning human iPSC paper from 2007), they often used “(data not shown)” for less important data that could not fit into the main figures. However, with increasingly online publication and data repositories, data has become more accessible and open, perhaps making these supplementaries more ubiquitous and extensive. Personally, the status updates in SCP addenda also remind me of software package manuals, such as those on GitHub. While this may not sound explicitly “natural science”-like, it is in fact quite common for a niche like bioinformatics to develop computational programs, which are maintained and updated on Git pages accompanying the main publication of the methods paper.

Finally, the key stylistic feature of SCP is perhaps not what is written but rather what isn’t. The entries use black redaction bars and “data expunged” markings to give readers the impression of sensitive data. While this is not common academic practice, censorship and redaction are not unheard of in disciplines touching on more nationally sensitive areas of science and technology (such as nuclear energy), especially in a historical context. Philosophically, masking information and data is arguably unhelpful in a purely academic sense, given that even negative results should, in theory, clarify what is not true in the pursuit of truth. However, it is also true that some information (especially information posing a security risk) may need to be withheld from individuals without a certain level of accreditation and security clearance. I think this writing style enhances the “authoritativeness” and secretive nature of the reports, adding a sense of immersion: as if these scientific reports were not only written but then further evaluated by “some higher-up” before publication, and perhaps even reassessed, changing what can and can’t be shared.

Down the rabbit hole of science-sounding writing outside of fiction


Of course, I’m not here to pick apart this shared-universe entertainment as pSEUdo-SCIenTiFIC and bad. In fact, it is very entertaining fiction, and I invite anyone who enjoys Twilight Zone-like tales to give it a try. However, understanding that a “sciencey” tone can be manufactured regardless of whether the content is rooted in reality does point to a possibly dangerous use of this language – especially for things that are not published as outright entertainment. Imagine if such a “sciencey” tone were part of a text intended to sell you something; is that just as benign as fiction?

Such was arguably the finding of a 2015 study that examined nearly 300 cosmetics ads appearing in notable magazines including Vogue. As summarized in a Scientific American podcast, the researchers ranked each ad claim on a scale ranging from acceptable to outright lie. Unfortunately, only 18% of the key claims in these ads could be “verified” as true by the scientists, and 23% were outright wrong. What fascinated me most, however, was that nearly half of the claims were “too vague to even classify”. Obviously, if it’s an outright lie, someone could sue, and the FDA (in the case of the USA) can take action. It is precisely this grey area that keeps such serious charges away. In theory, the Federal Trade Commission and other trade-related bodies could act if ads were misleading enough, but I found it fascinating how marketers can cleverly blend a science-y tone with a sales pitch to strategically blur the line between science-based fact and catchy copy.


In fact, this approach of mixing a “sciencey tone” (or some actual scientific fact) into a claim (or “story”) not backed by science seems to be a tactic that’s not limited to sales: it may be just as useful for propagating a desired narrative. One such example is what I found when looking through the articles of Children’s Health Defense. This is the organization we have talked about in the context of anti-vaccine messaging (we held a critical viewing event of their anti-vax film, filled with pseudoscientific rhetoric, and have since been signed up to their mailing list because… watching that film required email registration, and it lets us keep an eye on the next up-and-coming pseudoscience trend). It is “associated” with the now (in)famous RFK Jr. While many people are probably familiar with them as a source of mis- (or dis-?) information on vaccines – especially after the viral Bernie onesies comment – perhaps fewer are familiar with how… rigorous they are in their science mis-communication on public health as a whole.

On their website, they have a whole science section dedicated to their “science communication” articles. Honestly, going in, I was very skeptical of how they might approach science communication, based on the tactics of their anti-vax film. I expected more of an emotional roller coaster and a bombardment of all sorts of individual testimonies to rile up the audience’s worries and fears, making sure everyone has something to be concerned about. But I decided to read one of their articles anyway – one alleging the danger of babies facing unexpected “side effects” like diabetes from antibiotic exposure. The article was written by a frequent contributor to CHD – a doctor who is apparently an “American alternative medicine proponent, osteopathic physician, and Internet business personality… markets largely unproven dietary supplements and medical devices”. Okay, that’s an interesting start, but I was more surprised by the way the article was written.

The article, obviously, does not hide its rather scary main assertion from the get-go: that babies have a higher chance of diabetes DUE TO antibiotic exposure. However, it actually starts by sharing a very medically sound definition of things like Type 1 diabetes and autoimmune disease, hyperlinking to sources like medical webpages. It then essentially writes a short review/summary of a report published in Science, describing a mouse experiment published just a month prior (I mean, are they keeping up with new science publications just like PhD students? Dedication!). And what surprised me is that this research-summary section is… actually pretty decent, concisely capturing the gist of the findings: how antibiotic delivery in a certain prenatal time window results in microbiome disruption, leading to reduced pancreatic β-cell development. This portion is not only a robust summary of the scientific literature but inevitably builds a tone of authority and science-ness (even sharing the fungus’s Latin name!). They then similarly move on to discuss a pediatric study of diabetes and the microbiome in humans. However, it is the following section that gets slippery. It immediately runs off to focus on the “side effects of antibiotics” (without, for example, considering why antibiotics are carefully administered or needed in the first place, because… under what circumstances would THAT matter? And aren’t children just being bombarded by these toXiNS everywhere?? (…I am being sarcastic.)) I suppose this is fair, as that can be a legitimate focus, but they do something very tricky here. They list a number of possible side effects, mostly linking to relevant, properly peer-reviewed published reports to back the claim that antibiotics COULD be harmful. But look further down, and THEN they finally list “links to autism risk”, which, unlike the side effects listed earlier, is backed only by an article not from a peer-reviewed source but from some website called MERCOLA – one that requires email registration to read, and which… oh, it’s the website that the author runs. Honestly, the diabetes-risk assertion aside, this would be impressive craftsmanship if it were an SCP piece: blending sources of lesser certainty in among those more legitimate within the scientific consensus, while boosting their scientific tone and authority throughout.

Meanwhile, in real science…


So far, we have discussed the use of “science-sounding” language and presentation in both fiction and (unfortunately) non-fiction. But now let’s explore a more recent movement in real scientific writing. Most of us, at some point in our secondary education, were probably taught rules for academic or scientific writing, such as using the passive voice, the third person, etc. These are, in fact, among the specific stylistic guidelines that the SCP writing guide (alongside a strictly defined list of technical words to increase precision of communication) also encourages its writers to use. However, the passive voice, particularly in the modern science community, is often seen as overused, and our literary impression of it as cold, removed, and overly technical is a sentiment shared amongst academics too. In fact, as some university academic writing guides clarify, many major publications now ENCOURAGE writing in a more active voice. Why? Aside from the tonal impressions, well, because it’s much SIMPLER. Clear and concise writing (and I’m still really working on it… trust me) is strongly encouraged in modern science, not only for general readability but because it helps researchers across the world understand each other better. Another explanation of this trend I’ve heard is that scientists are reclaiming more of the authorship (both the credit and, arguably, the responsibility) of the claims we put out into the world. We are (and have been for a while, actually) moving toward a field where scientists use THEIR voice to communicate the science they actually did and how THEY interpret it, rather than the stereotypical “neutral and objective” reporting of “what was done and what was observed”. Ultimately, this may be more accurate: who is to say that the observations made are absolute, when reading academics should be free to (re-)interpret them based on their own expertise? The evidence is the evidence, but we are also encouraged to critique, reassess, and question whether we are convinced by it.

Finally, this change in the language of science is not limited to reports written for fellow academics, but extends to writing for the wider world. There is an increasing effort by researchers to use “plain English” to proactively communicate science in a way everyone outside the field can understand (i.e., much LESS jargon). This comes from growing interest in reading about science from outside the academic community, and in fact all the lead researchers of labs in my institute, for example, have such a plain English summary on their websites explaining what their research encompasses. So science, unsurprisingly, is once again evolving – now to be more accessible and more communicative than in the past. And the way we communicate science will probably continue to evolve, because ultimately science should strive to communicate clearly: it is the evidence and methods presented that matter and should be debated, not how they sound. So the next time you see an ad or some internet article that sounds so… “sciencey”, try not to jump to the conclusion that it IS science based on its tone, and make sure to look into the actual science being discussed. And if the “science” being explained sparks your curiosity, try to read around it; see whether there is consensus or debate within the academic community, and critically assess for yourself whether you are convinced by the evidence. …Well, unless it’s SCP-2521, also known as ●●|●●●●●|●●|●, in which case maybe don’t read (and definitely don’t write) about it 😉

Nessie – can scientific investigation ever deny a cryptid?

Author: Maya Lopez (Blog Chief Editor)

Walking up to the shores of Loch Ness, I saw a body of water that turned black just about 5 m from where I stood. This was not surprising given how deep the Loch is: its deepest point is around 230 m, and it holds the largest volume of fresh water of any lake in the British Isles. However, the exceptionally low visibility of the water is due not only to the depth but also to the high peat content washed in from the surrounding land. Noticeably, I didn’t (happen to) see any fish, which is consistent with the current scientific understanding of the Loch’s biodiversity: it is said to be low owing to low plankton counts (probably because the poor visibility interferes with photosynthesis). But all of that aside, it was clear that Fort Augustus, a small town of around 650 residents, was full of visitors that day, here to view the Loch not necessarily for its serenity and unique geography, but for its pop-culture icon – and, if they were lucky, to make the next sighting of the cryptid known across the world: the Loch Ness Monster.

The iconic long-necked Nessie from the ‘30s:

Loch Ness, believe it or not, only gained this kind of worldwide attention within the last century. The first publication to reach a wide audience was the testimonial of local hotel owners, the Mackay couple, on April 15, 1933, describing the sighting of a rolling “beast” or whale-like fish. Soon after, in August, another report was published describing the sighting of a beast by George Spicer while driving by the Loch. This description was more vivid: a monstrous creature, resembling a prehistoric dinosaur, crawling back into the water. While the myth of a monster in the Loch has existed since ancient times (as far back as the 500s, with some versions of the tale having St. Columba combat a monster from this loch), much of the oral tradition involved a vaguer concept of a monster. The articles from the ’30s (and the widely popular movie of the time, King Kong (1933), which featured a long-necked, dinosaur-like creature, “possibly inspiring” these early sightings) began to shape the audience’s imagination of what this monster might look like. However, what arguably cemented the monster’s visual identity is the best-known photograph of the cryptid: “the Surgeon’s photograph”. The iconic image that most of you probably imagine (a long-necked, shadowy figure in clear ripples of water) was published on April 20th, 1934, in a British newspaper as a submission from Robert Kenneth Wilson, a gynaecologist in London. When I popped into a souvenir shop, I found very few items with this reprint. While I got myself a pack of shortbread with the closest-resembling photo (who could resist!), THE photo was nowhere to be found. Was this just a copyright issue? …Or is that photo, say, already known to NOT be an actual Nessie?

Debunking and scientific investigations (?) of Nessie

For some reason, the lack of the iconic photo on merchandise bugged me more than the Nessie-less views of the Loch. Truth be told, I had zero expectations of seeing the cryptid itself, but I was hoping to indulge in the historical pop-culture references that have enchanted the world for nearly a century! Maybe I should have gone straight to the Loch Ness museum, but the drive this far north from the hotel left us with limited time. So I did some extra reading (online) and found out that my suspicion was right: that famous photo had, in fact, been debunked (alongside many other pieces of “evidence”). “The Surgeon’s photograph” was not taken by a surgeon at all. It was orchestrated by M. A. Wetherell, who had previously submitted “Nessie’s footprints” to his employer, the Daily Mail, only to have them rejected. He planned his revenge by crafting the cryptid’s head and neck out of wood putty and attaching it to a toy submarine. The resulting photo, later investigation revealed, had also been cropped to manipulate the impression of size. The picture was then handed to his doctor friend, who later sold the image to the Daily Mail, resulting in the famous publication. This (Wetherell’s) Nessie was apparently sunk and possibly still lies somewhere deep in the Loch today.

Many other pieces of evidence were later debunked upon reinvestigation. The Taylor film from 1938 was found, in 1961, to show a floating object rather than an animal. Similarly, Peter MacNab’s photograph was analyzed as either a misinterpretation of two consecutive waves forming a hump-like shadow, or an intentional hoax. While much of the investigative effort seems to have gone into dissecting each and every notable sighting, image, and piece of footage (and mostly refuting them), some genuine “scientific exploration” has also taken place. The first came within a year of the notable 1934 sighting, when Edward Mountain commissioned the first large-scale search with 20 men. With binoculars and cameras spread across the Loch, the investigation continued for 5 weeks, and yet no conclusive images were taken, with one film (now lost) possibly showing what appeared to be a grey seal. By the ’60s, the Loch Ness Phenomena Investigation Bureau (LNPIB; later shortened to LNIB), a UK-based society, had been established to study the Loch, with the aim of identifying the Loch Ness Monster or determining the causes of the sighting reports. They launched their first expedition with a whopping $20,000 grant from World Book Encyclopedia to fund a 2-year investigation running from May to October each year and covering about 80% of the loch surface. Despite a search effort involving thousands of members, including self-funded enthusiasts, and successive sonar investigations, once again nothing conclusive turned up. The sonar did, however, apparently pick up an “unidentified object” moving at 10 knots – too fast for a typical fish. Such “possible leads from scientific investigation” definitely fueled the enthusiasm, but advancing technology soon stacked more evidence against the existence of a prehistoric cryptid. In 2018, believe it or not, there was a DNA survey of the Loch searching for any “unusual species” that could point to an undiscovered cryptid. Surprisingly, no DNA from large fish or other large animals (like seals) was found either, with the most abundant fish DNA belonging to eels. This suggests two plausible (?) ideas: 1) there is a mega-eel (that has somehow never been caught), or 2) a large amount of eel DNA simply accumulated from many small eels. Either way, the lead researcher commented that “we can be fairly sure that there is probably not a giant scaly reptile swimming around in Loch Ness”.

Can Nessie invite you… to Biology?

These examples, while perhaps a killjoy for prehistoric-cryptid seekers, are still a fascinating illustration of biology and the science of ecosystems. Another of my favorite estimates suggests that for one of the most popular theories – the Loch Ness Monster as an (ancient) humongous reptile – to be true, at least 25 or so individuals would be needed to sustain the species. This “scientifically fleshed out” theory runs into several problems: 1) the Loch is estimated to sustain only about 17-24 tons of fish (thanks to the low plankton count we talked about!), which would limit the population of any large carnivorous animal weighing 200-300kg to roughly 10 individuals (a rough back-of-envelope version of this calculation is sketched below), and 2) if it were a reptilian species, it would need to surface to breathe, which should result in far more frequent sightings of the cryptid. Overall, the scientific consensus on reptiles, ancient dinosaur species, and the Loch’s ecosystem conflicts with the presence of such a cryptid. In fact, the more seriously you consider it, the more firmly the science says “highly unlikely”.
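To make that first problem concrete, here is a minimal back-of-envelope sketch (in Python) of how such a population limit can be estimated. The trophic-transfer fraction and the exact biomass figures are my assumptions, chosen purely for illustration, and are not the numbers behind the original estimate.

```python
# Back-of-envelope check of the "too few fish to feed Nessie" argument.
# Assumptions (mine, for illustration): predator standing biomass is
# roughly 10% of prey biomass (a common trophic rule of thumb), and each
# adult cryptid weighs 200-300 kg, as the estimate in the post suggests.

fish_biomass_kg = (17_000, 24_000)   # estimated sustainable fish stock in the Loch
trophic_fraction = 0.10              # assumed predator:prey biomass ratio
predator_mass_kg = (200, 300)        # assumed mass range of one large carnivore

for fish in fish_biomass_kg:
    supportable_predator_biomass = fish * trophic_fraction
    low = supportable_predator_biomass / max(predator_mass_kg)
    high = supportable_predator_biomass / min(predator_mass_kg)
    print(f"{fish / 1000:.0f} t of fish -> roughly {low:.0f}-{high:.0f} large predators")

# Typical output: roughly 6-12 individuals, well short of the ~25 usually
# quoted as a minimum viable population.
```

Even under fairly generous assumptions, the number comes out well below the 25-ish individuals a viable population would need.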

However, “highly unlikely” – the strongest denial science can offer about things that… don’t exist – does not stop enthusiasts from asking “what if”, even when it comes from peer-reviewed, consensus-driven conclusions. Which I suppose is understandable, and ultimately, they have the freedom to think so. One hobbyist investigator has claimed to have found more plankton than is typically assumed, arguing that this would open the possibility of larger life! (But did he account for the waves and wind that could make the plankton unevenly distributed?) The spirit of falsification is in itself perhaps scientific, but ignoring all the other evidence pointing the other way is not so much. Personally, I find the science-based theoretical calculations (how many large cryptids would be needed to sustain a species, or the possibility of large eels and of waves produced by the unique geography) much more fascinating. But then I started to wonder: why do we keep searching for THE THING when the evidence keeps stacking up against its existence? Ultimately, perhaps we (or some of us anyway) want to believe, and this want is so strong that we are driven back to the drawing board again and again. Or perhaps it’s the romanticism of the unknown itself – finding the thing the elite academics have been wrong about all this time and proving the underdog right. Or maybe it’s goodwill, and we don’t want to believe that people are lying intentionally. …Or maybe… it’s simply too costly to let this story die at this point.

Either way, it is safe to say that there is no consensus-backed scientific reason to suspect that a large, prehistoric-looking creature lives in Loch Ness. However, I have also come to terms with the fact that a lot of us are simply enjoying the story of it all, and perhaps even the back-and-forth of proving and disproving. Ultimately, this cryptid is arguably loved and needed by the town. Science, unfortunately, is often an expensive affair (especially the more resource-intensive, conclusive approaches – emptying out the Loch, for example), and strong public interest is always key to funding. If I were tasked with investigating something for which there is no scientific “reason to suspect” it exists, I personally couldn’t justify putting my money and labor into it. However, and this might be wishful thinking, such tales can be leveraged to spark more general scientific curiosity, perhaps enticing cryptid hunters into other biodiversity citizen-science projects before they even know it! Furthermore, science doesn’t always have to be the killjoy denying the fun; it can encourage people to challenge convention (especially when that convention is getting outdated). In that spirit, perhaps the high-tech 2023 90th Anniversary Search is justified (albeit with willing participants). I think I’ll wait to pull out my binoculars until National Geographic approves a photograph of Nessie (with a bounty of a million pounds), and until then entrust the enthusiasts to keep watch for us. Ultimately, while unintuitive for non-cryptid-believers like me, Nessie investigations might just prove valuable as a great public-engagement opportunity for science, fascinating a wider audience with the latest investigation methods (which are often otherwise too technical and niche). Who knows, the kid walking out of the souvenir shop with a Nessie plushy might just gain enough intrigue to investigate the Loch under the lens, and one day be the next scientist to find a much cooler Nessie in the waters of Loch Ness under the microscope 🙂

Fiscal Cakeism

Author: Andreas Kapounek (Treasurer and Sponsorships officer)

There are different ways to increase government revenue (a non-exhaustive list):

  1. Tax more
  2. Grow the economy so the same percentage of tax leads to more revenue
  3. Borrow more

Nevertheless, politicians often seem to neglect the economic realities posed by the trade-offs between these three streams.

Tax more? You run the risk of depressing economic growth (as you reduce the incentives to start or expand a business). This means you may end up with less revenue than under the previous, lower, tax burden. This becomes clear in the limit: in the extreme, a 100% income tax would remove any economic (!) incentive to pursue a job, likely leading to the loss of most jobs (a toy numeric version of this trade-off is sketched below).
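To make the limiting-case argument a little more tangible, here is a minimal Python sketch of the trade-off. The behavioural response (taxable activity shrinking linearly as the rate rises) is an assumption invented purely for illustration, not a calibrated model of any real economy.

```python
# Toy illustration of the tax-rate vs revenue trade-off (a stylised Laffer
# curve). The behavioural response below is invented purely for illustration:
# taxable activity is assumed to shrink linearly as the rate rises,
# vanishing entirely at a 100% tax rate.

def revenue(rate, full_activity=100.0):
    """Revenue collected at a given tax rate under the toy assumption."""
    activity = full_activity * (1.0 - rate)  # assumed behavioural response
    return rate * activity

for rate in (0.0, 0.25, 0.50, 0.75, 1.00):
    print(f"tax rate {rate:.0%}: revenue {revenue(rate):5.1f}")

# Revenue rises from 0 at a 0% rate, peaks in between (at 50% only because
# of the linear toy assumption), and falls back to 0 at 100% - the limiting
# case described above, where nobody has an incentive to work.
```

The exact shape (and the revenue-maximising rate) depends entirely on the assumed behavioural response; the only robust point is that revenue vanishes at both 0% and 100%.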

Borrow more? If markets come to believe that the government is less likely to repay this larger amount of debt than it was to repay what it owed previously, they will demand a larger risk premium to lend to it. In practice this means that the government will have to pay higher yields on its bonds, as perfectly illustrated by the recent rise in German “Bund” borrowing costs in response to the announcement that the German debt brake would be relaxed to increase defense spending.

Paradoxically, the interplay between taxation and borrowing is what stumped some recent British governments: If you promise to cut taxes and keep expenditure the same, people will assume that you must borrow the difference, driving up borrowing costs. At the same time, if you promise to keep taxes the same but spend more, people will equally assume that you will have to borrow the difference.
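The arithmetic behind both promises is just the budget identity (borrowing = spending − tax revenue). A trivial sketch, with entirely made-up figures, of why the two promises end in the same place:

```python
# The government budget identity: spending not covered by tax revenue must
# be borrowed. All figures are made up purely for illustration.

def borrowing_needed(spending, tax_revenue):
    """Deficit to be financed by new debt (negative would mean a surplus)."""
    return spending - tax_revenue

baseline   = borrowing_needed(spending=1000, tax_revenue=950)  # borrow 50
cut_taxes  = borrowing_needed(spending=1000, tax_revenue=900)  # cut taxes, same spending
spend_more = borrowing_needed(spending=1050, tax_revenue=950)  # same taxes, spend more

print(baseline, cut_taxes, spend_more)  # 50 100 100 - both promises mean extra borrowing
```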

So, having explored the interplay between taxation and growth, let us return to the remaining way to fund expenditure: government debt. As the famous (well, in Austria at least) Austrian chancellor Bruno Kreisky said: “And when someone asks me what I think about debt, I tell them what I always say: that a few billion more in debt gives me fewer sleepless nights than a few hundred thousand unemployed people!” I believe most people agree with that statement in principle!

So why can more borrowing be bad? Because higher yields on government bonds push up interest rates across the economy, borrowing can have adverse consequences for anyone in the country with debt – for example, it drives up the cost of mortgage repayments. Furthermore, it increases the cost of future government debt, making it harder (for example) to raise capital for urgent infrastructure repairs when needed. Or as John F. Kennedy (who might be more familiar to many readers of this blog than Bruno Kreisky) said, when advocating more spending in economically robust times: “the time to repair the roof is when the sun is shining.”

All sides of the political landscape seem to appreciate these concepts when convenient, but forget about them and selectively moralize when not. And at face value, there is no necessary connection between this piece of economics and any political philosophy. Believing that more borrowing without matching growth expectations, or unfunded tax cuts, drives up the cost of debt (bond yields) is not right-wing, left-wing, libertarian, capitalist, communist, or centrist: it is the best model of reality we currently have.

What is least understandable about these moralizing views on economics is that we seem to broadly agree on the goals we pursue in our economies: there is broad agreement that (other things being equal) more wealth is better than less wealth, better living standards are better than worse living standards, and a more equal income distribution is better than a less equal one. I would broadly call these shared goals “good stuff”. Now, of course, there can be fervent arguments about the relative importance of these goals, and this may be a legitimate driver of ideological differences. But surely we should oppose any policy that reduces all three and support any policy that improves all three.

There is strikingly less political agreement on the goals pursued through social policy (people legitimately debate whether a more progressive or a more conservative set of values is “better”). On contentious social issues such as reproductive rights, gun control, or school uniforms, people who differ in their political views often do not share a common set of policy goals.

But back to fiscal policy. What I would argue for is a more honest approach. It is perfectly legitimate to want to increase public services, and equally legitimate to want to cut taxes, if one has evidence that either measure may contribute to a more prosperous economy – but on this issue we really cannot have our cake and eat it.

The discussion above has largely focused on taxing and borrowing, but has broadly neglected the size of the economy. To stay with our ill-fitting cake metaphor – perhaps, if we just grow the cake, there will be some to have and some to eat? There is! But growing the economy is hard and getting harder.

As our advanced economies have matured through the second half of the 20th century into the 21st, the technological advances required to support high growth have become ever larger. While it may have been really fruitful to build the Channel Tunnel, digging another one now would add almost no utility to the European economy. Inventing the iPhone added great value to the economy, but “inventing” the iPhone 367 might only marginally grow GDP and living standards (a toy illustration of this diminishing return follows below). This is also why China is outgrowing the West but is not projected to catch up in per capita terms anytime soon – as it approaches Western levels, its growth rates are expected to flatten too. AI might change this logic – but for now, the iron logic of marginal returns is tightening its screws on economic progress.
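Here is a minimal sketch of that diminishing-returns intuition in Python. The assumption that each successive generation of a product adds value proportional to 1/n of the original breakthrough is mine, purely for illustration; the real decay profile will of course differ.

```python
# A minimal sketch of diminishing marginal returns. The assumption that each
# successive generation adds value proportional to 1/n of the original
# breakthrough is invented purely for illustration.

first_breakthrough_value = 100.0  # value added by the original invention (arbitrary units)

for generation in (1, 2, 10, 100, 367):
    marginal_gain = first_breakthrough_value / generation
    print(f"generation {generation:>3}: marginal gain ~{marginal_gain:6.2f}")

# The first invention adds 100 units; "generation 367" adds about 0.27.
# The same innovative effort keeps yielding less - the squeeze on growth
# described above.
```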

We should and can be honest about this – in some way, we can even be proud of this. Never in the history of humankind has there been a better time to be alive than today and it is objectively hard to improve the current situation quickly. 

There is hope: we are not in the dark about our progress. Economists have spent decades putting the concepts so amateurishly (and in an oversimplified way) articulated by me in this blog into formulas. These formulas do not quite have the predictive power of Newtonian physics (Einstein did away with that anyway!), and acting as if they were a perfect description of the world can be dangerous. While the mathematical modelling underlying classical economics is fantastically rigorous, many concepts have not been experimentally validated (but this topic may need to wait for another blog post). Furthermore, in no way should this blog post diminish alternatives to current economic theories, provided these alternative descriptions of the economy are model-based, yield testable predictions, and those predictions turn out to be true when tested. In fact, I would argue one should be agnostic to dogma and ideology, and focused solely on accurately describing reality, making sure to update beliefs when new evidence arises. Progress should be guided by the scientific method and controlled experiments where possible – if someone serves a piece of cake to 1000 participants, who all proceed to have it and eat it, I might have to find new metaphors. Nevertheless, most current models perform much better than random guessing at forecasting developments in our economy, and they are our best available tool to shape policy.

Therefore, we do have the means to make educated guesses about which policies may increase “good stuff” and which may decrease it. Maximizing “good stuff” should guide our fiscal, monetary, and economic policy – and nothing else.

Therefore, we should:

  1. Improve and verify our methods to forecast “good stuff” and create policies accordingly
  2. Apply these policies 
  3. Measure the effects and adjust policies accordingly