Friday, July 15, 2022

The Constitution of Medical Knowledge

Patients are often confused by seemingly conflicting findings of studies, or by equally good doctors recommending different treatment plans. How are we to decide? Medical science is a process created by a “reality-based community” to help decide such questions. Science isn’t just hypothesis-testing with empirical observation, although that is a big part of it. It is also the consensus of a community of experts. In 1660, natural philosophers including Christopher Wren and Robert Boyle formed The Royal Society as the first institution designed to collect, encourage, and evaluate scientific knowledge (Isaac Newton later served as its president). They published the first scientific journal in 1665 (which is still in publication). Were they ever wrong? Often! For example, for 250 years everyone accepted Newton’s theory that gravity was a force acting at a distance, until Einstein showed it was the curvature of spacetime. And that is the point – knowledge is fallible and not subject to the personal authority of any one person. But over time, the arc of the universe of scientific knowledge bends towards truth.

There have been many improvements to the system of medical science since the Scientific Revolution. The first peer-reviewed journal was published in 1731, but peer review as we now know it didn’t begin until the 1970s. The first controlled clinical trial occurred in 1747 (James Lind’s citrus for scurvy), but the rules for running double-blinded randomized clinical trials and progressive Phase 1-3 trials weren’t systematized until Austin Bradford Hill and Harry Gold did so in the post-WWII era. Statistical analysis became standard in medical publishing in the 1970s. Systematic reviews began in the late 1970s. Evidence-based medicine, as we know it today, has been taught in medical schools since the 1980s.

Jonathan Rauch, in “The Constitution of Knowledge: A Defense of Truth,” describes knowledge as a funnel. At the top are all the guesses, the hypotheses, that drive scientific investigation. These include (in order of increasing reliability):

5. Much of what is posted on any patient health forum every day: anecdotal “evidence” from patients; YouTube videos posted by Snuffy Myers, Mark Scholz, etc.; and lab studies (mouse or test-tube).

4. Observational/epidemiological studies of patients.

3. Retrospective case-control studies, and systematic reviews/meta-analyses of them; cohort studies with retrospectively specified variables.

2. Cohort studies (people followed from before disease occurrence) with prospectively specified variables (e.g., a Mendelian randomization study).

All of them are just hypothesis-generating. Most hypotheses are, and should be, wrong. Science depends on evaluating lots of hypotheses. There is no shame in guessing wrong; the only problems are when guessing stops and when one confuses a guess for a fact.

1. Large, well-done, and confirmed randomized clinical trials are at the bottom of the funnel; they are not just hypothesis-generating, they constitute truth in medical science. These categories were universally agreed upon after observing which kinds of studies are likely to have conflicting results, and which almost never do. All scientists accept these categories; “pseudoscience” occurs when people claim to be doing science but ignore them.

Some institutions regularly GRADE prostate cancer research (NCCN, AUA, ASTRO, ASCO, SUO, EAU, CUA, PCF, and others). Those institutional opinions (and not anyone’s personal opinion) are the standard-of-care. Until disproved, they constitute current medical truth. While even the best research doesn’t predict outcomes for any given individual, one is foolish to ignore our best estimate.

There is no science without consensus by experts - science is a social construct. One can argue that there are and always have been objective truths, but we can only know what is in some way perceivable by humans. Did the Earth always revolve around the sun? Of course. But it did not enter the realm of science until Copernicus hypothesized it (1543), and Tycho Brahe (1573), Galileo (1609), Johannes Kepler (1609), and Isaac Newton (1687) proved it and showed how. That’s when astronomy became a science. There is no science without hypothesis-testing and empirical observation.

Loss of Respect for Expertise

How do we know what is true? None of us has the time or the inclination to test everything for ourselves. We rely on trusted experts to tell us. Few doubt that the heart pumps blood to our lungs and other tissues, although few have ever seen a heart do that. We know that William Harvey discovered that fact in 1628, and it is now universally accepted as true and foundational to all of cardiology. Even fewer know how cardiac tissues cause the heart to beat, how arrhythmias are diagnosed, or how plaques can cause heart attacks. We rely on cardiologists to know all that, and within cardiology there are sub-specialties (e.g., heart transplant specialists, sports cardiology, electrophysiology, etc.). There are dozens of medical specialties, each with several sub-specialties. There are even specialists who cut across categories and ensure that the latest innovations become available to patients; this is called “translational medicine.” In this era of specialization, few know much outside of their specialty, and as patients, we must, at some point, rely on the experts for our knowledge about disease, diagnosis, and treatment.

Medical science became probabilistic in the 20th Century. All medical institutions agreed that statistics are the only way to reject hypotheses, judge superiority or inferiority, infer causality, and analyze and reduce errors. Statistics are difficult to understand and counterintuitive, even for many doctors. As sophisticated statistical techniques were adopted by the medical institutions and their publications, lay people, who did not have that arcane knowledge, were increasingly left out of the truth community.
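To give a concrete flavor of that reasoning, here is a minimal sketch in Python. The trial numbers are hypothetical, invented purely for illustration; it shows the kind of test that sits behind a claim that a treatment beat placebo:

```python
# A minimal sketch (hypothetical numbers) of the statistics behind a
# clinical-trial claim: did the treatment arm really do better than
# placebo, or could the difference be chance alone?
from scipy.stats import fisher_exact

# Hypothetical two-arm trial, 200 patients per arm
treated_responded, treated_not = 140, 60
placebo_responded, placebo_not = 110, 90

table = [[treated_responded, treated_not],
         [placebo_responded, placebo_not]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")

# A small p-value says the observed difference would be very unlikely
# if the treatment had no effect. It does NOT say the treatment will
# work for any particular individual.
```

Even this toy example shows why the reasoning is non-intuitive: the p-value is a statement about the experiment under the assumption of no effect, not the probability that the treatment works.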

The Dunning-Kruger Effect is a cognitive bias in which people with limited competence overestimate how much they know. In medicine, a little knowledge is a dangerous thing. When I started writing my novel, Thaw’s Hammer, about a killer virus, I thought I knew enough about the subject to write a credible novel. Four years later, I knew how much I didn’t know. I grew to admire the experts who had to understand the biochemistry of the replicative apparatus, the interactions with host cells, and the immune system. Viruses are the most numerous and diverse biological entities on Earth. Anyone who thinks they fully understand them is wrong. The experts differ from lay people in knowing that they don’t completely understand them. Still, an expert understands a lot more than any lay person who thinks he knows more. I know enough to reject any advice from a Jenny McCarthy or a Joe Rogan in favor of advice from the CDC.

Overconfidence in subjective assessments, when contrary to scientific consensus, is also influenced by alignment with political and religious social groups. The Dunning-Kruger Effect is especially strong on the issues of vaccination (particularly Covid-19 vaccination), genetically modified foods, and homeopathic medicines.

Fundamentalism in Medicine

Knowledge is progressive and cumulative. Newton said, “If I have seen further, it is by standing on the shoulders of giants.” Opposing this kind of humility are people who think, based on a few facts or “alternate facts,” that they have arrived at a truth hidden from the rest of us. What they are really doing is inhabiting what Rauch calls an “epistemic (knowledge) bubble.” They allow into their knowledge bubble only those data and persons that confirm their biases. They take studies out of context, fail to rigorously analyze studies they agree with, and find reasons to disqualify studies that contradict their preconceived notions. They reject the methods of analysis developed by the institutions they reject. They are usually smart and think they are fully capable of judging the data for themselves. This takes a certain kind of narcissism – as if the whole world is full of “sheeple” and only they know the real truth. They are also lazy – it would be too much work to learn and evaluate the whole body of knowledge.

Fundamentalism has been around in religion at least since the Protestant Reformation. But it emerges in all other areas of human knowledge – politics (as populism), law (as originalism/anti-stare decisis), and folk/Internet medicine. It is usually short-lived: the fundamentalists of one generation eventually give way to the acceptance of an orthodoxy and a hermeneutics for interpreting texts. Fundamentalism substitutes personal authority for institutional authority. Personal knowledge is acquired rapidly and doesn’t require input from others. Because personal ego is at stake, it excludes all information that doesn’t confirm it. Institutional knowledge, on the other hand, builds on the foundation of knowledge of the “truth community,” and includes conflicting data. The conflicting data create new hypotheses and the opportunity for knowledge to expand. If enough conflicting data accumulate, they may cause what Thomas Kuhn called a “paradigm shift.”

Google is a wonderful thing. Knowledge is potentially at our fingertips, but information out of context can mislead. Instead of knowledge, we can be left with information that only confirms what we think we know. Social media ideally expose us to all sides of any issue. But if we are not open to all sides, social media can only reinforce the knowledge bubble we have built around our pre-determined beliefs. Without challenges to what we think we know, there is no progress. 

Distrust of Institutions 

There has been growing distrust of institutions among lay people, sometimes with good reason. There were abuses like “p-hacking” (running many analyses and reporting only those that reach statistical significance) that fostered distrust. Until recently, publications did not require authors to be transparent about potential conflicts of interest. Often, negative findings were not published (the US government now requires all registered clinical trials to report their findings). Budget cutbacks at the NIH decreased funding for medicines and technologies that did not have profit potential. Mistakes and abuses were publicized in the media and over the Internet. But institutions are valuable not because they don’t make mistakes; they are valuable because they correct mistakes and abuses. Retractions and corrections are published. Researchers who lie are found out and excluded from future publication.
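To see why p-hacking misleads, here is a toy simulation in Python (my own illustration, not drawn from any real study). Both “arms” are sampled from the same distribution, so there is no real effect, yet testing twenty outcomes almost guarantees that one looks “significant”:

```python
# Toy demonstration of p-hacking: test enough outcomes on pure noise
# and something will cross the p < 0.05 threshold by chance.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_outcomes = 20  # e.g., 20 different endpoints measured in one study
significant = 0

for _ in range(n_outcomes):
    # Both "arms" come from the SAME distribution: there is no real effect.
    arm_a = rng.normal(size=100)
    arm_b = rng.normal(size=100)
    _, p = ttest_ind(arm_a, arm_b)
    if p < 0.05:
        significant += 1

print(f"{significant} of {n_outcomes} null comparisons reached p < 0.05")
```

By design, about one in twenty comparisons of pure noise will reach p < 0.05; reporting only that one, without mentioning the other nineteen, turns chance into a “finding.”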

The other threat to truth came from an unlikely source – conspiracy theorists. Before the Internet, they were just isolated “nut jobs.” But social media provided the means for them to find others with enough common beliefs to form a “non-truth” community. On the patient forum HealthUnlocked, I’ve seen several who point to a supposed Big Pharma/FDA conspiracy. Although they are still a minority, they can have outsize influence by dominating conversations, mixing truth with lies, purveying lies so outrageous that some believe there must be some truth to them, and flooding the conversation with so much bullshit that reasonable people despair of ever discerning the truth. On Facebook, Twitter, and YouTube, bad actors can stage a concerted campaign to “like” and “share” content they want to use as propaganda. They can “troll” serious posts to render the conversation harder to follow.

Because institutional knowledge was not readily comprehensible to laymen, and because distrust mounted as abuses were well-publicized, the Internet (Dr. Google) became a substitute for expertise. Laymen believed they understood the subjects as well as experts and their institutions, and they were able to find others on social media willing to tell them so. When biases are confirmed by media personalities, they become particularly pernicious. We tend to believe relatable people we know and like (from TV, videos, and podcasts) over strangers who author incomprehensible studies full of numbers and jargon we don’t understand. This cognitive error is called “the availability heuristic” – it’s why you may believe the claims of someone you know on an Internet forum over high-level statistical evidence. The danger of substituting personal knowledge for institutional knowledge in medicine became apparent with the anti-vaxxer movement. It had always been a fringe group, but in the US, a third of the population did not get vaccinated against Covid-19.

What can be done?

What can be done to restore faith in institutional truth? Rauch sees hope in the measures Facebook took after it came to light that bad actors from Russia had manipulated Facebook’s algorithms to change what Facebook members saw. Facebook changed its algorithms and created software to eliminate bots. It also labeled and demoted content of dubious veracity. It established an independent oversight board with transparent rules. The board reports to, and is financed by, independent trustees, who can remove its members if they act in bad faith. The board’s decisions are binding on Facebook and anyone who uses Facebook, and its decisions are published. It acts much like an independent court. The problem for a patient health forum like HealthUnlocked is that unless the oversight body is a panel of doctors, it cannot privilege the content of one post over another without risking lawsuits.

The most any patient forum can do is establish rules for civil discourse. I would suggest the following rules and guidelines for anyone posting in a patient forum:

(1) No ad hominem remarks. Ad hominems are remarks that insult the person. “You’re wrong about that and here’s why…” is entirely appropriate. “Jane, you ignorant slut!” is entirely inappropriate. Responses must speak to content, not the supposed intentions of the poster. If you don’t have anything good to say about a person, say nothing. This should eliminate trolling. Trolls thrive on attention and virtue-signaling, so don’t feed the trolls by responding in kind. Alert a moderator immediately. If you feel you have to make personal remarks, do it in private mail.

(2) Members cannot post dangerous or illegal content (e.g., a recipe for a known toxic substance or instructions on how to obtain it). They may post unproven or experimental therapies, and especially their own experience with them. Members are encouraged to identify experimental therapies as experimental. Hypotheses are entirely appropriate and encouraged.

(3) Avoid strawman arguments. A strawman argument replaces what the poster is actually saying with a distorted version, which is then refuted. If you are starting a reply with the words “So what you’re saying is…?” or “Then you must also believe that…,” you are probably setting up a strawman. The opposite is a steelman argument, where you restate what the poster said in its strongest form. It shows you are listening and want to resolve the issue.

(4) Avoid sarcasm. Sarcasm doesn’t work on the Internet. It usually works only when people can see your facial expressions and hear your tone of voice. There are no sarcasm emojis, and the original poster will probably believe you meant it seriously. Making fun of a person is just a form of ad hominem. Humor is fine, but not as a rhetorical technique.

(5) Be aware that consensus is rare. Patients may get a lot of conflicting advice or anecdotes, and that’s okay. Discuss with a doctor you trust.

(6) Don’t take it personally if someone disagrees with you. Consider the issue as dispassionately as you can. It’s not necessary to reach agreement, just to flesh out the issue from all sides.

(7) Caveat emptor! Assume that no one on a patient forum is a doctor, and that no one’s advice or personal experience is definitive. Anecdotes are not evidence. Check everything with your doctor. It is entirely appropriate to ask for source material for advice that goes beyond the standard-of-care, and to discuss those sources with your doctor. But remember that doctors may have little patience for sources that are not from peer-reviewed journals or that are low-level or low-quality evidence (see above).