Saturday, December 25, 2021

Toward a Unified Account of Finding the Truth: Quine and Bayes on Dogma Versus Good Authority

Scott Alexander recently posted a great account (and critique) of reasoning and communication, beginning with criticism of science communication. He then expands this into an argument about how a real Bayesian evaluation of truth claims requires us to sometimes reject "higher standards" of evidence that produce results in conflict with what we think we already know. In so doing he begins to unify everyday reasoning within a Bayesian framework. This is valuable, because honest rationalists have noticed gaps between what we consider high-quality evidence and what we actually use to update - not because we are hypocrites, but because there are other considerations when we really want to get the right answer.

To begin with a critique of simplistic "levels of evidence": even those of us most enthusiastic about peer review make almost every decision in life without it. You don't insist on a peer-reviewed journal when figuring out how to give yourself administrator rights on a PC you just got for free (a situation I just found myself in). Your decision process combines evaluations of the cost, time, and likely consequences of whatever you do, along with a judgment about which sources to trust - based on how specialized the subject matter is beyond daily experience, how fast the feedback arrives, and whether perverse incentives are in play. In this case, I just watched a YouTube video and it worked like a charm. It's a free computer I'm playing with, so I didn't much care if the attempt ruined it.

Taking all these things into account is Bayesian even if we aren't thinking explicitly about the Bayes equation. But it turns out the model we use in science is actually a special case of Bayesian reasoning - and even victims of dogmatism are using Bayesian reasoning. The second statement is much more controversial than the first; scroll down to that section if you like.
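To make that concrete, here is a minimal sketch in Python of the kind of informal update described above. All the numbers are invented for illustration - they are not measurements of anything:

    def bayes_update(prior, p_evidence_given_true, p_evidence_given_false):
        """Posterior probability a claim is true after a source endorses it."""
        numerator = p_evidence_given_true * prior
        evidence = numerator + p_evidence_given_false * (1 - prior)
        return numerator / evidence

    # Invented numbers: my prior that a random "get admin rights" fix works,
    # and how often a tutorial like this demos working vs. broken fixes.
    posterior = bayes_update(prior=0.5,
                             p_evidence_given_true=0.9,   # demos fixes that work
                             p_evidence_given_false=0.2)  # occasionally demos junk
    print(f"P(fix works | video shows it): {posterior:.2f}")  # ~0.82

    # The decision also weighs stakes: on a free computer, an ~0.8 chance of
    # success at near-zero cost is worth trying without peer review.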


Recasting Popperian Falsification in Bayesian Terms


Karl Popper's model of hypothesis testing holds that we can only falsify a hypothesis; we can never be sure it's true. Looking at Anscombe's quartet above, you can see that multiple very different data sets can produce the same summary statistics. Stopping after inadequate positive predictions may lead you to the wrong equation. Therefore, the only answer that provides certainty is to falsify a hypothesis.

There are two important aspects to this approach to finding out the truth. The first, and the more underemphasized, is its appreciation of human psychology. Stephen Toulmin noted that the way humans actually reason is to start with their premises and conclusion, and only then build a rhetorical bridge between them. If you're a rationalist, you make sure that your rhetorical bridge is an actual argument, not just verbal/emotional sleight of hand. The point is that even in Popperian science, we start with a conclusion. The difference is that we explicitly declare, in public, ahead of time, that we're not sure the conclusion is right (it's less than a thesis - a "hypo"-thesis, if you will). Then you test it.

And the way you test it is by creating an experience that will give you an unexpected result if it's wrong - that is, by trying to falsify it. The question is: does a hypothesis-supporting experiment (one that produces the results we expect) really give us zero information? No, but it usually gives us less information than a falsification.

A falsification usually carries more heavily weighted information - it "moves the needle" much more, because it's more likely to be surprising. Indeed, to equate heavier weighting with surprise is almost a tautology. So we can place Popperian hypothesis-testing in a Bayesian framework: instead of the false binary of "support equals zero information equals bad" and "falsification equals good," note that ideally an experiment tries to create the experience most likely to produce a surprise, and thereby provide information most efficiently.
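One way to formalize "heavier weighting equals surprise" is Shannon self-information. A quick sketch, with an illustrative 90/10 split that is mine, not from any experiment:

    import math

    def self_information_bits(p_outcome):
        """Shannon self-information: rarer outcomes carry more bits when observed."""
        return -math.log2(p_outcome)

    # Illustrative: suppose you expect the hypothesis-confirming result 90% of the time.
    p_confirm = 0.9
    p_falsify = 1 - p_confirm

    print(f"Confirmation:  {self_information_bits(p_confirm):.2f} bits")  # ~0.15 bits
    print(f"Falsification: {self_information_bits(p_falsify):.2f} bits")  # ~3.32 bits

    # Confirmation is not zero information - it just carries far less than the
    # surprising result. A well-designed experiment maximizes expected surprise.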


Finding Truth in the Real World - Where Other Humans Also Live

But people don't always do this. We live in a complicated world and have had to learn to weight truth claims based on something besides immediate experience. You almost certainly have not done an experiment on biological evolution, and have run few if any true retrodictive studies.[1] Yet you can't function without relying on such claims, unless you live in a cabin in the wilderness by yourself.


One thing we use in the real world is a web of beliefs, in the Quinean sense. Recast in Bayesian terms, no belief is an island: each belief in the web serves as part of the weighting for your prior. Most beliefs have many other beliefs bearing on them, updating the belief in question and holding its prediction in place. If you have reasonable confidence in those many other beliefs, then a counterclaim or weird observation just doesn't outweigh them. (I am not a statistician, but Elo ratings seem analogous, as sketched below. Just because the Packers had an off night and lost to the Detroit Lions doesn't mean you should bet against the Packers going forward - move your needle? Yes, but this is what it means for one loss to be a fluke.)[2] This is why, in the linked article, Scott describes how it is right to reject peer-reviewed homeopathy studies - they are improbable on their face, in large part because of our rich web of other beliefs about how the world works.
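Here is the Elo analogy made concrete, as a toy sketch with invented ratings. The update rule is the standard Elo formula; the point is how little one upset moves the number:

    def elo_expected(rating_a, rating_b):
        """Expected score (win probability) for A against B under Elo."""
        return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

    def elo_update(rating, expected, actual, k=20):
        """Standard Elo update: rating moves by K times the surprise."""
        return rating + k * (actual - expected)

    packers, lions = 1800, 1450              # invented ratings
    expected = elo_expected(packers, lions)  # ~0.88: heavy favorites
    packers_new = elo_update(packers, expected, actual=0)  # they lose anyway

    print(f"Packers: {packers} -> {packers_new:.0f}")  # ~1782: needle moved, barely

    # One fluke loss updates the rating only a little; it takes a run of losses
    # to overturn the accumulated evidence - just as one weird study shouldn't
    # overturn a well-supported web of beliefs.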

Another way we weight beliefs is by giving credence to authority. I think differing confidence in authorities (based partly on our innate cognitive/emotional makeup and partly on life experience) explains the majority of the major differences in people's beliefs about the world - more than their direct experience or reasoning ability does. The truth is that people often say "I believe X because an authority I trust said so." In fact, fellow rationalist, you often say that. And again, to function, you must. (Education is a formal example of this. You don't need to repeat every experiment back to Newton to be a physicist.) Yet during the pandemic, it has become painfully clear that people differ in which authorities they listen to.


Why Dogmatists Are Actually Still Bayesians: Good Authorities and Bad Authorities

I once had the privilege of giving a talk to the Sacramento Freethinkers, Atheists and Nonbelievers (SacFAN) about the function of beliefs. At one point I stated that to have good beliefs, you have to pick good authorities. An audience member asked me, "How do you define a good authority?" At the time I deflected the question with a joke, mostly out of discomfort at realizing I didn't have a ready answer.

First, let's define authority: a source of information (a person or institution) whose truth claims you weight more heavily than others', without first requiring evidence.

We all have our authorities. To function, we must. Sometimes they are formal (academic scientists); more often they are informal (someone you hike with who seems to know the trails and conditions in your part of the world.)

But there are many claimed authorities with poor justification for their beliefs, who promulgate false ones - and if we update based on what they tell us, we will be wrong too. A near-canonical current example for my likely readers would be a religious leader claiming that mRNA vaccines will kill you. But scientists can be wrong too, and not just because they haven't generated enough data to update their beliefs. Nobel Prize winners going off the rails are something of an institution by now, so how do we know to ignore Kary Mullis, or Linus Pauling's weirder moments? Not so scary, you say: partly the conflict with the web of beliefs, as noted before, and partly because they're speculating outside their expertise or beyond reproducible experiment. Fair enough.

Returning to the pandemic: the CDC said for at least the first month that masks didn't help. (Was this deliberate obfuscation to keep non-medical people from hoarding masks, or an error? Either way, that's what they communicated - and I'm not sure which would be worse.) If you still think, despite this, that the CDC is a better authority than a preacher, why?


Is Someone Who Calls Himself a Rationalist Claiming Dogmatists Are Rational?

In an article about people in middle America who refused the vaccine and got severe COVID, I was struck by the following statement (paraphrased, since I can't find the original): "We didn't think it was real, just like everybody else."

When you are surrounded by people who believe X, all of whom (along with you) admire a leader who believes X, and the only people telling you not-X are people everyone around you has called liars - then, in the absence of (thus far) immediately contradictory experience, you will continue to believe X. This was the case for COVID victims like those I paraphrased. Given the information they had, they were being good Quineans and good Bayesians.
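A worked example, with invented numbers, of why this is still Bayes: if your model of the outside experts is "liars who would deny X regardless of the truth," then their denials carry a likelihood ratio near 1, and the posterior barely moves.

    def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
        numerator = p_evidence_given_h * prior
        return numerator / (numerator + p_evidence_given_not_h * (1 - prior))

    # X = the belief your whole community vouches for; your prior is high.
    prior_x = 0.9

    # You've been told the experts are liars with an agenda, so you expect
    # them to deny X whether or not it's true:
    p_denial_if_x_true = 0.95
    p_denial_if_x_false = 0.95

    posterior = bayes_update(prior_x, p_denial_if_x_true, p_denial_if_x_false)
    print(f"P(X | experts deny X): {posterior:.2f}")  # 0.90 - no movement at all

    # The update machinery is working correctly; it's the model of the source
    # (the likelihoods) that was poisoned by the surrounding web of beliefs.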


Early Life Experience and Emotional Makeup Clearly Influence Our Weightings

You might be asking incredulously: is this guy seriously arguing that people following bad authorities are good Bayesians? And (assuming that's true), does that mean people are really in an inescapable hole if they follow the wrong authority, absent any profound contradictory evidence? The respective answers are technically yes, and sometimes no.

To the first point, you might object that many of these people certainly did have experts providing them better information, and that they incorrectly underweighted it, so they were NOT good Bayesians. But there's a problem: large inferential distance. Everyone around them has told them that the experts (Fauci, academics, etc.) are bad, lying, immoral people who are trying to harm them and should be ignored. With that information, and very little information about how to identify good authorities, then from their perspective, if they give credence to Fauci, they have no justification for not also listening to every crank who comes along. Similarly, it's hard to see how someone in North Korea should somehow just know that people in the West don't really have it out for them. These people do not have trapped priors in the sense that a belief is somehow innately more inert - as if it had a higher specific epistemological gravity - but their priors are pinned down by a dense web of beliefs whose strands are numerous and strongly weighted, because they date to early life experience.

To the second question - whether it's hopeless to get people with bad authorities out of their delusion box - as often happens with the rationalist community, our public discussions are playing catch-up centuries after salesmen, politicians, and religious leaders figured these things out (though in fairness, they had greater incentives and more immediate feedback loops). The trick is to find someone they recognize as an authority. Usually this is as simple as finding people providing better information who share superficial markers of cultural affinity with their audience - affiliations with religions or regions, class, dress, language. Yes, ideally these people would understand how to select an authority, but the inferential distance is too great; put them in touch with someone with the same accent instead. Concentrate your forces by looking for people who do not demonstrate "authority monoculture." You can also decrease inferential distance by engaging with people who are already having doubts. (Again, not a news flash, but it may help rationalists to understand if I put it in these terms. See what I did there?)


How to Differentiate Dogmatists from Good Authorities


Let's define dogmatists (or charlatans, or whatever other term you like) as people who want the benefits of being an authority but are not interested - judging by their actions and the consequences - in the truth of the beliefs they promulgate. Whether or not they genuinely believe they are interested in the truth is irrelevant; either way, they will certainly claim to be committed to it. There's an analogy here to the relationship between tribal team cheers (shibboleths), which appear on the surface to be truth claims, and proper truth claims - superficially alike but different kinds of thing, like dolphins and fish. Similarly, dogmatists do their best to masquerade[3] as good authorities, but they're really something else entirely - dogmatists might be considered the viceroys to the good authorities' monarchs.

But there are characteristics of good authorities which are "expensive" for dogmatists to maintain (a toy scorecard follows the list). These are:
  1. They have some feedback loop between their claims and outcomes (and are interested in it). Example: physicians and patient outcomes; politicians and legislative impact.
  2. They do not avoid being tracked by others in their outcomes or predictions.
  3. They minimally appeal to or rely on those early-life, emotional, identity-overweighted beliefs in their audience's web.
  4. Their feedback loop is not distorted by perverse interests. Example: TV news pundits trying to get ratings by avoiding predictions that conflict with their audience's values. Paraphrasing Charlie Munger, when you're dealing with a business, you have to understand their business well enough to know their incentives. (More here.)
  5. They have a way of seeking out "legitimate surprise" (not mere confusion, which can also be accomplished with dopamine hacking, i.e. meth). Example: hypothesis testing in science. (This may be the hardest of these to do consistently, due to inherent contradictions within any information-seeking entity.)
  6. They communicate clearly, and they do not hold up incomprehensibility as a virtue. (See: descriptions of John von Neumann, and the first of Asher's Seven Sins of Medicine.) Insistence on formal, or especially foreign or arcane, language is one pernicious form. (See: modern legal language, the use of Latin in the Church, legal French in late medieval England.)
  7. Their claims to authority are limited in scope. Henry Ford had some great industrial ideas but also thought people should listen to his hang-ups about cows and horses; Einstein was offered the presidency of Israel but, recognizing that his domain of expertise did not extend to politics, declined.
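For fun, the list above compresses into a toy scorecard. The field names and the scoring are mine, not any standard instrument:

    from dataclasses import dataclass

    @dataclass
    class AuthorityCheck:
        """Toy checklist of the seven traits above (my labels, not canonical)."""
        feedback_loop: bool          # 1. claims are tested against outcomes
        trackable: bool              # 2. doesn't hide from scorekeeping
        low_identity_appeal: bool    # 3. doesn't lean on identity-weighted beliefs
        clean_incentives: bool       # 4. feedback not distorted by perverse interests
        seeks_surprise: bool         # 5. hunts for legitimate surprise
        communicates_clearly: bool   # 6. no cult of incomprehensibility
        limited_scope: bool          # 7. claims authority only within expertise

        def score(self) -> int:
            return sum(vars(self).values())

    # A hypothetical ratings-driven TV pundit:
    pundit = AuthorityCheck(feedback_loop=False, trackable=False,
                            low_identity_appeal=False, clean_incentives=False,
                            seeks_surprise=False, communicates_clearly=True,
                            limited_scope=False)
    print(f"Pundit: {pundit.score()}/7")  # 1/7 - viceroy territory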

Seeking out good authorities - and being prepared to reject one you may have liked once you realize they are just a viceroy - is uncomfortable: it's "software" imposed on the factory settings of humans who operated in small groups for millennia. People who are constitutionally high in the moral dimensions of loyalty and authority will always find this difficult; the idea of checking their proposed authority runs counter to their nature.


For further reading: more formal syntheses of Quinean and Bayesian models here and here. A useful discussion of a possible conflict here, with the resolution that Bayesian reasoning can be a satisfactory way to construct a Quinean web without arguing that Bayes is necessarily optimal.


FOOTNOTES

[1] Retrodiction sometimes upsets Popperians, but hypothesis-testing is about making a prediction, based on your hypothesis, about something the predictor does not yet know - even if it has already happened. What matters is the state of knowledge of the claimant when the hypothesis is formulated; it's irrelevant whether the events have already occurred, only that they are not yet known.

[2] The Elo rating used in sports is not formally an example of a belief web, but it is analogous to one, and in this case behaves like one. Just as a single peer-reviewed homeopathy study should not make us throw away the rest of science, a great team somehow losing to a bad one should not make us think the great team is now worse than the bad one.

[3] It is a testament to the success of science that viceroys, especially religious ones in the U.S., increasingly co-opt its language. It is very rare to see this happening in the opposite direction.
