Here are some of the concepts I've discussed on this blog.
Dolphin belief - an utterance where the apparent propositional, truth-claim part is less important than the emotional or group-signaling content. The person uttering a dolphin belief is not aware that this is what they're doing. Much of what humans say is like this, with the content of the statement just window-dressing for its true purpose. An actual meaningful truth claim is like a fish, while one of these utterances with decorative verbal content looks superficially the same but is actually something completely different - like a dolphin. (More here.)
Viceroy authorities - there is a spectrum of justification of authority, running from people who genuinely try to derive their authority from making true claims to people whose principal aim is to manipulate others, regardless of the truth. Those who want to manipulate of course want to seem like (and often believe they are) justified authorities, so they imitate justified authority. Justified, correct authorities are like monarch butterflies, and dogmatists aiming at manipulation are like viceroys trying to mimic them. (More here.)
White beast - the opposite of a bête noire. A white beast is a sacred object or event in the history of an identity-forming community - usually a tragedy with negative consequences for the community down to the modern day. The paradox is that even though a demand for justice over this tragedy is central to the group's identity, when an outsider suggests a remedy, the remedy ironically threatens that identity and is met with outrage from the group - outrage which puzzles the outgroup. (More here.)
Inherent cyclic crisis - a process by which any dynamic organized entity (living organisms, individual humans and their beliefs, human organizations) necessarily has inherent contradictions between the drive to avoid damage and dissolution and the way it represents aversive stimuli, leading to a distorted model of the external world which results in either paroxysmal suffering or death. (More here.)
The ISE theory of dealing with outsiders - humans have only devised three ways of dealing with other humans who do not follow the same moral authority: remaining Ignorant of them, treating them as Subhumans outside of moral consideration, or Evangelizing (assimilating) them.
The tyranny of territory stops us from having true choice and therefore brings competition between governments for human capital close to zero. Since social organization rests ultimately on the threat of violence, no one has thought of an effective way for individuals to choose which agency they would like involved in which aspect of their life (i.e., I like the Nebraska DMV better than Idaho's, so I'll get my license there). Rare exceptions exist, such as international tax law, but these apply mostly to legal entities. The closest solution so far has been charter cities. A similar argument applies to living in a simulation - the hardware where your experience originates has to reside in physical space somewhere, even if distributed.
A special case of the inherent cyclic crisis above: states demonstrate a 200-to-250-year cycle. This is most obvious in the case of China, but only because China has a large fertile plain which lends itself to political unification and few neighboring states that can threaten it (with obvious exceptions). The same cycle can be seen in other states when it has the opportunity to run without interruption.
Population drops off west of the 100th meridian in the U.S. because around the time people reached that point, trains became a practical means of getting all the way to the coast.
When discussing the simulation argument, "simulation" is almost always poorly defined; in any meaningful sense, if you feel pain, pleasure, or emotion, you are in a simulation. Simulation argument proponents also smuggle in characteristics and motivations of the simulators (including the assumption that they exist and have intentions), much as theology does.
We should stop METI, or any attempt to actively alert aliens to our presence.
CLAHSF (pronounced "CLASH EFF") - the Coordinated Labor and Agriculture Hypothesis of State Formation. The nuclei of early civilizations were generally in agriculturally marginal environments (deserts with a river, arid cold plateaus). In these environments, centrally coordinated agriculture (e.g., planting or harvesting in large numbers based on river flooding) could actually result in population growth, and there was no ability to leave the group and survive outside the system. Centralized states with more coordination in warfare developed and dominated their neighbors. This explains why places like Egypt, the Fertile Crescent, central Mexico, or the Andes produced growing civilizations when more productive areas did not. The exception is China, but there the chosen crop, rice, demands similar central coordination. Could be thought of as the counterpart to Scott's Seeing Like a State. (More here.)
The Bad Stripe - in the U.S. there is a zone of low human development stretching from the southeast corner of Pennsylvania, down the Appalachians through West Virginia into Kentucky and Tennessee, where it turns west through Arkansas and into Oklahoma. This corresponds roughly to the areas settled by the Border Reivers in Fischer's Albion's Seed, and could be called Greater Appalachia. (More here.)
Sunday, December 26, 2021
Most "Emergent Properties" Are Either Ignorance (or, Once We Understand Them, Merely Shorthand)
Searle's famous example of an emergent property is that a single water molecule is not wet. It's only when there are a lot of them together, under certain conditions, that they form something meaningfully called a liquid.
More generally, in this view emergence occurs when the interaction of multiple entities (often ones outside the direct perceptual abilities of humans, like water molecules) produces behavior qualitatively different from that of the single entities, behavior which can be perceived directly (like liquid water).
This concept has been rightly rejected but it's worth being clear about exactly why it should be rejected, in order to make a general argument against the idea.
Viewed in terms of predicting the behavior at the more complex (usually directly perceived, macro) level from the simpler entities, emergence is only, and always, ignorance. By "ignorance" I mean "an inability to predict that is based on the limited knowledge of the observer, rather than a property of the entity that is lacking when the entity exists in isolation but apparent when it interacts with other entities." Water's behavior in aggregate as a liquid is determined by the masses and electrochemical properties of its atoms; it is determined by the water molecules, inherent in their properties. If the water molecules are replaced with ammonia or methane, the properties of those atoms and their relationships in ammonia or methane molecules create a different sort of liquid, with different properties.
Of course, we CAN now predict fairly well, from the intermolecular forces of water molecules (or of other molecules) at various temperatures and pressures, where they will be liquids and how those liquids will behave. Now that we can predict it, does this mean it's no longer emergent? And for those places (still the large majority) in chemistry where we cannot yet make the prediction, does that mean reality is fundamentally irrational, with no causal association between the simple entity and the aggregate, or that we just aren't smart enough to see it yet? Therefore, the superficial appearance of an "emergent" property has nothing to do with the entities themselves; it's just the result of our own provincial limits on cognition that keep us from predicting how they will behave together.
Viewed in reverse (trying to apply the property we think has emerged at the higher, complex level to the simple entities), it's clear that "wet" is shorthand for the aggregate interactions of myriad small entities. In principle you could understand (read: predict) the bucket of water at the level of individual molecules, but instead we use a (in this case, very good) approximation for their behavior which we describe as "wet". This is based entirely on the provincial bounds of human perception and cognition, and on the tools we have to observe water molecules. In fact, it is in the cases where evolution has given us very good cognitive and perceptual approximations that the appearance of emergence is most compelling. But leave the realm of entities and events that our ancestors needed good cognitive/perceptual shorthand for, and the idea starts getting less interesting. I haven't heard anyone saying that quantum entanglement, or Bose-Einstein condensates at near-absolute-zero temperatures, are examples of emergent properties, even though they weren't immediately predictable and they fulfill the conditions for "emergence" seen above - both the simple and complex entities are outside our meter-second-kilogram realm of experience, and there's no cognitive/perceptual shorthand for them. In the same way, we could call a building Gothic or art deco instead of describing the spatial relationships of every brick (that is to say, a brick is not Gothic or art deco, but this is just as "emergent" as wetness from water).
A charitable interpretation of the traditional idea of emergence then does exist: we directly perceive certain properties like wetness at the macro level in the manifest world. We can actually predict the occurrence of this directly-perceived property based on what we observe at a simpler, smaller level. But the provincial limitations of our eyes and brains do not constitute a dividing line between properties where the universe carves itself at its joints.
This paper by Darley refers to work on cellular automata (of course) and argues that it is not Turing-decidable whether an infinite system will demonstrate emergence, and "only" NP-hard to decide whether a finite system will. I will go out on a limb and say this is, at least in practice, a reductio ad absurdum for such a property, and shows at minimum that ignorance can (formally!) never be ruled out in cases of apparent emergence of this type. What's more, the "emergent" phenomenon (emergent by this definition) is often not really that interesting - for example, the behavior of a three-body system. Does unpredictability really put it on another "level" of understanding? Actually, the opposite - in such a case it is only the individual elements that seem to form a unified entity. If each body is a water molecule, then there is no analogous "wetness" in the three bodies as a whole.
Darley, V. Emergent Phenomena and Complexity.
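To make the cellular-automaton point concrete, here is a minimal sketch - my own illustration in Python, not anything from Darley's paper - of an elementary cellular automaton (Wolfram's Rule 110). Every macro-scale pattern it prints is fully determined by the three-cell local rule, so whatever "emergence" an observer sees in the output is a fact about the observer's predictive limits, not about the cells.

```python
# Minimal sketch: an elementary cellular automaton (Wolfram's Rule 110).
# The macro-scale patterns are fully determined by the local rule below;
# nothing is added "on top" of the individual cells.

RULE = 110  # any rule number from 0 to 255 works


def step(cells, rule=RULE):
    """Apply the local rule to every cell (wrapping at the edges)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right  # value 0..7
        out.append((rule >> neighborhood) & 1)
    return out


def run(width=64, steps=32):
    cells = [0] * width
    cells[width // 2] = 1  # start with a single "on" cell in the middle
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)


if __name__ == "__main__":
    run()
```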
Labels: automata, emergence, logic, mathematics, philosophy, turing
Saturday, December 25, 2021
Toward a Unified Account of Finding the Truth: Quine and Bayes on Dogma Versus Good Authority
Scott Alexander recently posted a great account (and critique) of reasoning and communication, beginning with criticism of science communication. He then expands this into an argument about how a real Bayesian evaluation of truth claims requires us to sometimes reject "higher standards" of evidence when they produce results in conflict with what we think we already know. In so doing he begins to unify everyday reasoning within a Bayesian framework. This is valuable, because honest rationalists have noticed gaps between what we consider high-quality evidence and what we actually use to update - not because we are hypocrites, but because there are other considerations when we want to really get the right answer.
To begin with a critique of simplistic "levels of evidence": even those of us most enthusiastic about peer review make almost every decision in life without it. You don't insist on a peer-reviewed journal when figuring out how to give yourself administrator rights to a PC you just got for free (a situation I just found myself in). Your decision process is a combination of evaluating the cost, time, and likely consequences of whatever you do, and deciding which sources to trust based on how far the subject matter is specialized beyond daily experience, the speed of feedback, and possible perverse incentives. In this case, I just watched a YouTube video and it worked like a charm. It's a free computer I'm playing with, so I didn't care much if it was ruined by the attempt.
Taking all these things into account is actually Bayesian even if we aren't thinking explicitly about the Bayes equation. But it turns out the model we use in science is actually a special case of Bayesian reasoning - and even victims of dogmatism are using Bayesian reasoning. The second statement is much more controversial than the first; scroll down to that section if you like.
Recasting Popperian Falsification in Bayesian Terms
Karl Popper's model of hypothesis testing is that we can only falsify a hypothesis; we can never be sure it's true. Looking at Anscombe's quartet, you can see that multiple data sets can produce the same summary statistics. Stopping after inadequate positive predictions may lead you to the wrong equation. Therefore, the only answer that provides certainty is to falsify a hypothesis.
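If you want to see the quartet's statistics for yourself, here is a small sketch using seaborn's packaged copy of Anscombe's data - my choice of tooling, not something from Scott's post - showing that all four data sets share nearly identical means, variances, and correlations despite looking completely different when plotted.

```python
# Sketch: Anscombe's quartet - four data sets with (nearly) identical summary
# statistics but very different shapes. Requires pandas and seaborn
# (seaborn fetches the classic dataset over the network on first use).
import seaborn as sns

df = sns.load_dataset("anscombe")  # columns: dataset, x, y

for name, group in df.groupby("dataset"):
    print(
        f"set {name}: "
        f"mean(x)={group.x.mean():.2f}, mean(y)={group.y.mean():.2f}, "
        f"var(x)={group.x.var():.2f}, var(y)={group.y.var():.2f}, "
        f"corr(x,y)={group.x.corr(group.y):.3f}"
    )
# All four sets print essentially the same numbers, yet scatter plots of each
# (try sns.lmplot(data=df, x="x", y="y", col="dataset")) look wildly different -
# which is why matching statistics alone can mislead.
```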
There are two important aspects to this approach to finding out the truth. The first, and the more underemphasized, is its appreciation of human psychology. Stephen Toulmin noted that the way humans actually reason is to start with premises and conclusion first, and then build a rhetorical bridge between them. If you're a rationalist, you make sure that your rhetorical bridge is not just verbal/emotional sleight of hand but rather an actual argument. The point is that even in Popperian science, we start with a conclusion. The difference is that we explicitly declare, in public, ahead of time, that we're not sure if the conclusion is right (it's less than a thesis - a "hypo" thesis, if you will.) Then you test it.
And, the way you test it is by creating an experience that will give you an unexpected result if it's wrong - to falsify it. The question is, does a hypothesis-supporting experiment (that produces the results we expect) really give us zero information? No, but it usually doesn't give us as much information as a falsification.
A falsification is usually more strongly weighted information - it "moves the needle" much more, because it's more likely to be surprising. Indeed, to equate heavier weighting with surprise is almost a tautology. So we can place Popperian hypothesis-testing in a Bayesian framework: instead of a false binary of "support equals zero information equals bad" and "falsification equals good", note that ideally an experiment tries to create the experience most likely to produce a surprise, and so most efficiently provide information.
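As a toy numerical illustration of "surprise moves the needle more" - a sketch of my own, with made-up likelihoods, not anything from Scott's post - compare how much a well-supported hypothesis' probability changes after an expected result versus an unexpected one.

```python
# Toy Bayesian update: evidence that is surprising under the hypothesis moves
# the posterior much further than expected, confirming evidence does.

def update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) from P(H), P(E | H), and P(E | not-H)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

prior = 0.9  # we already think the hypothesis is probably true

# Confirming result: the experiment comes out the way H predicts.
# It was fairly likely either way, so it teaches us little.
confirmed = update(prior, p_e_given_h=0.95, p_e_given_not_h=0.60)

# Falsifying result: the experiment comes out the way H says it shouldn't.
# It was surprising under H, so it moves the needle a lot.
falsified = update(prior, p_e_given_h=0.05, p_e_given_not_h=0.40)

print(f"prior:            {prior:.2f}")
print(f"after confirming: {confirmed:.2f}")  # ~0.93, a small nudge up
print(f"after falsifying: {falsified:.2f}")  # ~0.53, a large drop
```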
Finding Truth in the Real World - Where Other Humans Also Live
But people don't always do this. We live in a complicated world and have had to learn to weight truth claims based on something besides immediate experience. You almost certainly have not done an experiment on biological evolution, and have done few if any true retrodictive studies.[1] You can't function any other way, unless you live in a cabin in the wilderness by yourself.
One thing we use in the real world is a web of beliefs, in the Quinean sense. Recasting this in Bayesian terms, no belief is an island; each belief in the web serves as part of the weighting for your prior. Most beliefs have many other beliefs bearing on them, updating the belief in question and holding it in place. If you have reasonable confidence in those many other beliefs, then a counterclaim or a weird observation just doesn't outweigh them. (I am not a statistician, but Elo ratings seem analogous to this. Just because the Packers had an off night and lost to the Detroit Lions doesn't mean you should bet against the Packers going forward - move your needle? Yes, but this is what it means for one loss to be a fluke.)[2] This is why, in the linked article, Scott describes how it is right to reject peer-reviewed homeopathy studies - because they are on their face improbable, in large part because of our rich web of other beliefs about how the world works.
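Since I brought up Elo: here is the standard Elo update in a few lines of Python, with hypothetical ratings I made up for the example. A single upset loss only nudges the favorite's rating - the numerical version of "one loss is a fluke, not a reason to tear up the web."

```python
# Standard Elo update: a single upset loss barely dents a strong team's rating,
# just as one weird observation shouldn't overturn a dense web of beliefs.

def expected_score(rating_a, rating_b):
    """Expected score of A against B under the Elo model (between 0 and 1)."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def elo_update(rating_a, rating_b, score_a, k=20):
    """Return A's new rating after one game (score_a: 1 win, 0 loss, 0.5 draw)."""
    return rating_a + k * (score_a - expected_score(rating_a, rating_b))

packers, lions = 1650.0, 1450.0  # hypothetical ratings; Packers heavily favored

print(f"Packers expected to win: {expected_score(packers, lions):.0%}")  # ~76%
after_upset = elo_update(packers, lions, score_a=0)  # the off-night loss
print(f"Packers after the upset loss: {after_upset:.0f}")  # ~1635, a small dip
```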
Another way we weight beliefs is by giving credence to authority. I think differing confidence in authorities (based partly on our innate cognitive/emotional makeup and partly on life experience) explains the majority of major differences people have in beliefs about the world - not their direct experience of things or reasoning ability. The truth is that people often say "I believe X because an authority I trust said so." In fact, fellow rationalist, you often say that. And again, to function, you must. (Education is a formal example of this. You don't need to do every experiment back to Newton to be a physicist.) Yet during the pandemic, it has become painfully clear that people differ in what authorities they listen to.
Why Dogmatists Are Actually Still Bayesians: Good Authorities and Bad Authorities
I once had the privilege to give a talk to the Sacramento Freethinkers, Atheists and Nonbelievers (SacFAN) about the function of beliefs. At one point I stated that to have good beliefs, you had to pick good authorities. An audience member asked me "How do you define a good authority?" At the time I answered humorously, deflecting the question only because of my discomfort at realizing I didn't have a ready answer.
First let's define authority: a personal source of data (a person or institution) whose truth claims you weight more heavily than others' without first requiring evidence.
We all have our authorities. To function, we must. Sometimes they are formal (academic scientists); more often they are informal (someone you hike with who seems to know the trails and conditions in your part of the world.)
But there are many claimed authorities with poor justification for their beliefs, who promulgate false beliefs; if we update based on what they tell us, we will be wrong too. A current near-canonical example for my likely readers would be a religious leader claiming that mRNA vaccines will kill you. But scientists can be wrong too, and not just because they haven't generated enough data to update their beliefs. Nobel Prize winners going off the rails are something of an institution now, so how do we know to ignore Kary Mullis or Linus Pauling's weirder moments? Not so scary, you say: partly because of the conflict with the web of beliefs as noted before, and partly because they were speculating outside of their expertise or of reproducible experiment. Fair enough.
Returning to the pandemic, the CDC said for at least the first month that masks didn't help. (Was this deliberate obfuscation to keep non-medical people from hoarding masks, or an error? Either way, that's what they communicated, and I'm not sure which is worse.) If you still think, despite this, that the CDC is a better authority than a preacher, why?
Is Someone Who Calls Himself a Rationalist Claiming Dogmatists are Rational?
In an article about people in middle America who refused the vaccine and got severe COVID, I was struck by the following statement (paraphrasing since I can't find the original): "We didn't think it was real, just like everybody else."
When you are surrounded by people who believe X, all of whom (along with you) admire a leader who believes X, and only people who everyone around you has told you are liars are telling you not-X - then, in the absence of (thus far) immediately contradictory experience, you will continue to believe X. This is the case for COVID victims like those I paraphrased. Given the information they had, they were being good Quineans and good Bayesians.
Early Life Experience and Emotional Makeup Clearly Influence Our Weightings
You might be asking incredulously: is this guy seriously arguing that people following bad authorities are good Bayesians? And (assuming that's true), does that mean people are really in an inescapable hole if they follow the wrong authority, absent any profound contradictory evidence? The respective answers are technically yes, and sometimes no.
To the first point, you might object that many of these people certainly did have experts providing them better information, and they incorrectly underweighted this information, so they were NOT good Bayesians. But there's a problem: large inferential distance. Everyone around them has told them that Fauci, academics, etc. are bad, lying, immoral people who are trying to harm you and should be ignored. With this information, and very little information about how to identify good authorities, from their perspective, if they give credence to Fauci they have no justification for not also listening to every crank who comes along. Similarly, it's hard to see how someone in North Korea should somehow just know that people in the West really don't have it out for them. These people do not have trapped priors in the sense that a belief is somehow innately more inert - as if it has a higher specific epistemological gravity - but their priors are pinned down by a dense web of beliefs whose strands are numerous and strongly weighted because they date to early life experience.
To the second question, about whether it's hopeless to get people with bad authorities out of their delusion box: as often happens with the rationalist community, in our public discussions we're playing catch-up centuries after salesmen, politicians, and religious leaders figured these things out (though in fairness, they had a greater incentive and more immediate feedback loops). The trick is to find someone whom they recognize as an authority. Usually this is as simple as finding people providing better information who share with their audience superficial markers of cultural affinity - affiliations with religions or regions, class, dress, language. Yes, these people should understand how to select an authority, but the inferential distance is too great. Put them in touch with someone with the same accent instead. Concentrate forces by looking for people who do not demonstrate "authority monoculture." You can also decrease inferential distance by engaging with people who are already having doubts. (Again, not a news flash, but it may help rationalists to understand if I put it in these terms. See what I did there?)
How to Differentiate Dogmatists from Good Authorities
Let's define dogmatists (or charlatans, or whatever other term you like) as people who want the benefits of being an authority, but are not interested in the truth of the beliefs they promulgate, in terms of actions and consequences. Whether or not they genuinely believe they are interested in the truth is irrelevant; either way, they will certainly claim they are committed to it. There's an analogy here to the relationship between tribal team cheers (shibboleths) that appear on the surface to be truth claims, and proper truth claims - like dolphins and fish. Similarly, dogmatists do their best to masquerade[3] as good authorities, but they're really something else entirely - dogmatists might be considered the viceroys to the good authorities' monarchs.
But there are characteristics of good authorities which are "expensive" for dogmatists to maintain. These are:
- They have some feedback loop between their claims and outcomes (and are interested in it). Example: physicians and patient outcomes; politicians and legislative impact.
- They do not avoid being tracked by others on their outcomes or predictions.
- They minimally appeal to or rely on the early-life, emotional, identity-overweighted beliefs in their audience's web.
- Their feedback loop is not distorted by perverse incentives. Example: TV news pundits trying to get ratings by avoiding predictions that conflict with their audience's values. Paraphrasing Charlie Munger: when you're dealing with a business, you have to understand their business well enough to know their incentives. (More here.)
- They have a way of seeking out "legitimate surprise" (not mere confusion, which can also be accomplished with dopamine hacking, i.e. meth). Example: hypothesis testing in science. (This may be the hardest of these to do consistently, due to inherent contradictions within any information-seeking entity.)
- They communicate clearly, and they do not hold up incomprehensibility as a positive. (See: descriptions of John von Neumann, and the first of Asher's Seven Sins of Medicine.) Insistence on formal, or especially foreign or arcane, language is one pernicious form. (See: modern legal language, the use of Latin in the Church, legal French in late medieval England.)
- Their claims to authority are limited in scope. Henry Ford had some great industrial ideas but also thought people should listen to his hang-ups about cows and horses. Einstein was offered the presidency of Israel but, recognizing that his domain of expertise did not extend to politics, said no.
Seeking good authorities, and being prepared to reject one you may have liked when you realize they are just a viceroy, is uncomfortable - it's "software" imposed on the factory settings of humans, who operated in small groups for millennia. People who are constitutionally high in the moral dimensions of loyalty and authority will always find this difficult - the idea of checking their proposed authority runs counter to their nature.
For further reading: more formal syntheses of Quinean and Bayesian models here and here. A useful discussion of a possible conflict here, with the resolution that Bayesian reasoning can be a satisfactory way to construct a Quinean web without arguing that Bayes is necessarily optimal.
FOOTNOTES
[1] Retrodiction sometimes upsets Popperians, but hypothesis-testing is about making a prediction, based on your hypothesis, about something that the predictor does not yet know, even if it already happened. It is always about the state of knowledge of the claimant when the hypothesis is formulated; it's irrelevant if the events already happened, just that they are not yet known.
[2] The Elo rating used in sports is not formally an example of a belief web, but it is analogous to one and in this case behaves similarly to one. Just as a single peer-reviewed homeopathy study should not make us throw away the rest of science, a great team somehow losing to a bad one should not make us think the great team is now worse than the bad one.
[3] It is a testament to the success of science that viceroys, especially religious ones in the U.S., increasingly co-opt its language. It is very rare to see this happening in the opposite direction.
Labels: bayes, bayesian, epistemology, logic, philosophy, politics, reason, rhetoric