Sunday, December 26, 2021

List of New Concepts

Here are some of the concepts I've discussed on this blog.

Dolphin belief - an utterance where the apparent propositional, truth-claim part is less important than the emotional or group-signaling content. The person uttering a dolphin belief is not aware that this is what they're doing. Much of what humans say is like this, with the content of the statement just window-dressing for its true purpose. An actual meaningful truth claim is like a fish, while one of these utterances with decorative verbal content looks superficially the same but is actually something completely different - like a dolphin. (More here.)

Viceroy authorities - there is a spectrum of justification of authority, from people who actually try to derive their authority from making true claims, to others whose principal aim is to manipulate, regardless of the truth. Those who want to manipulate of course want to seem like (and often believe they are) justified authorities, so they imitate justified authority. Justified, correct authorities are like monarch butterflies, and dogmatists aiming at manipulation are like viceroys mimicking them. (More here.)

White beast - the opposite of a bête noire. A white beast is a sacred object or event in the history of an identity-forming community - usually a tragedy with negative consequences for the community down to the modern day. The paradox is that a demand for justice over this tragedy is central to the group's identity, so when an outgroup suggests a remedy, the remedy ironically threatens that identity and is met with outrage from the group, which puzzles the outgroup. (More here.)

Inherent cyclic crisis - any dynamic organized entity (living things, individual humans and their beliefs, human organizations) must have inherent contradictions between the drive to avoid damage and dissolution and the way it represents aversive stimuli, leading to a distorted model of the external world that results in either paroxysmal suffering or death. (More here.)

The ISE theory of dealing with outsiders - humans have devised only three ways of dealing with other humans who do not follow the same moral authority: remaining Ignorant of them, treating them as Subhumans outside of moral consideration, or Evangelizing (assimilating) them.

The tyranny of territory stops us from having true choice and therefore brings competition between governments for human capital close to zero. Since social organization rests ultimately on the threat of violence, no one has thought of an effective way for individuals to choose which agency they would like involved in which aspect of their life (e.g., I like the Nebraska DMV better than Idaho's, so I'll get my license there). Rare exceptions exist, such as international tax law, but these apply to legal entities. The closest solution has been charter cities. A similar argument applies to living in a simulation - the hardware where your experience originates has to reside in physical space somewhere, even if distributed.

A special case of the inherent cyclic crisis above: states demonstrate a 200- to 250-year cycle. This is most obvious in the case of China, only because China has a large fertile plain which lends itself to political unification and few neighboring states that can threaten it (with obvious exceptions). But the same cycle can be seen applying to other states when it has the opportunity to run without interruption.

Population drops off west of the 100th meridian in the U.S. because around the time people reached that point, trains became a practical means of getting all the way to the coast.

When discussing the simulation argument, "simulation" is almost always poorly defined, and in any meaningful sense, if you feel pain, pleasure, or emotion, you are in a simulation. Simulation argument proponents also smuggle in characteristics and motivations of the simulators (including the assumption that they exist and have intentions), much as theology does.

We should stop METI, or any attempt to actively alert aliens to our presence.

CLAHSF (pronounced "CLASH EFF") - the Coordinated Labor and Agriculture Hypothesis of State Formation. The nuclei of early civilizations were generally in agriculturally marginal environments (deserts with a river, arid cold plateaus). In these environments, centrally coordinated agriculture (e.g. planting or harvesting in large numbers based on river flooding) could actually result in population growth, and there was no ability to leave the group and survive outside of the system. Centralized states with more coordination in warfare developed and dominated neighbors. This explains why places like Egypt, the Fertile Crescent, central Mexico, or the Andes produced growing civilizations when more productive areas did not. The exception is China, but there, the chosen crop of rice demands similar central coordination. Could be thought of as the counterpart to Scott's Seeing Like a State. (More here.)

The Bad Stripe - in the U.S. there is a zone of low human development stretching from the southeast corner of Pennsylvania, down the Appalachians through West Virginia into Kentucky and Tennessee, where it turns west through Arkansas and into Oklahoma. This corresponds roughly to the areas settled by the Border Reivers in Fischer's Albion's Seed, and could be called Greater Appalachia. (More here.)

Most "Emergent Properties" Are Either Ignorance (or, Once We Understand Them, Merely Shorthand)

Searle's famous example of an emergent property is that a single water molecule is not wet. It's only when there are a lot of them together under certain conditions that they form something meaningfully called a liquid.

More generally, in this view emergence occurs when the interaction of multiple entities (often ones outside the direct perceptual abilities of humans, like water molecules) produces behavior qualitatively different from that of the single entities, behavior which can be perceived directly (like liquid water).

This concept has been rightly rejected but it's worth being clear about exactly why it should be rejected, in order to make a general argument against the idea.

Viewed in terms of predicting the behavior at the more complex (usually directly perceived, macro) level from the simpler entities, emergence is only, and always, ignorance. By "ignorance" I mean "an inability to predict that is based on the limited knowledge of the observer, rather than a property of the entity that is lacking when the entity exists in isolation but apparent when it interacts with other entities." Water's behavior in aggregate as a liquid is determined by the masses and electrochemical properties of its atoms; it is determined by the water molecules, inherent in their properties. If the water molecules are replaced with ammonia or methane, the properties of those atoms and their relationships in ammonia or methane molecules create a different sort of liquid, with different properties.

Of course, we CAN now predict fairly well from the intermolecular forces of water molecules at various temperatures and pressures (or of multiple molecules) where they will be liquids, and how those liquids will behave. Now that we can predict it, does this mean it's no longer emergent? And for those places (still in the large majority) in chemistry where we cannot yet make the prediction, does that mean reality is fundamentally irrational, with no causal association between the simple entity and the aggregate, or that we just aren't smart enough to see it yet? The superficial appearance of an "emergent" property has nothing to do with the entities themselves; it's just the result of our own provincial limits on cognition that keep us from predicting how they will behave together.

Viewed in reverse (trying to apply the property we think has emerged at the higher, complex level to the simple entities), it's clear that "wet" is shorthand for the aggregate interactions of myriad small entities. In principle, you could understand (read: predict) the bucket of water at the level of individual molecules, but we use a (in this case, very good) approximation for their behavior which we describe as "wet". This is based entirely on the provincial bounds of human perception, cognition, and the tools we have to observe water molecules. In fact it is in the cases where evolution has given us very good cognitive and perceptual approximations that the appearance of emergence is most compelling. But leave the realm of entities and events that our ancestors needed good cognitive/perceptual shorthand for, and the idea starts getting less interesting. I haven't heard anyone saying that quantum entanglement, or Bose-Einstein condensates at near-absolute-zero temperatures, are examples of emergent properties, even though they weren't immediately predictable and thus fulfill the conditions for "emergence" seen above - both the simple and complex entities are outside of our meter-second-kilogram realm of experience, and there's no cognitive/perceptual shorthand for them. In the same way, we could talk about a building being Gothic or art deco instead of describing the spatial relationships of every brick (that is to say, a brick is not Gothic or art deco, but this is equally "emergent" as wetness from water).

A charitable interpretation of the traditional idea of emergence does exist, then: we directly perceive certain properties, like wetness, at the macro level of the manifest world, and we can actually predict the occurrence of these directly perceived properties based on what we observe at a simpler, smaller level. But the provincial limitations of our eyes and brains do not constitute a dividing line where the universe carves itself at its joints.

This paper by Darley refers to work on cellular automata (of course) and argues that it is not Turing-decidable whether an infinite system would demonstrate emergence, and "only" NP-hard to decide whether a finite system does. I will go out on a limb and say this is, at least in practice, a reductio ad absurdum for such a property, and shows at least that ignorance can (formally!) never be ruled out in cases of apparent emergence of this type. What's more, the "emergent" phenomenon (emergent by this definition) is often not really that interesting - for example, the behavior of a three-body system. Does unpredictability really put it on another "level" of understanding? Actually, the opposite - in such a case it is only the individual elements that seem to form a unified entity. If each body is a water molecule, then there is no analogous "wetness" in the three bodies taken as a whole.

Darley, V. Emergent Phenomena and Complexity.
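
To make the "it's fully determined by the micro level" point concrete, here is a minimal sketch (mine, not from Darley's paper) of a one-dimensional cellular automaton. The global pattern it prints is computed entirely from a three-cell local rule; whatever "emergent" structure you see in the output is a fact about your pattern recognition, not an extra property injected at the aggregate level.

```python
# Minimal 1D elementary cellular automaton (Rule 110) - a toy illustration,
# not code from Darley's paper. The "emergent" global pattern below is
# computed entirely from the local three-cell update rule.

RULE = 110   # Wolfram rule number
WIDTH = 64
STEPS = 32

def step(cells, rule=RULE):
    """Apply the elementary CA rule to one row of cells (periodic boundary)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right   # value 0..7
        out.append((rule >> neighborhood) & 1)               # look up the rule bit
    return out

row = [0] * WIDTH
row[WIDTH // 2] = 1   # start with a single "on" cell

for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```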

Saturday, December 25, 2021

Toward a Unified Account of Finding the Truth: Quine and Bayes on Dogma Versus Good Authority

Scott Alexander recently posted a great account (and critique) of reasoning and communication, beginning with criticism of science communication. He then expands this into an argument about how a real Bayesian evaluation of truth claims requires us to sometimes reject "higher standards" of evidence that produce results in conflict with what we think we already know. In so doing he begins to unify everyday reasoning within a Bayesian framework. This is valuable, because honest rationalists have noticed gaps between what we consider high-quality evidence and what we actually use to update - not because we are hypocrites, but because there are other considerations when we want to really get the right answer.

To begin with a critique of simplistic "levels of evidence": even those of us most enthusiastic about peer review make almost every decision in life without it. You don't insist on a peer-reviewed journal when figuring out how to give yourself administrator rights to a PC you just got for free (a situation I just found myself in). Your decision process is a combination of evaluations of the cost, time, and likely consequences of whatever you do, along with deciding which sources to trust based on how specialized the subject matter is beyond daily experience, the speed of feedback, and possible perverse incentives. In this case, I just watched a YouTube video and it worked like a charm. It's a free computer I'm playing with, so I didn't care much if it was ruined by the attempt.

Taking all these things into account is actually Bayesian even if we aren't thinking explicitly about the Bayes equation. But it turns out the model we use in science is actually a special case of Bayesian reasoning - and even victims of dogmatism are using Bayesian reasoning. The second statement is much more controversial than the first; scroll down to that section if you like.


Recasting Popperian Falsification in Bayesian Terms


Karl Popper's model of hypothesis testing is that we can only falsify a hypothesis; we can never be sure it's true. Looking at Anscombe's quartet above, you can see that multiple data sets can produce the same statistics. Stopping after inadequate positive predictions may lead you to the wrong equation. Therefore, the only answer that provides certainty is to falsify a hypothesis.

There are two important aspects to this approach to finding out the truth. The first, and the more underemphasized, is its appreciation of human psychology. Stephen Toulmin noted that the way humans actually reason is to start with premises and conclusion first, and then build a rhetorical bridge between them. If you're a rationalist, you make sure that your rhetorical bridge is not just verbal/emotional sleight of hand but rather an actual argument. The point is that even in Popperian science, we start with a conclusion. The difference is that we explicitly declare, in public, ahead of time, that we're not sure if the conclusion is right (it's less than a thesis - a "hypo" thesis, if you will.) Then you test it.

And, the way you test it is by creating an experience that will give you an unexpected result if it's wrong - to falsify it. The question is, does a hypothesis-supporting experiment (that produces the results we expect) really give us zero information? No, but it usually doesn't give us as much information as a falsification.

A falsification is usually more strongly weighted information - it "moves the needle" much more, because it's more likely to be surprising. Indeed, to equate heavier weighting with surprise is almost a tautology. So we can place Popperian hypothesis-testing in a Bayesian framework: instead of a false binary of "support equals zero information equals bad" and "falsification equals good," note that ideally an experiment tries to create the experience most likely to produce a surprise, and thereby most efficiently provide information.
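
To make the weighting concrete, here is a toy calculation (my illustrative numbers, not Scott's or Popper's): start with a prior of 0.7 in a hypothesis, and an experiment the hypothesis says should "pass" 90% of the time versus 50% by chance. An expected pass nudges you up a little; a surprising failure drags you down a lot.

```python
# Toy Bayesian update - illustrative numbers only, chosen to show that a
# surprising (falsifying) result moves the posterior more than an expected one.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H | E) by Bayes' rule."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

prior = 0.70                            # we already lean toward the hypothesis
p_pass_h, p_pass_not_h = 0.90, 0.50     # hypothetical likelihoods of a "passing" result

confirm = posterior(prior, p_pass_h, p_pass_not_h)              # expected result
falsify = posterior(prior, 1 - p_pass_h, 1 - p_pass_not_h)      # surprising result

print(f"prior {prior:.2f} -> after confirmation {confirm:.2f}, after falsification {falsify:.2f}")
# prior 0.70 -> after confirmation 0.81, after falsification 0.32
```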


Finding Truth in the Real World - Where Other Humans Also Live

But people don't always do this. We live in a complicated world and have had to learn to weight truth claims based on something besides immediate experience. You almost certainly have not done an experiment on biological evolution, and have done few if any true retrodictive studies.[1] You can't function otherwise, unless you live in a cabin in the wilderness by yourself.


One thing we use in the real world is a web of beliefs, in the Quinean sense. Recasting this in Bayesian terms, no belief is an island; each belief in the web serves as part of the weighting for your prior. Most beliefs have many other beliefs bearing on them, updating the belief in question and holding its prediction in place. If you have reasonable confidence in those many other beliefs, then a counterclaim or weird observation just doesn't outweigh them. (I am not a statistician, but Elo ratings seem analogous to this. Just because the Packers had an off night and lost to the Detroit Lions doesn't mean you should bet against the Packers going forward - move your needle? Yes, but this is what it means for one loss to be a fluke.)[2] This is why in the linked article, Scott describes how it is right to reject peer-reviewed homeopathy studies - because they are on their face improbable, in large part because of our rich web of other beliefs about how the world works.
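
For the curious, here is what the Packers example looks like with the standard Elo update formula (the ratings and K-factor are hypothetical, purely for illustration): one upset loss gives back a handful of points, it does not erase the prior record.

```python
# Standard Elo update with made-up ratings: a single upset loss only nudges
# a strong team's rating; the accumulated prior record dominates.

def expected_score(r_a, r_b):
    """Expected score of A against B under the Elo model."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def update(r_a, r_b, score_a, k=20):
    """Return A's new rating after one game (score_a: 1 = win, 0 = loss)."""
    return r_a + k * (score_a - expected_score(r_a, r_b))

packers, lions = 1700, 1450                  # hypothetical ratings; Packers heavily favored
new_packers = update(packers, lions, 0)      # the Packers lose the game

print(f"Packers: {packers} -> {new_packers:.0f}")   # ~1684: a 16-point dip, still far above 1450
```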

Another way we weight beliefs is by giving credence to authority. I think differing confidence in authorities (based partly on our innate cognitive/emotional makeup and partly on life experience) explains the majority of major differences people have in beliefs about the world - not their direct experience of things or reasoning ability. The truth is that people often say "I believe X because an authority I trust said so." In fact, fellow rationalist, you often say that. And again, to function, you must. (Education is a formal example of this. You don't need to do every experiment back to Newton to be a physicist.) Yet during the pandemic, it has become painfully clear that people differ in what authorities they listen to.


Why Dogmatists Are Actually Still Bayesians: Good Authorities and Bad Authorities

I once had the privilege to give a talk to the Sacramento Freethinkers, Atheists and Nonbelievers (SacFAN) about the function of beliefs. At one point I stated that to have good beliefs, you had to pick good authorities. An audience member asked me "How do you define a good authority?" At the time I answered humorously, deflecting the question only because of my discomfort at realizing I didn't have a ready answer.

First let's define authority: a personal source of data (a person or institution) whose truth claims you weight more heavily than others' without first requiring evidence.

We all have our authorities. To function, we must. Sometimes they are formal (academic scientists); more often they are informal (someone you hike with who seems to know the trails and conditions in your part of the world.)

But there are many claimed authorities with poor justification for their beliefs, who promulgate false beliefs, and if we update based on what they tell us, we will be wrong too. A current near-canonical example for my likely readers would be a religious leader claiming that mRNA vaccines will kill you. But scientists can be wrong too, and not just because they haven't generated enough data to update their beliefs. Nobel Prize winners going off the rails are something of an institution now, so how do we know to ignore Kary Mullis or Linus Pauling's weirder moments? Not so scary, you say: partly the conflict with the web of beliefs as noted before, and partly because they're speculating outside of their expertise or reproducible experiment. Fair enough.

Returning to the pandemic, the CDC said for at least the first month that masks didn't help. (Was this deliberate obfuscation to keep non-medical people from hoarding masks, or an error? Either way, that's what they communicated, and I'm not sure which is worse.) If you think, despite this, that the CDC is still a better authority than a preacher, why?


Is Someone Who Calls Himself a Rationalist Claiming Dogmatists are Rational?

In an article about people in middle America who refused the vaccine and got severe COVID, I was struck by the following statement (paraphrasing since I can't find the original): "We didn't think it was real, just like everybody else."

When you are surrounded by people who believe X, all of whom (along with you) admire a leader who believes X, and the only people telling you not-X are people everyone around you has told you are liars - then, in the absence of (thus far) immediately contradictory experience, you will continue to believe X. This is the case for COVID victims like those I paraphrased. Given the information they had, they were being good Quineans and good Bayesians.


Early Life Experience and Emotional Makeup Clearly Influence Our Weightings

You might be asking incredulously: is this guy seriously arguing that people following bad authorities are good Bayesians? And (assuming that's true), does that mean people are really in an inescapable hole if they follow the wrong authority, absent any profound contradictory evidence? The respective answers are technically yes, and sometimes no.

To the first point, you might object that many of these people certainly did have experts providing them better information, and they incorrectly underweighted this information, so they were NOT good Bayesians. But there's a problem: large inferential distance. Everyone around them has told them that the experts (Fauci, academics, etc.) are bad, lying, immoral people who are trying to harm them and should be ignored. With this information, and very little information about how to identify good authorities, from their perspective, if they give credence to Fauci, they have no justification for not also listening to every crank who comes along. Similarly, it's hard to see how someone in North Korea should somehow just know that people in the West really don't have it out for them. These people do not have trapped priors in the sense that a belief is somehow innately more inert - as if it has a higher specific epistemological gravity - but their priors are pinned down by a dense web of beliefs whose strands are numerous and strongly weighted because they date to early life experience.

To the second question, about whether it's hopeless to get people with bad authorities out of their delusion box: as often happens with the rationalist community, in our public discussions we're playing catch-up centuries after salesmen, politicians, and religious leaders figured these things out (though in fairness, they had a greater incentive and more immediate feedback loops). The trick is to find someone they recognize as an authority. Usually this is as simple as finding people providing better information who share with their audience superficial markers of cultural affinity - affiliations with religions or regions, class, dress, language. Yes, ideally these people would understand how to select an authority, but the inferential distance is too great. Put them in touch with someone with the same accent instead. Concentrate forces by looking for people who do not demonstrate "authority monoculture." You can also decrease inferential distance by engaging with people who are already having doubts. (Again, not a news flash, but it may help rationalists to understand if I put it in these terms. See what I did there?)


How to Differentiate Dogmatists from Good Authorities


Let's define dogmatists (or charlatans, or whatever other term you like) as people who want the benefits of being an authority, but are not interested in the truth of the beliefs they promulgate, in terms of actions and consequences. Whether or not they genuinely believe they are interested in the truth is irrelevant; either way, they will certainly claim they are committed to the truth. There's an analogy here to the relationship between tribal team cheers (shibboleths) that appear on the surface to be truth claims, and proper truth claims - like dolphins and fish. Similarly, dogmatists do their best to masquerade[3] as good authorities, but they're really something else entirely - dogmatists might be considered the viceroys to the good authorities' monarchs.

But there are characteristics of good authorities which are "expensive" for dogmatists to maintain (a rough scoring sketch follows the list). These are:
  1. They have some feedback loop between their claims and outcomes (and are interested in it.) Example: physicians and patient outcomes; politicians and legislative impact.
  2. They do not avoid being tracked by others in their outcomes or predictions.
  3. They minimally appeal to or rely on those early-life, emotional, identity-overweighted beliefs in their audience's web.
  4. Their feedback loop is not distorted by perverse interests. Example: TV news pundits trying to get ratings by avoiding predictions that conflict with their audience's values. Paraphrasing Charlie Munger, when you're dealing with a business, you have to understand their business well enough to know their incentives. (More here.)
  5. They have a way of seeking out "legitimate surprise" (not mere confusion - can also be accomplished with dopamine hacking, i.e. meth.) Example, hypothesis testing in science. (This may be the hardest of these to do consistently due to inherent contradictions within any information-seeking entity.)
  6. They communicate clearly, and they do not hold up incomprehensibility as a positive. (See: descriptions of John von Neumann, and the first of Asher's Seven Sins of Medicine.) Insistence on formal, or especially foreign or arcane, language is one pernicious form. (See: modern legal language, use of Latin in Church, legal French in late Medieval England.)
  7. Their claims to authority are limited in scope. Henry Ford had some great industrial ideas but also thought people should listen to his hang-ups about cows and horses. Einstein was offered the presidency of Israel but, recognizing that his domain of expertise did not extend to politics, he said no.

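Purely as an illustration of how one might use this checklist, here is a sketch that scores a claimed authority on the seven criteria above. The criterion names and the equal weighting are my own assumptions, not a validated instrument; the point is only that each item is a yes/no question you can actually ask.

```python
# Illustrative sketch only: the checklist above rendered as a crude score.
# Criterion names and equal weights are assumptions, not a validated instrument.

CRITERIA = [
    "has_outcome_feedback_loop",        # 1. claims are checked against outcomes
    "allows_tracking_of_predictions",   # 2. doesn't hide from being scored
    "avoids_identity_appeals",          # 3. doesn't lean on identity-laden beliefs
    "free_of_perverse_incentives",      # 4. the feedback loop isn't distorted
    "seeks_legitimate_surprise",        # 5. e.g. hypothesis testing
    "communicates_clearly",             # 6. no cultivated incomprehensibility
    "limits_scope_of_claims",           # 7. stays inside their expertise
]

def authority_score(answers):
    """answers: dict mapping criterion name -> bool. Returns fraction satisfied."""
    return sum(bool(answers.get(c)) for c in CRITERIA) / len(CRITERIA)

# Example: strong on everything except incentive structure.
answers = {c: True for c in CRITERIA}
answers["free_of_perverse_incentives"] = False
print(f"{authority_score(answers):.2f}")   # 0.86
```
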
Seeking good authorities, and preparing to reject one you may have liked when you realize they are just a viceroy, is uncomfortable - it's "software" imposed on the factory settings of humans as we operated in small groups for millennia. People who are constitutionally high in the moral dimensions of loyalty and authority will always find this difficult - the idea of checking their proposed authority runs counter to their nature.


For further reading: more formal syntheses of Quinean and Bayesian models here and here. A useful discussion of a possible conflict here, with the resolution that Bayesian reasoning can be a satisfactory way to construct a Quinean web without arguing that Bayes is necessarily optimal.


FOOTNOTES

[1] Retrodiction sometimes upsets Popperians, but hypothesis-testing is about making a prediction, based on your hypothesis, about something that the predictor does not yet know, even if it already happened. It is always about the state of knowledge of the claimant when the hypothesis is formulated; it's irrelevant if the events already happened, just that they are not yet known.

[2] The Elo rating used in sports is not formally an example of a belief web, but it is analogous to one, and in this case behaves similarly to one. Just as a single peer-reviewed homeopathy study should not make us throw away the rest of science, a great team somehow losing to a bad one should not make us think the great team is now worse than the bad one.

[3] It is a testament to the success of science that viceroys, especially religious ones in the U.S., increasingly co-opt its language. It is very rare to see this happening in the opposite direction.

Monday, April 5, 2021

Lists of "Bizarre Beliefs" Reveal Difference Between True Belief, and Tribal Team Cheers

tl;dr Many truth claims - beliefs - are actually just tribal team cheers, or emotional signals, with the propositional verbal content merely superficial. We can get confused and react to these as if they are truth claims, especially because the people saying these things insist that they are. We need a name to distinguish them from real propositions - let's call them dolphin beliefs, because of their superficial similarity to true, propositional "fish" beliefs.


You should read Aaron Bergman's review of Fantasyland, a book about Americans' relationship with conspiracy and magical thinking, today and over the decades. He cites surveys which show, for example, that one in nine Americans believe they have seen the devil driven out of someone. Others he cites are about Obama being born in Kenya, vaccines causing autism, and ghosts. Recognizing that no one is immune to irrational beliefs, Bergman identifies what he thinks are his most "fringe" beliefs. And here I also engage in this exercise, not because I think you're particularly concerned with my fringe beliefs, but because it's interesting to see the differences in his and my lists, vs. the kinds of things discussed in a book about American conspiracy thinking.*

A few of my own bizarre beliefs:
There are two characteristics to note about these fringe beliefs - one of which Bergman and I share with the conspiracy-believers cited in the book, and one which I think we do not.
  1. We are not good at knowing what will seem strange to others.

  2. These beliefs are not central to identity.
I think if you asked the devil-drivers to name their fringe beliefs, they would (in keeping with #1 above) not necessarily realize that devil-driving is seen by many others as a strange, fringe belief. Similarly, when voluntarily producing a list like this, I probably haven't been able to identify the beliefs I hold that would most shock most readers. This occurs because we're all embedded in communities. Devil-drivers know a lot of other devil-drivers; the belief doesn't seem strange in that context.

As for #2 - I can't speak for Bergman, but I know that if I encounter a strong argument against panpsychism, or data from a probe in the Venus clouds showing a completely mundane abiotic process that produces phosphine, not only would I probably change my mind - I would not become hostile and defensive, as if I were being personally attacked. Resistant, disappointed, a bit embarrassed to have been wrong in public - sure. But not angry. Whereas if you were to engage a devil-driver and explain why their belief may be wrong, I predict they would become hostile and defensive, as if they were being personally attacked. Same for antivaxxers and birthers.

This underlines the core difference in two types of beliefs. There are actual hypotheses - what a belief ideally always is, able to be updated by new information – and then there are the tribal team cheers of religion, politics, or conspiracy communities.

If we think of beliefs as a good materialist should, we think about what is actually going on in the nervous systems, and how the behavior of the organism differs systematically in a way that can be categorized or at least placed on a spectrum. Notice that it's not merely isolated "trapped priors" we're dealing with here - antivaxxers and devil drivers don't just calmly reject arguments and information and continue to believe what they already believed. There is community, identity, and emotion involved.

I therefore think we should consider whether the "beliefs" of devil-drivers and antivaxxers are truth claims at all, or something else.** At the very least we should consider whether their utility is more as tribal team cheers than as truth claims.

The implication here is that the superficial content of the belief is not the only determinant of whether it is a functional, updateable belief (a hypothesis) or a tribal team cheer. For example: say I learn that there is going to be a meeting of a local club to discuss the phosphine and unknown absorbers in the Venusian atmosphere. Excited to talk about it with like-minded people, I attend. At the meeting I find people talking about how they just know in their hearts there is life on Venus, that NASA is trying to hide the evidence, and that they don't care what additional evidence the probes might find. In fact, when I suggest we send more probes they are actively hostile!*** Whereas the club members and I would both say "There is life in the Venusian atmosphere," I have a hypothesis and they have a tribal team cheer, though superficially the content of the claim is the same. (The hypothesis IS just the content of the claim; the tribal team cheer is a cake of social behaviors with the words of the truth claim as icing.)

In fact, focusing on the process of belief, rather than the content of the belief itself, is what we do in psychiatry. If someone is convinced his wife is cheating on him with absolutely zero evidence, even if she confides "actually I did have a drunken one-night stand ten years ago but he doesn't know about it" - that's still a jealous delusion. He doesn't have a good reason to believe it. The Venus club's stated belief is a community and identity device, not a cognitive tool for explaining the world. Hence bizarre statements, on the rare occasion when they are discussing it with people from outside their community, like "I just feel that it's true," "this is offensive," and "this is a personal attack."

Because it's easy to be confused by tribal team cheers, which do indeed look like truth claims, especially when the tribal team cheer-ers are loath to admit that it's not really a truth claim, it's worth identifying the tribal cheers as something different from hypotheses.

You're probably familiar with the idea of a shibboleth. For me, the belief in Venusian life is a hypothesis; for the club, it's a shibboleth - or at least, much more of a shibboleth than a hypothesis. The more of these characteristics it has, the more likely a belief is to be a shibboleth rather than a hypothesis:
  • Avoidance of any testing
  • Anger at questions, as if somehow being personally attacked
  • Formation of identity around the belief
  • Reason for belief is emotional
  • Association with community around the belief

Devil-driving, birtherism, and antivaxxer-ism are shibboleths. Panpsychism is a hypothesis. In the future of epistemology, people may be amused but charitable that we did not make this distinction, just as we think of people five centuries ago who didn't understand that dolphins are not fish. For that reason, instead of calling these types of beliefs shibboleths and hypotheses, let's call them dolphins and fish respectively, to emphasize their superficial similarity, and because many dolphin beliefs are actually not in-group team cheers - they're just used by individuals to send emotional signals.


*It's worth pointing out that the types of beliefs we articulate, when asked what our most surprising beliefs are, are generally about the external world, not internal beliefs like "I'm unlovable" or "I can't accomplish important things" - even if we're frequently aware of such beliefs, we guard them closely. I think this is more likely out of fear of the impact on others' opinions of us, rather than a shrewd calculation about what people want to hear about.

**At one point there was a debate in psychiatry as to whether delusions are really beliefs. My argument is that they are indeed something neurologically and behaviorally different, though this is an academic or semantic distinction at this point.

***Compare to e.g. creationists, who often spend much more time talking about how their enemies are suppressing them than providing actual arguments and data, making predictions, or trying to do something pragmatic and useful with their "theory". Where are the creationist biomedical companies?

Friday, March 12, 2021

Patterns of Misperception Between Chapman's Stages of Moral Competence

David Chapman has developed a very interesting framework, elaborating the moral psychology developed by Kegan and Kohlberg, to address the differing ability of humans to co-exist in groups at varying levels of complexity, through their ways of finding meaning. This can be seen in their moral competence, a phrase Chapman often uses.

If you're not already familiar with Chapman's stages, I suggest you first visit his pages, then return here for this additional detail. Part of the interest is that the stages correlate fairly well with modes of civilization over time, including states, religions, and art.

The theory, if that's the right term, is a rich one in that it rewards interrogation with further insights. For example - yes, people do vary in their achievable levels of competence, an uncomfortable realization which Chapman emphasizes less than Kegan.


Another observation Chapman makes is that to a person at level X, level X+1 is indistinguishable from level X-1. (The following will make no sense if you haven't read his work, so if you haven't, please do.) Let's call this misperception pattern A, or "they're all the same."

Example A-1: to a communal level 3 person who cannot function at level 4, the institutional-minded level 4 boss who fires her for constantly missing work due to family obligations just seems like a level 2 psychopath. She can't tell the difference from her stage of moral competence. (Concrete example: think of an immigrant to a Western country living with their family, or J.D. Vance's hillbillies, working for a large corporation.)

Example A-2: to a level 4 institutionalist, the level 5 person just seems like a tribalist/communitarian. Think of that same corporate manager, watching with frustration as their kids participate in a gig economy, maybe program part-time, live in a co-op, have polyamorous relationships - to the corporate manager, this is responsibility-shirking, sloppy living just like the hillbillies.


I'd like to propose another pattern: People at level X can function superficially at level X+2. This is misperception pattern B, or "superficial skipping levels."

Example B-1: a level 1 person (who is dependent on others and cannot even provide the basics of their life) can survive in level 3 settings, but is not a net contributor and does not truly find meaning through their family or communal settings. Children are level 1 briefly as infants and toddlers; disabled people may be level 1 throughout their lives.

Example B-2: a level 2 psychopath (my term) can, for a time, survive in an institutional (level 4) setting. Their mechanical transactionalism superficially is a good fit for the rule-based world of the institution. However, in a good (high-functioning, rational, mission-driven) institution, their behavior is not sustainable. (Unfortunately for many of us it is not hard to imagine this. See here for the emergent behavior of institutions - they are neither constellations of individuals, nor collectives.)

Example B-3: a level 3 communalist can seem to fit into a level 5 setting. Ultimately they will find the shifting modes of meaning incomprehensible and frustrating, and will either split off into an actual communal splinter from the level 5's around them, or return to their level 3 community of origin. Think of the level 5 son. He may have met someone who he thought would be an interesting person to start an intentional community with, a guy from Guatemala playing the guitar in a park, and invited him to come out to the desert for a while. The guy tries, but finds it all very weird, and would just rather be with his family.


For an overview of Chapman's stages, start here.

Sunday, February 14, 2021

Nkondi - Fetish Dolls

I've long been obsessed with these intimidating objects and I'm not surprised to learn that the design for Pinhead in Hellraiser was strongly influenced by them. They are reservoirs for aggressive spirits meant to defend against would-be harm-doers. A nail is pounded in every time you want to wake up the spirit.

