Sunday, September 29, 2019

Complex Dynamic Systems Like Cells, Humans, and Nations Cannot Avoid Cycles of Paroxysmal Disillusionment and Suffering

The following principles apply to any dynamic complex system, including organisms, individual people, or organizations - corporations, nations, or religions. They demonstrate that distortions will inevitably accumulate in the behavior of such a system, causing paroxysmal shocks and suffering.

Let's define a system as a set of components discrete from the rest of the universe. Let's define dynamic complex systems as ones that are high-information relative to the rest of the universe around them, high enough that in the absence of opposing forces, entropy passively favors irreversible changes happening to the arrangement of the system's elements which make it no longer discrete.[1] If such a set of components that exists now is going to exist more-or-less unchanged in the future, it has to perform actions[2] - it has to be dynamic - which reduce its own entropy at the expense of the rest of the universe.[3] The rest of the universe may include similar systems.[4] (Going forward, when I say "system" I mean "complex dynamic system.")

The seven principles are outlined below, then described in more detail.

  1. A system must be mostly concerned with its own perpetuation, or it will not persist. "If a complex dynamic system has been around for a while, it's designed to last and expending energy to do so."

  2. It must do this by reflecting - reacting to - aspects of the rest of the universe. Aspects of the rest of the universe important enough that the system tends to develop ways to change its state in response to them are called stimuli. A reverse but familiar statement of this principle: "That which gets measured, gets addressed."

  3. A measurement always contains less information than, and is therefore not a full or fully accurate representation of, the thing being measured. Perception cannot exactly be reality; "this is not a pipe."

  4. Over time, the focus on self-perpetuation leads a system to become concerned with itself to the point of minimizing the importance of, or responses to, aversive stimuli, in order to avoid altering its state (which is also aversive). "Everything that gets measured, eventually gets gamed."

  5. The system's responses become increasingly un-moored from the external world, favoring its own perpetuation over other functions, and/or having a severely distorted model of the world and reaction to stimuli. "Eventually, everything becomes a racket [and/or gets delusional]."

  6. The distortions accumulate until a sufficiently destructive stimulus occurs (a shock), which either reorients the system (usually with severe aversive stimuli), or destroys the system. If the system survives, such shocks will happen repeatedly, but necessarily unpredictably. This is called inherent cyclic crisis.

  7. This is inherent to any self-perpetuating dynamic complex system, and because these shocks are correctly perceived as worsening survival, they cause inevitable suffering. We can call this the Final Noble Truth, a vague parallel to the Buddhist First Noble Truth.

Following is an expansion on each principle.

Principle 1: "If a complex dynamic system has been around for a while, it's designed to last and expending energy to do so." If a system is going to continue existing, a top priority on self-preservation is mandatory, and self-preservation must be the primary influence on perception by and reactions of the system. Since Darwin, thinking of organisms in this way is not revolutionary. But the same principle applies to any other complex system, including human organizations. Corporations have a relatively clear function in this way (they can't keep making money for shareholders if they don't keep existing) but it's more surprising for most of us to think of religions, countries, or volunteer organizations in this way. The converse: if a complex dynamic system is not expending energy on its self-perpetuation, it will not exist for long. (Many apparent mysteries, like the transparently weak business plans of many a Web 1.0 company, are resolved on realizing that they are not in equilibrium and will perish quickly. And indeed they did.)

Principle 2: "What gets measured, gets addressed." This seems obvious enough, especially to those of us at all interested in organizational dynamics. Principle 1 directs the kinds of things that have to be measured if the system is to persist, and principle 3 will add that there is always some difference between the measurement and the totality of the outside world; that is, at least some important information is always missed, and what is missed cannot be acted upon. Concretely speaking, genes reflect the outside world by establishing sensor networks that interact across the inflection point (the cell membrane, or in the case of multicellular organisms, the body). Some sensor networks have become very rapid and fine-grained reflections of the outside world: xenobiotic metabolism enzymes (which have reacted in only millennia, and whose genetics differ considerably even between groups of humans), adaptive immune systems (which also differ between groups of humans and react in minutes), and of course nervous systems, the paragon.[5] But all of them make sacrifices and do not (cannot) sample all of the possible information available. It should be pointed out that living things do not have to constantly repeat the mantra "what gets measured gets addressed" because that's how they're already built and behave, automatically and obligatorily, as was the case for eons' worth of their ancestors. This is not the case with human organizations, which are new developments in nature and may not be in equilibrium - so those organizations that you notice do not measure important (survival-supporting) data are unlikely to exist for long either. Whatever corporations or their descendants exist in a million years, it won't be ones that didn't respond to relevant metrics.

Principle 3: Perception cannot exactly be reality; "this is not a pipe." Representations - measurements - are never the same as the things themselves, and incentives are never aligned perfectly with desired outcomes (almost trivially; perfect alignment would mean identity, that is, the incentives would be the desired outcomes). There must always be a limit on information collected, and inferences are not always correct. There is infinite information a system could in theory collect about the universe (looking for correlations between each datum or set of data), but the system is more likely to perpetuate itself the more information it collects, and the more impactful that information is. It is this design choice by the system to sense survival-relevant data that turns one of infinitely many facts about the world around the system into a stimulus. Obviously, which things it takes as stimuli - what it measures - is relevant. (Not to mention that if the system is in competition, especially with others using the same resources, there are time and resource limitations on how much data it can collect before altering its state.) The implication is that there is a limited set of information collected out of all possible information - what the system receives as stimuli - and these stimuli are necessarily very heavily biased toward self-perpetuation.[6]
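The information loss in principle 3 can be made concrete with a toy sketch (all names and numbers here are invented for illustration): quantize a continuous world-state into a few bits of "measurement," and many distinct realities collapse onto the same reading.

```python
# Toy illustration of principle 3: a measurement carries fewer bits
# than the thing measured, so genuinely different world-states can
# become indistinguishable to the system. Illustrative numbers only.

def measure(x, bits=3):
    """Quantize a world-state x in [0, 1) to one of 2**bits readings."""
    levels = 2 ** bits
    return min(int(x * levels), levels - 1)

a, b = 0.50, 0.55  # two genuinely different world-states
print(measure(a), measure(b))  # both collapse to the same reading
```

A system built on `measure` cannot react differently to `a` and `b`, no matter how consequential the difference turns out to be.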

Principle 4: "Everything that gets measured, eventually gets gamed." This is similarly familiar, and here is where the tension is set up. Systems must perceive (measure) and react to their environment. Their measurements are not the same as the things in the environment, only reflections. Because of this, systems react to the measurement - the perception - not the thing being measured. This is not a trivial difference. Anyone who has worked at a large corporation or applied to professional school is familiar with this, and we all know examples where an endpoint was achieved in an only-technical, meaningless way that did not advance toward the real-world goal outside the organization that the endpoint was meant to incentivize. To "follow the letter but not the spirit" is an aphorism which expresses this. Case in point: many companies have sexual harassment or racial sensitivity training. These often take the form of instructional videos with quizzes after them. Most people skip and fast-forward through the videos as fast as possible while still registering as if they watched the whole thing, often keeping a second browser open so that when they get to the quiz they can go back if the answers aren't obvious. Of course this raises the question of whether there are some types of training where the written tests to get the credential have nothing to do with performing the actual work. For example, in the early-to-mid twentieth century, people became scientists because they liked being in labs, were good at organizing experiments, and in general got immediate feedback directly from their work, and therefore performed better, and therefore were recognized for it by peers and superiors, more so than is the case now.[7] Do the best GRE scores (and administrative maneuvering, and recommendations, and tolerance of modern graduate school politics) really correlate with the best scientists? Does the same process produce skilled and caring physicians?
For examples in the individual human, take your pick of any of a host of brain receptor-tricking molecules like opioids or alcohol, as well as immature psychological defenses like denial. Cancer is another example. Unlike infection or physical injury, cancer doesn't hurt until it's about to kill you, thanks to both the earlier reproduction of our ancestors relative to the later onset of cancer, along with the black swan of radiation and brand-new-to-nature chemicals.[5][8]
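The mechanics of principle 4 can be caricatured in a toy simulation (everything here - the functions, weights, and step counts - is invented for illustration): give a hill-climbing optimizer a proxy metric that only partly overlaps the true goal, and the proxy score outruns the goal.

```python
# Toy Goodhart sketch: an optimizer climbs a proxy metric that only
# partially overlaps the true goal. The "gaming" dimension y is twice
# as profitable to the metric as real progress on x, so the proxy
# score inflates faster than the goal. Illustrative quantities only.
import random

random.seed(0)

def true_goal(x, y):
    return x             # what the world actually rewards

def proxy(x, y):
    return x + 2 * y     # what gets measured: y is pure metric-gaming

x = y = 0.0
for step in range(200):
    # hill-climb the proxy: try a random tweak, keep it if proxy improves
    dx, dy = random.uniform(-1, 1), random.uniform(-1, 1)
    if proxy(x + dx, y + dy) > proxy(x, y):
        x, y = x + dx, y + dy

final_proxy, final_goal = proxy(x, y), true_goal(x, y)
print(f"proxy={final_proxy:.1f}, true goal={final_goal:.1f}")
```

The optimizer is never "cheating" in its own terms - every accepted step improves its measurement - yet the gap between measurement and reality grows with every step.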

Another type of distortion has to do with the structure of the system, which affects the way it behaves, rather than perceptions per se. It's long been noticed that corporations become less "nimble" (responsive to market change, i.e. the relevant universe outside the corporation) as their surface area-to-volume ratio goes down.[9] The higher the surface area-to-volume ratio, the more information can be collected and the more effective responses can be. Think of bloated giants like big automakers or old engineering companies, where in Dilbert-like fashion people think more about maneuvering in their jobs, coordinating with other departments within the company ("transfer pricing"), or competing with other people within the company than they do about their outside-the-company competitors or serving the market. This certainly occurs within states as well, where to various extents for Chinese dynasties, the Roman Empire, and the Spanish colonial empire, downfall was more the result of special-interest maneuvering and other intrigues directing attention consistently inward to the court - because what could be going on outside the palace walls, where the barbarians are, that's more important than what's inside, where the power is? So we have Zheng He's fleet being recalled, Roman patricians scheming in the absence of a succession rule to get legions on their side, and Spanish royalty neglecting overseas possessions until the British and their offspring ate their lunch.[10] There's a final type of distortion which arises from the way that nervous systems save time and energy: the more some stimulus-response pairing occurs, the less reward-sensitive it becomes. It moves from being a conscious act requiring effort and concentration, to a habit, to (in biology, programmed through evolution) a stereotyped movement, then a reflex.
Once a stimulus-response pairing has started moving down the line it is almost impossible to move it back other than by over-writing it with another stimulus-response pairing.

Principle 5: "Eventually, everything becomes a racket [and/or gets delusional]." This quote is attributed to the late George Carlin. A more obscure paraphrasing of this, in specifically political terms, is "The state is primarily concerned with itself." Here we can see the full flowering of the problems buried in the earlier principles. There is a constant tension: the system needs negative feedback, but it also avoids negative feedback - that's what aversive stimuli are for - and because the measurement and the thing measured are not the same, the metric is game-able, so the system increasingly avoids these stimuli by gaming them rather than taking real action. (It simultaneously makes unhelpful-to-survival end-runs to pursue positive stimuli.) Consequently the stimulus-response arc gets more and more distorted with respect to the actual longer-term perpetuation of the system. This seems paradoxical in light of principle 1, but because systems are never perfect reflections of the universe around them, they necessarily always react based on limited information (especially with respect to long-term consequences) and sometimes on outright distorted information their machinery is feeding them. The necessary self-focus means that these distortions will tend toward pursuing pleasure, avoiding pain, and believing everything is alright when it is not; over arbitrary time, whatever non-self-perpetuating parts of a system's "purpose" previously existed will atrophy, and its behavior will become more distorted in favor of comfort and perceived survival over actual survival.[11]

The distortions come not just from "gaming" pain. Organisms can hack themselves to fire off their reward centers without an actual survival enhancement - for example, with heroin, masturbation, or consuming resources that prior scarcity drives them to consume as frequently as possible, but which have become "cheap" to the point where their over-consumption causes problems (in humans: fat, salt, and sugar). Opioids are, in humans, the closest thing to the artificial intelligence problem of "wireheading," where a self-modifying agent given a task can modify itself to be satisfied even though the task is not completed.[12] Good examples of rackets are religions and charities that depart from their stated mission in favor of wealth-accumulation and self-perpetuation. (See GiveWell's list of charities which maximize their mission rather than their income.) Profit-seeking entities whose products or services intrude into "sacred" (i.e. non-transactional) realms (best example: healthcare) often find that self-perpetuation wins out over their claimed mission. Organizations and individuals can also become delusional - humans are incorrigibly overoptimistic and discount the future.

Principle 6: inherent cyclic crisis. Eventually the stimulus-response arc becomes so distorted that the system encounters a survival-threatening problem it cannot properly perceive and respond to; by this time the gap between perception and reality is profound, and the result is a shock. Surviving the crisis, if possible at all, is quite painful.

Black swans are indeed one type of crisis, but missing an impending black swan is the fault of the system only to the extent that the system could reasonably have anticipated the event, given the experience it had to draw on. More salient here are crises precipitated by the accumulated distortions in the system's perceptual machinery, where the system "should have known better". At the organizational level, nations might collapse because their ideology, increasingly un-moored from reality, led them to weakness on the battlefield out of refusal to update their armies with modern techniques and technology. Nations with dysfunctional (delusional) organization meet reality catastrophically on battlefields, and religions (sometimes) collapse when encountering reality. Crusades failed due to Christians' belief that God would intervene; medieval Europeans with a military hierarchy based on nobility were crushed by Mongols whose ranks were based on merit; Washington in the Seven Years War lost to the French because he insisted on fighting like a gentleman in rows; then the British lost to Washington in the American Revolution because they still insisted on this formation, and he no longer did. (Many of these could be considered examples of the advance of "rational" (and more destructive) warfare over traditional warfare.) For the young Washington and the later British Empire, the losses did not destroy them but came as painful shocks. For many near-delusional Crusaders, and for the combined German-Polish-Hungarian forces in thirteenth-century Europe, the shock did result in their destruction. On the individual level, any delusional or distorted behavior (psychosis, neurotic defenses, substance use) results in a painful shock in the form of adjusted behavior or shattered beliefs, or in some cases, the death of the individual.
Someone might underestimate the risk of driving while intoxicated or in inclement weather, and crash, injuring or killing themselves or their family, updating their beliefs only in crisis. The less (or more distorted) the feedback, the more often and faster these crises occur, as illustrated by highly centralized argument-from-authority regimes (famine under Stalin, driven by the divorced-from-reality - and, in Stalin's USSR, unquestionable - Lysenkoist theories of biology and agriculture).

Principle 7: What to do about it? The Final Noble Truth is that cyclic crises are inevitable in any complex dynamic system. As conscious complex dynamic systems called human beings, composed of complex dynamic systems called cells, and belonging to conglomerations of complex dynamic systems called nations, corporations, and political or religious belief systems, we will occasionally have shocks that kill us, or that, even when we "wake up" and adapt, still hurt us quite a bit. This happens in national collapses and revolutions as well. In arbitrary time, the problem will always re-emerge. Your measurement is not the same entity as the thing it measures - unless the system comprises the whole, and then there is no boundary, and of course no system.[13] How can we minimize the inherent problems that lead to this cycle?

  • Constant testing and cross-checking between senses and expectations. In individuals we already do this automatically (corollary discharge, binding between senses, and discomfort when our binding expectations do not match observation). Cross-checking beliefs and assaying decisions at multiple points, in ways that will quickly expose them if they were bad, is helpful for individuals; it probably won't hurt to think twice about that turn you just made in the deep dark woods, even if you feel quite confident about it. Critical thinking is one form of this. Cognitive behavioral therapy is another.

  • Increasing the amount of feedback. This facilitates the suggestion above as well. It is good to decrease the consequences of raising objections in groups. Calibrate yourself: when people or organizations have secure egos and want to get better at something - running a mile, or making better decisions - this is what they do. An experiment on fooling an agent with a "delusion box" showed that through a constant drive for being surprised - by learning new information - an agent gets out of being deceived fastest. Of course this itself is also hackable (the machine could reset itself; you could convince yourself you're learning important new information when really you're just confirming your biases). Pushing until you reach failure, in athletics or decision-making, is an instrumental rather than epistemic form of increasing your surprise.

  • To the extent possible, rely on positive feedback. Negative feedback is, by definition, what systems avoid, and they will avoid it by gaming it if necessary. Therefore, systems should put themselves in situations where the ratio of positive to negative feedback is higher, so that the feedback is less likely to be avoided.

  • Simulating negative outcomes. In other words, expect the worst. You will never be disappointed, and you will have prepared yourself for the shock. Negative visualization, as practiced in Stoicism, is a technique for doing exactly this.

  • Respecting a system's realistically unmodifiable constraints - especially if the system is you. This is especially true at the individual level. Humans as a species are not built to question close family relationships, especially without reason, without becoming depressed and damaging those relationships (asking your daughter once a day whether she really loves you will not help you or her). For that matter, negative visualization reliably causes quite a few people (including me) to suffer rather than feel better. Constantly second-guessing every decision, like that turn you made in the woods, may erode your confidence and spark depression. Of course, your daughter really might secretly not love you, and your family might die, and you might have made a wrong turn (and you really can't fail without consequence at some things). But it will probably make you suffer more in the long run to think about this all the time, and you should pay attention to your reactions to see where your thresholds are.
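The "delusion box" result mentioned in the feedback bullet above can be caricatured in a few lines (a toy sketch under invented assumptions, not the original experiment's setup): an agent that prefers whichever environment yields more prediction error abandons the box, where observations always match its predictions.

```python
# Toy "delusion box": inside the box, every observation exactly matches
# the agent's prediction (zero surprise); the real world is noisy. An
# agent that seeks surprise leaves the box. Illustrative numbers only.
import random

random.seed(1)

def observe(in_box):
    # the box feeds back exactly the predicted value; reality does not
    return 0.0 if in_box else random.gauss(0, 1)

prediction = 0.0  # the agent predicts 0.0 everywhere
in_box = True
for _ in range(50):
    surprise_in = abs(observe(True) - prediction)    # always 0.0
    surprise_out = abs(observe(False) - prediction)  # almost always > 0
    # a surprise-seeking agent moves to wherever prediction error is larger
    in_box = surprise_in >= surprise_out
print("agent stays in the delusion box:", in_box)
```

A comfort-seeking agent with the opposite rule (minimize surprise) would stay in the box forever, which is the failure mode the bullet warns against.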


[1] Though not the purpose of this argument, this does set up a useful boundary for defining living and non-living things. Because it's quantitative it doesn't suffer from a problem of boundary cases, and sentience is nowhere considered, but it does appeal to common sense - stars and fire are at one end of the spectrum and prions and meme-complexes are at the other. Most current electronics are minimally dynamic, which makes them "less alive" than is often argued.

[2] It's implied that these actions are cyclic, like catalysis in biology or the Krebs cycle, or else the system would be unlikely to return to any previous state, and you don't have a self-perpetuating system.

[3] There's a strong argument to be made that defining a system as separate from the rest of the universe is arbitrary. However, this becomes less true as the system develops additional complexity for its self-preservation. There is an increasingly sharp inflection point at some physical boundary of the system where the exchange of matter and energy between unit volumes drops, and where events on one side of that boundary have much more impact on the future of the system than events on the other side. That is the self/non-self boundary. In cells this is easily recognized as the cell membrane. In nations, although the boundary becomes more complicated, it is ultimately spatial, because of the primacy of space. Even in corporations or religions this remains the case: the individuals in those organizations, or carrying those beliefs, as physical beings are still dependent on the predictability, safety, and resources permitted by more "basic" forms of organization. A good example is the early evolution of life: it is recognized that an RNA molecule (or RNA-protein complex) would not benefit from any reaction it could catalyze any more than another molecule in its vicinity would, or at least not as much as it could if the reaction products were sequestered. Consequently, when nucleic acids were enveloped in lipid membranes, natural selection accelerated, and the self/non-self boundary became less arbitrary.

[4] In a zero-sum setting with limiting resources (which is a necessary condition given arbitrary time) this is a good definition for competition. Unless you count Boltzmann brains, it is likely that a system will find itself in a world with other similar systems.

[5] You may have noticed that there are no examples "below" humans among the examples above. There are many examples of behaviors in humans, and in human organizations, where metrics are gamed; there are far fewer in organisms besides humans. Some species of pinnipeds dive deeper than any prey we know of, we think, just to alter their consciousness, and African elephants go out of their way to consume fermented (alcoholic) marula fruit. But there is nothing like the systematic distortions we see in human psychology. It seems likely that the simpler, more fecund, and faster-cycling an organism is, the less it can afford a gap between its response to its metrics and the survival-affecting things actually occurring in the world. That we don't see many organisms gaming their metrics could be both because their stimulus-response arcs are simpler, and because a distortion in these arcs will more quickly kill off the organism, so they don't come to our attention. This also implies that having a mind as complex and powerful as ours provides unique opportunities for distortions - that organisms focused on "survival, unclouded by conscience, remorse, or delusions of morality" (to quote a psychopathic movie android) in fact have a more accurate view of the universe (not incidentally, a central theme of the Alien films). Having complicated brains and surviving by imitating each other, we find our nervous systems constantly hijacked by self-reproducing ideas in a way that our genomes never were (lateral transfer events are incredibly rare), and those memes are selecting, as per principle one, for their own self-perpetuation; they want to avoid killing us outright, and use us to spread them, so they are commensal at best. If there is an analogy, the meme complexes we get from our families are not genes but rather our microbiome.
It should also be pointed out that as we congratulate ourselves for taking over the planet due to agriculture and combustion engines, we are living through the sixth mass extinction, suggesting that in fact we are not acting in our long-term best interests, that like cancer, ecocide might not hurt enough until it's too late, and that intelligence is an evolutionary dead end.

[6] There is a spectrum of arbitrariness - of how "symbolic" the perceived stimulus is relative to the thing being perceived. In the engineering of signal systems, the closer your signal is in a physical causal chain to the stimulus - the thing it is signaling about, or measuring - the less arbitrary it is. Digital systems are more powerful in many ways than analog systems, but they accept increased arbitrariness and complexity in exchange. Case in point: people who fear assassination can build elaborate electronic sensing systems to avoid being approached while they sleep, but there are always questions: can they be hacked? What if a component fails? What if the power is out? Can a spy shut them down? Compare this to the system used by the Tokugawa shoguns: sleeping in the middle of a large room with a squeaky wooden floor and many tatami barriers, and choosing a place on that floor to sleep at random each night.

[7] Of course other things have changed about the way science (and medicine) are practiced over the past half-century, not to mention that all the "low-hanging fruit" - problems accessible to the specific strengths of human cognition - may have been picked soon after the Enlightenment started. But it remains a concern that by (not unreasonably) trying to regularize and make transparent the application and career-progression process, we're selecting for attributes that have little to do with being a successful scientist or physician, or even selecting against them, because we're using "artificial" endpoints distant from the relevant abilities, which can be and are gamed. Certainly this problem is not unique to science and medicine, and whatever is causing the phenomenon, it's having real-world economic consequences. An interesting historical study would be to see whether the health of the Chinese government across the dynasties waxed and waned in any relation to some aspect of the civil service examinations.

[8] Referred visceral pain is an example of an aversive stimulus-sensing system that gives very inexact answers, because it has not been important enough to improve. If your arm is hurt, you can point to exactly where, even with your eyes closed. But when people get appendicitis, very often in the early phase they point to the center of the abdomen around the belly button, and only gradually does the pain move to the area immediately over the appendix - and only after the overlying tissue, which is innervated by somatic ("outside"-type) nerves, is irritated. Often people with a problem in their abdominal organs or even their heart feel extremely sick, anxious, and in general uncomfortable, but can't point to any specific spot. Why does this make sense? If a scorpion scrambles up onto your left elbow and stings you there, it's worth knowing exactly where the stinging is happening so you can act in a way that improves the situation. But if you were sitting around a fire with your tribe in the African Rift Valley 100,000 years ago with appendicitis, what exactly could you do about it? If you had bad stomach pain, it didn't matter exactly where it was; you curled up in a ball where your family members were nearby to care for and protect you, and hoped it passed.

[9] In contrast to corporations, single-celled organisms survive best not when they have a high surface-area-to-volume ratio (like successful nimble corporations) but a low one, which is why they are mostly near-spherical. Corporations, while competing with each other and in some ways with their customers, are still operating in an environment that is predominantly cooperative, so it's better to have lots of customer interaction surface. Bacteria exist in an environment of constant unpredictable ruthless lawless natural selection. It's really about how much the surface is an asset for information gathering by the system, versus a liability to attack from competing systems. Consequently, for bacteria, the sacrifice of knowing less about the outside world (which at that scale is less predictable than our world anyway) must be worth it given the overall survival advantage gained by being in the shape that most maximizes distance of any unit volume from the surface. In contrast, there are cells in biology that maximize surface: neurons, and nutrient absorption membranes deep in the GI tract. Both of these exist deep in the organism (especially neurons) in a web of profound cooperation (also especially neurons.) The more fractal a complex dynamic system, the more likely it is to exist in an environment of predominant cooperation. The more spherical a complex dynamic system, the more likely it is to exist in an environment of predominant competition. In the case of corporations, the shape is somewhat "virtual", but corresponds to points of contact per customer and ease of contact, which ultimately are still going to require space. Nations are somewhere in the middle, though it would be interesting to see if nations now, more cooperative and less violent than they historically were, are more likely to have fractal borders, or shared zones (my predictions) than one or two centuries ago.
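The geometry behind this footnote can be checked directly (the slab dimensions are invented for illustration): for a fixed volume, a sphere exposes far less surface per unit volume than a flat, membrane-like shape.

```python
# Compare surface-area-to-volume ratios at fixed volume: a sphere
# (competition-optimized, like a bacterium) versus a thin slab
# (cooperation-optimized, like absorptive membrane). Shapes and
# dimensions are illustrative only.
import math

V = 1.0  # fixed volume for both shapes

# sphere of volume V: r = (3V / 4*pi)^(1/3), SA = 4*pi*r^2
r = (3 * V / (4 * math.pi)) ** (1 / 3)
sphere_sa_v = (4 * math.pi * r**2) / V

# thin slab 10 x 10 x 0.01 (also volume 1)
a, b, c = 10.0, 10.0, 0.01
slab_sa_v = 2 * (a * b + b * c + a * c) / (a * b * c)

print(f"sphere SA/V = {sphere_sa_v:.2f}, slab SA/V = {slab_sa_v:.2f}")
```

The slab's ratio is more than forty times the sphere's, which is the quantitative version of the trade-off described above: maximal contact surface for information and exchange, versus minimal exposed surface for defense.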

[10] This corresponds to Level 3 operations as described here, which explains how large organizations work but is not an argument that they should work that way, for Level 3 organizational decisions often lead to the downfall of the organization unless the inner circle in the super-Pareto distribution has the best interests of the organization at heart. In politics this happens either because the inner circle is in a feedback loop with, and beholden to, an informed electorate, as in functioning democracies, or, less likely, by luck, as with benevolent dictators, e.g. Lee Kuan Yew.

[11] There's an apparent conflict here. On one hand I'm arguing that systems become distorted because they're focused on self-perpetuation; on the other, that they focus on metrics, which they game - thereby undermining their own survival. The resolution is that systems act on perceived survival (the metric), not actual survival, and gaming improves the former while degrading the latter.

[12] Organisms are exempt from becoming "rackets." A racket is a system that has a claimed mission besides its self-perpetuation but is in fact only self-perpetuating; organisms are openly survival systems, full stop, and make no claims to the contrary.

[13] It may not have escaped your notice that one implied solution - expand the system until it comprises the whole universe, so that there is no self/non-self boundary - is, at least on the individual level, one advocated by many mystical traditions. We actually achieve this when we die, so in individual terms this could be re-formulated as "lose your fear of death." Yet our read-only hardware makes this a terrifying and unpleasant experience, even, empirically speaking, for life-long meditators. For now, this is not a real solution.

APPENDIX: Analogous Terms

                             | Non-human Organism                         | Human                              | Organization
Input                        | stimulus                                   | perception, belief, representation | measurement, metric, dogma
Reflection, Output, Reaction | response                                   | belief, behavior                   | decision
Examples of gaming           | Rare; some higher animals seek out "highs" | Opioids, denial, delusion          | Preserving letter but not spirit
Examples of crisis           | Death                                      | Personal disillusionment or death  | Revolution or collapse

Saturday, September 28, 2019

Three Levels of Operations in Organizations

Humans are unspeakably complex objects, and the associations they form - and the way their behavior affects those associations and is affected by them - are even more complex. Nonetheless we can make some accurate predictions, the more so when the number of individuals is higher. It's pretty difficult to predict whether a single person will be worth more or less, or making more or less, a year from now (even with good information today), but we can make a decent guess about what the economy will do. That said, when we talk about groups of people - especially states - we oversimplify.

Level 1 thinking about organizational decisions and behavior - the single entity fallacy - is demonstrated in statements like "Germany invaded Poland," "The Americans wanted to annex the territory all the way to the Pacific," or "General Motors wants to buy Tesla." (I made that last one up.) These assertions are not meaningless, but they grossly (and probably necessarily) oversimplify the collective action of thousands or millions of people. We might picture a giant made of fused-together bodies serving the collective good. Something approaching this subsumption of identity and individual interests occurs more easily in smaller and more homogeneous groups, and is easiest when individuals in those groups were programmed by pre-verbal and pre-rational early life experience to identify with the tribe and its authority. The leader in such a situation is showing transformational leadership, and this corresponds to Chapman's level 3. (See more about his model of the levels at which humans derive meaning through their associations with each other.) Indeed, it has been argued from a philosophical standpoint that any group of people (from married couple up to nation) cannot be said to have (consistent) preferences.

Level 2 thinking - the amalgamation of individuals fallacy - is more rarely seen, because it's more complicated and manifestly not how nations or individuals function. In this model, there are only individuals, constantly calculating what they're getting out of association with the group, and there is no group; or rather, the group exists only as a product of individual interests, and talking about the group's actions adds nothing to our understanding or to the accuracy of our predictions. Leaders in such situations are transactional, and this corresponds to Chapman's level 2. For obvious reasons, such associations tend to be unstable over time. This is more common in companies than states (since the former have a mostly or purely transactional mission), but even in companies, there is usually an identity-subsuming transformational aspect. And even in states, we do often see this in the a-ideological alliances and constant defections that occurred before the Enlightenment and democracy, and still occur in the developing world.

Level 3 thinking - There is a super-Pareto principle in how much the decisions of any one individual in a tribe, company, or nation affect every other individual, and this is helpful in coming to a more useful and accurate model than the mere amalgamation of individuals. The people making the decisions are certainly influenced to some degree by personal interests that do not necessarily align with those of all the other individual members of the state (or company - for instance, institutional investors take note when a fund manager is nearing retirement, because s/he may start making decisions that benefit his/her retirement in the short term but not the company or its shareholders in the long term.) But they too likely have pre-rational tribal affiliations, along with the concrete reality on the ground that this, and not some other tribe/company/nation, is the one that they're in and helping to run. Furthermore, there is constant feedback from the individual to the group (as in level 2) and then back from the group to the individual in the form of things like social norms. This is therefore the cyclic individual-within-group theory, and when companies or states seem to "make bad decisions," those decisions can almost invariably be explained in light of benefits or risks to individuals that don't track those to the organization overall. Interestingly, overproduction of elites is a core feature of Peter Turchin's predictions for the West and the U.S. in particular, and is an outstanding example of the cyclic individuals-within-groups model.
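To give a feel for what "super-Pareto" concentration of influence looks like, here is a purely illustrative sketch - not drawn from any real organizational data; the choice of a Pareto distribution and the tail parameter alpha are my own assumptions for illustration. It shows how a heavy-tailed distribution of per-member influence lets a tiny inner circle hold most of the total decision weight:

```python
import random

random.seed(0)

def top_share(n=100_000, alpha=1.1, top_frac=0.01):
    """Draw each member's 'decision influence' from a heavy-tailed Pareto
    distribution and return the share of total influence held by the
    top `top_frac` of members."""
    draws = sorted((random.paretovariate(alpha) for _ in range(n)), reverse=True)
    k = int(n * top_frac)
    return sum(draws[:k]) / sum(draws)

share = top_share()
# With a tail this heavy, the top 1% of members typically hold well over
# a third of the total influence - far beyond the 1% a uniform model predicts.
print(f"share of total influence held by the top 1%: {share:.0%}")
```

The point of the toy model is only that under heavy-tailed influence, reasoning about "the organization's decision" collapses, in practice, into reasoning about a handful of individuals.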

Thursday, August 29, 2019

Alternative History #8: Ancient East Indian Settlement of Australia

For the previous installment, see Alternative History #7: German-Led Native Shock Troops in California.

I've often wondered why Australia wasn't colonized by the Chinese, Indonesians, or Maori prior to Europeans. Approaching from the north, the Chinese and Indonesians would have encountered horrendous impenetrable swamps crawling with saltwater crocodiles. The Maori might have had an easier time landing in what is now Victoria or New South Wales, but they only arrived in Aotearoa five centuries before Europeans. And if the Aboriginal Australians themselves made it to Australia 60,000 years ago, how hard could it be for someone who had an actual boat? Why not Indian explorers or traders? Even in that early era it's likely the Asians would have had substantially more advanced stone tools and stoneworking techniques than the Australians, which they would have introduced and which would have quickly spread through trade and warfare. Australia might also have been colonized by non-native fauna the visitors brought with them - non-marsupial mammals that would stick out against the evolutionary background of the isolated continent.

Once again, this isn't alternate history. I recently ran across a paper by Irina Pugach, working in Mark Stoneking's lab at Max Planck, showing genetic evidence of contact around four to five thousand years ago. Most intriguingly, this is nearly simultaneous with a change in aboriginal stone tools and the introduction of dingoes to the continent. It's very hard to believe that's a coincidence. (Disclosure: for a year I worked for Stoneking as an undergrad, on a project showing the mtDNA evidence supporting a Polynesian origin for the settlers of Madagascar. This was with radioactive sequencing - if you could get 200 clean bp every 2 days, you were a wizard. I got a Howard Hughes monetary award for it in a research fair, not a grant.)

REFERENCE: Pugach I, Delfin F, Gunnarsdóttir E, Kayser M, Stoneking M. Genome-wide data substantiate Holocene gene flow from India to Australia. PNAS 2013;110(5):1803-1808.

Sunday, August 18, 2019

Religious Adherents: Bad Stripe Not Visible, But Is This Data Meaningful?

Whenever I see cultural maps of the U.S. like this one, I look for the Bad Stripe, a coherent area that pops out as below-average in maps of human development indices. It stretches roughly from far western PA through West Virginia, Kentucky, and Tennessee, across Arkansas, and into Oklahoma (see more here.) My expectation would be higher religious belief across the Bad Stripe, or at least some pattern that makes the Bad Stripe stick out from surrounding areas. It's not present at all - and typically, lower human development means more religious. Bad Stripe or not, there are some big surprises on this map, and it doesn't really align with either what most Americans would expect or my own experience traveling and living in the country. West Virginia is much less religious than Western Pennsylvania? Really? Central California has a religious stripe across the middle? I've lived near both these places and find this hard to believe. The Frontier Strip is evident, but the Rockies, especially to the north, are mostly less religious than other rural areas - and the similar level of religiousness between rural and coastal areas of the Pacific states is also very suspicious. If California is going to have religious and unreligious zones, they're more likely to run north and south, parallel to the (liberal, likely less religious) coast.

One problem across all such surveys is that how one defines "religious" (or in this case "religious adherents") matters a great deal. Was the question something like "How important is religion to you?" Or "Do you belong to a church?" Looking at this map, I strongly suspect it was the second, and that many people in some areas (e.g. West Virginia) who do not belong to a church or regularly attend services would still say that religion is quite important to them. A place that happens to have a single large church would look very religious, whereas a place where people were very religious but did not have many churches would look very un-religious.

Tuesday, July 23, 2019

Cause and Effect in Complex Systems - Think in Terms of Reinforcement Cycles

In the last post, I wrote about the virtuous cycle of labor gradually getting more valuable, driving the development of machines to extend that value, which in turn makes it more valuable still. This is an example of a potentially useful trick in reasoning about complex systems that avoids a cognitive pitfall.

In a complicated system in equilibrium, it is unlikely that a single element of many in the system could be responsible for a lasting perturbation or evolution to a new equilibrium. That is, reasoning about single-causes unidirectionally affecting the rest of the system is probably not a good way to improve our understanding. It's not like kicking a ball - foot causes ball to move, end of story - which is the way our brains seem to have developed to understand events on the order of a few seconds in the mesoscale world. (Notice even there that the ball moves once, stops, and the phenomenon is over.)

Mutually reinforcing sets of elements within the system are more likely to produce a lasting perturbation, that is, move the system to a new equilibrium. Looking for such reinforcement cycles can get us out of unproductive chicken-and-egg reasoning. Applying the idea to this case, it becomes evident that asking "Was it labor getting cheaper that caused the industrial revolution? Or the other way around?" is simplistic and unlikely to provide a clear and useful answer. It's better to ask, "What are the economic and social elements that reinforced each other in such a way as to produce the industrial revolution?" - along with other elements such as aspects of British culture at that time.

Another example is the question: "Was increased use of tools and manual dexterity in early human ancestors a result of these ancestors favoring bipedal locomotion? Or the other way around?" They likely reinforced each other, along with other elements - e.g., increasing adaptation to the savannah favored bipedal locomotion to see further, and tool use favored survival in a drier, more open, less calorie-dense environment - et cetera.
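The difference between a one-off kick and a reinforcement cycle can be made concrete with a minimal toy model - my own illustrative sketch, not anything from the posts above. Two variables each decay toward zero on their own but excite each other; when the mutual coupling is weak, a perturbation dies out (the kicked ball stops), but when it is strong enough, the same perturbation tips the system into a new, self-sustaining equilibrium:

```python
import math

def simulate(gain, kick=0.5, dt=0.1, steps=3000):
    """Euler-integrate two mutually reinforcing variables:
    dx/dt = -x + tanh(gain * y)
    dy/dt = -y + tanh(gain * x)
    Each decays toward zero alone; each is boosted by the other."""
    x, y = kick, 0.0  # one-time perturbation to x only
    for _ in range(steps):
        x, y = (x + dt * (-x + math.tanh(gain * y)),
                y + dt * (-y + math.tanh(gain * x)))
    return x, y

# Weak coupling: the kick dies out and the system returns to the old equilibrium.
print(simulate(gain=0.5))   # both end up near 0.0
# Strong mutual reinforcement: the same kick moves the system to a new,
# nonzero equilibrium that sustains itself.
print(simulate(gain=2.0))   # both end up near 0.96
```

The qualitative lesson matches the argument above: no single element "causes" the new equilibrium; it is the loop between elements that does, which is why single-cause, chicken-and-egg questions about such systems tend to be unanswerable.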

The Value of Labor Over Time Is Still Increasing Over Time, Even Since the 1980s

As time moved forward, early economists increasingly appreciated the value added by the labor component. In the mid-1700s the physiocrats (like Turgot) thought that the value of land was pretty much the whole story, then a few decades later Smith recognized labor as an equally important component, and a half century after that Marx overshot the mark a bit when he argued that labor was the most important part of the story.

But each position was more correct in its own time, because the value-adding power of labor did increase over time. Economists have wondered why modern capitalism didn't develop in stable earlier market economies like ancient Rome, and the best argument so far is that the institution of slavery meant labor was cheap. As technology improved, so did the value-adding power of labor, and as labor became more valuable, the incentive to develop more technology to amplify labor also grew, in a virtuous cycle (see more about reinforcement cycles in complex systems here.) It's therefore not a surprise that the institution of slavery ended shortly thereafter, first in Britain and later elsewhere. They couldn't afford to waste labor any longer![1] Also telling is the natural experiment called the American Civil War: divide a country in half, where one half has slavery and the other doesn't - which one develops its industry, and therefore wins when the two collide in war? It also makes sense that the least developed economies would be the ones most likely to persist in permitting slavery, e.g. Mauritania.

Many economists have also argued that starting in the early 1970s the West entered a technological and/or economic stagnation, and have shown that the share of labor in the economy has been declining since then. The worry is that enough capital has accumulated that it has overtaken labor as the most meaningful input, and that it did not do so earlier only because of the world wars. This dominance of the capital component is the central concern of Piketty's Capital in the Twenty-First Century, in which he describes the growing divide between those who rely on income (most of us) and those who rely on capital accumulation (the very rich.)

But it's quite natural to wonder whether a lot of the return to labor is hidden, for instance in the accumulated wealth of high-value laborers - i.e., I don't think Peter Thiel or Bill Gates worry about their salary. This is why a new paper by Eisfeldt, Falato and Xiaolan (h/t Marginal Revolution) is very encouraging. Taking into account equity awarded as compensation to highly skilled labor, the apparent decline in labor share evaporates.

I would expect that there is still further hidden value accrued to labor and this paper has found only part of it.

[1] As a pointed aside, for those who point out (correctly) that most abolitionists were Christians and claim that the end of slavery was driven by morality rather than materialist considerations: the Bible's tacit acknowledgement of slavery is a big problem for this hypothesis, to say nothing of the whole history of Christianity before the Enlightenment and industrial revolution - eighteen centuries in which Christians could have figured out slavery was wrong, but did not.

Saturday, April 20, 2019

Violence Control and the Mutual Recognition Cartel

Modern states are a cartel whose members mutually recognize each other's right to hold a monopoly on violence within their territory. The prospect of seasteading is a fundamental threat to nearby nations' legitimacy, and in fact to the Westphalian idea of a state, since suddenly there is territory near yours which was previously not only uninhabited but uninhabitable, and poof - suddenly there are people outside your control on your borders. Consequently we should expect that seasteads will produce a rapid and disproportionate response from any nearby nation, using any excuse it can to commandeer or destroy them. As far as I know, there has been very little discussion in the seasteading/voluntary society community about the likelihood of this happening, and how to avoid it. Previously, it was predicted that it would be "a few years at most before the nearby country finds an excuse to attack them," but in this case it was actually on the order of weeks.

This past week, the Thai government noticed a seastead platform just outside its international waters:

US bitcoin trader and girlfriend could face death penalty over Thai 'seastead'
US national Chad Elwartowski and his Thai girlfriend, Supranee Thepdet (aka Nadia Summergirl), are facing charges of threatening the Kingdom's sovereignty. Last Sunday officials from the Royal Thai Navy and Phuket Maritime boarded the structure saying it violated Article 119 of the Criminal Code and also posed a navigational hazard.

The couple launched the 'Ocean Builders' seastead on February 2 off the coast of Phuket. The structure is located to the southeast of Koh Racha Yai, approximately 12 nautical miles (22.2 kilometres) from the mainland.

Elwartowski has claimed that his seastead is outside Thailand's territorial waters, but Thai authorities insist that it violates Article 119 and challenges Thailand's territorial rights.

"The Royal Thai Navy has full authority and duty to protect national interest and marine sovereignty in the area," according to a Navy spokesperson.

Besides the disproportionate response, most telling is that Thailand has not even bothered to claim that the seastead is within its territorial waters. The couple previously inhabiting the seastead have fled in fear of their lives. Thailand's reaction immediately shows what's really going on - whatever theory of state recognition you subscribe to, it all unfortunately returns to violence and the control of violence: control of it within your territory (the police, to maintain order) and outside your territory (the military, to at least prevent conquest, if not expand your territory.) If a country can't do those two things, it's not a country, and can't even convincingly pretend to be a country for long - e.g., no one is very impressed with the "Somali" government's claim to actually be the state of Somalia, that is, the organization which holds a monopoly on violence within the territory not claimed by surrounding countries.

One objection: Luxembourg (for example) appears to be a viable state, yet can Luxembourg really claim to be able to repel an invasion from Germany or France? No, but very likely the blowback from other countries in the mutual-recognition-cartel that could harm Germany or France in some way is enough to stop them. Dictators often test the resolve of the cartel - most obviously, Putin by invading Crimea. Ukraine could not repel such an invasion, nor could they count on the cartel to come to their aid. So they can say that Crimea is still part of Ukraine, but de facto, it is part of Russia.

It's also worth pointing out that both Vietnam and China have built not seasteads but whole new islands in international waters in the South China Sea. They and their allies have made a lot of noise about the other side's islands, but aside from a few harassing passes by aircraft, there has been no full naval take-down of the settlements. Why? Because each country (or its allies) has the ability to hold its territory by inflicting and defending against violence, and as countries, they are already in the mutual-recognition club by virtue of being able to use the threat of force to defend their territory. (Which is how they can have allies to begin with.)

In actuality, the Thai seastead and the state response to it aren't the first of their kind. This is stated not to diminish the accomplishment, but rather to point out that there was another would-be sea-platform microstate, Sealand, in the 1960s in the North Sea, which the nearby United Kingdom allowed to persist - though its inhabitants may have been a bit nervous about this and hoped to escape the UK's attention. We might then ask: why don't seasteaders set up shop off the Somalian coast, most of which has no real government? For the obvious reason of piracy, i.e., unpredictable exercise of force by individuals with no claim to political legitimacy, no protection of territory or prevention of others' violence, and no promise of maintaining or improving conditions. Given this, it's obvious that seasteaders would like to benefit from the nearby state's violence control without paying taxes or following other rules. (In fact, Sealand had difficulty controlling violence within its borders, or preventing criminal behavior, and the UK's hands-off attitude had a lot to do with that.) A critique of "fundamentalist" little-l libertarianism in general is that it's only conceivable when there is already a state regulating commons and controlling violence that guarantees social arrangements - and a very similar argument can be made about socialism in states already made wealthy by capitalism, also an unsustainable strategy. (Something very similar happened on Minerva Reef in the Pacific in the 1970s, when Tonga suddenly decided it had to control the reef once settlers showed up.)

It's probably not good for anyone to dwell for too long on the basic fact that humans have never devised a system to organize themselves beyond the level of the family without a threat of violence, and it seems that theories of state recognition are designed to be legalistic dances that distract us from this brute fact. The political scientist who imagines a system allowing the non-violent creation of new states that can actually take precedence over force will go down as the greatest philosopher in history - along with the one who figures out a way besides physical space to determine which laws apply to which people, thus making a more truly voluntary society. I'm very sad for the couple that tried to seastead off of Thailand, but they were more than a little naive. For now, I predict that any seastead, even one far out in the middle of the ocean inarguably in international waters, will quickly find itself dismantled by a state navy unless it has sufficient outward-directed force - and it will not be able to control its internal violence without pre-emptive threats of violence.