Let's define a system as a set of components discrete from the rest of the universe. Let's define dynamic complex systems as ones that are high-information relative to the universe around them - high enough that, in the absence of opposing forces, entropy passively favors irreversible changes to the arrangement of the system's elements that make it no longer discrete. If such a set of components that exists now is going to exist more or less unchanged in the future, it has to perform actions - it has to be dynamic - which reduce its own entropy at the expense of the rest of the universe. The rest of the universe may include similar systems. (Going forward, when I say "system" I mean "complex dynamic system.")
The seven principles are outlined below, then described in more detail.
- A system must be mostly concerned with its own perpetuation, or it will not persist. "If a complex dynamic system has been around for a while, it's designed to last and is expending energy to do so."
- It must do this by reflecting - reacting to - aspects of the rest of the universe. Aspects of the rest of the universe important enough to make the system tend to develop ways to change its state in response to them are called stimuli. A reversed but familiar statement of this principle: "That which gets measured, gets addressed."
- A measurement always contains less information than, and is therefore not a full or fully accurate representation of, the thing being measured. Perception cannot exactly be reality; "this is not a pipe."
- Over time, the focus on self-perpetuation leads a system to become concerned with itself to the point of minimizing the importance of, or its responses to, aversive stimuli, in order to avoid altering its state (which is also aversive). "Everything that gets measured, eventually gets gamed."
- The system's responses become increasingly un-moored from the external world, favoring its own perpetuation over other functions, and/or having a severely distorted model of the world and reaction to stimuli. "Eventually, everything becomes a racket [and/or gets delusional]."
- The distortions accumulate until a sufficiently destructive stimulus occurs (a shock), which either reorients the system (usually with severe aversive stimuli), or destroys the system. If the system survives, such shocks will happen repeatedly, but necessarily unpredictably. This is called inherent cyclic crisis.
- This is inherent to any self-perpetuating dynamic complex system, and because these shocks are correctly perceived as threats to survival, they cause inevitable suffering. We can call this the Final Noble Truth, a loose parallel to the Buddhist First Noble Truth.
Following is an expansion on each principle.
Principle 1: "If a complex dynamic system has been around for a while, it's designed to last and is expending energy to do so." If a system is going to continue existing, a top priority on self-preservation is mandatory, and self-preservation must be the primary influence on the system's perceptions and reactions. Since Darwin, thinking of organisms in this way is not revolutionary. But the same principle applies to any other complex system, including human organizations. Corporations have a relatively clear function in this way (they can't keep making money for shareholders if they don't keep existing), but it's more surprising for most of us to think of religions, countries, or volunteer organizations in this way. The converse: if a complex dynamic system is not expending energy on its self-perpetuation, it will not exist for long. (Many apparent mysteries, like the transparently weak business plans of many a Web 1.0 company, are resolved on realizing that such systems are not in equilibrium and will perish quickly. And indeed they did.)
Principle 2: "What gets measured, gets addressed." This seems obvious enough, especially to those of us at all interested in organizational dynamics, but principle 1 dictates the kinds of things that have to be measured if the system is to persist, and principle 2 implies that what is not measured cannot be addressed; since there is always some difference between the measurement and the totality of the outside world, at least some important information will always be missed, and what is missed cannot be acted upon. Concretely speaking, genes reflect the outside world by establishing sensor networks that interact across the inflection point (the cell membrane, or in the case of multicellular organisms, the body). Some sensor networks have become very rapid and fine-grained reflections of the outside world - xenobiotic metabolism enzymes (which have adapted in only millennia, and whose genetics differ considerably even between groups of humans), adaptive immune systems (which also differ between groups of humans and react in minutes), and of course nervous systems, the paragon. But all of them make sacrifices and do not (cannot) sample all of the possible information available. It should be pointed out that living things do not have to constantly repeat the mantra "what gets measured gets addressed" because that's how they're already built and behave, automatically and obligatorily, as was the case for eons' worth of their ancestors. This is not the case with human organizations, which are new developments in nature and may not be in equilibrium - so organizations that you notice failing to measure important (survival-supporting) data are unlikely to exist for long either. Whatever corporations or their descendants exist in a million years, they won't be ones that didn't respond to relevant metrics.
Principle 3: Perception cannot exactly be reality; "this is not a pipe." Representations - measurements - are never the same as the things themselves, and incentives are never aligned perfectly with desired outcomes (almost trivially; perfect alignment would mean identity, that is, the incentives would be the desired outcomes). There must always be a limit on information collected, and inferences are not always correct. There is infinite information a system could in theory collect about the universe (looking for correlations between each datum or set of data), but the system is more likely to perpetuate itself the more information it collects, and the more impactful that information is. It is this design choice by the system to sense survival-relevant data that turns one of infinite facts about the world around the system into a stimulus. Obviously, which things it takes as stimuli - what it measures - matters. (Not to mention that if the system is in competition, especially with others using the same resources, there are time and resource limitations on how much data it can collect before altering its state.) The implication is that there is a limited set of information collected out of all possible information - what the system receives as stimuli - and these stimuli are necessarily very heavily biased toward self-perpetuation.
Principle 4: "Everything that gets measured, eventually gets gamed." This is similarly familiar, and here is where the tension is set up. Systems must perceive (measure) and react to their environment. Their measurements are not the same as the things in the environment, only reflections. Because of this, systems react to the measurement - the perception - not the thing being measured. This is not a trivial difference. Anyone who has worked at a large corporation or applied to professional school is familiar with this, and we all know examples where an endpoint was achieved in an only-technical, meaningless way that did not advance toward the real-world goal the endpoint was meant to incentivize. To "follow the letter but not the spirit" is an aphorism expressing this. Case in point: many companies have sexual harassment or racial sensitivity training. These often take the form of instructional videos with quizzes after them. Most people skip and fast-forward through the videos as fast as possible while still registering as having watched the whole thing, often keeping a second browser open so that when they get to the quiz they can look back if the answers aren't obvious. Of course this raises the question of whether there are some types of training where the written tests to get the credential have nothing to do with performing the actual work. For example, in the early-to-mid twentieth century, people became scientists because they liked being in labs, were good at organizing experiments, and in general got immediate feedback directly from their work, and therefore performed better, and therefore were recognized for it by peers and superiors, more so than is the case now. Do the best GRE scores (and administrative maneuvering, and recommendations, and tolerance of modern graduate school politics) really correlate with the best scientists? Or does the same process produce skilled and caring physicians?
For examples in the individual human, take your pick of any of a host of brain receptor-tricking molecules like opioids or alcohol, as well as immature psychological defenses like denial. Cancer is another example. Unlike infection or physical injury, cancer doesn't hurt until it's about to kill you, thanks both to the earlier reproduction of our ancestors relative to the later onset of cancer, and to the black swan of radiation and brand-new-to-nature chemicals.
Another type of distortion has to do with the structure of the system, which affects the way it behaves, rather than perceptions per se. It's long been noticed that corporations become less "nimble" (responsive to market change; i.e., the relevant universe outside the corporation) as their surface area-to-volume ratio goes down. The higher the surface area-to-volume ratio, the more information can be collected and the more effective responses can be. Think of bloated giants like big automakers or old engineering companies, where in Dilbert-like fashion people think more about maneuvering in their jobs, coordinating with other departments within the company ("transfer pricing"), or competing with other people within the company than they do about their outside-the-company competitors or serving the market. This certainly occurs within states as well, where to various extents the downfalls of Chinese dynasties, the Roman Empire, and the Spanish colonial empire were more the result of special-interest maneuvering and other intrigues directing attention consistently inward to the court - because what could be going on outside the palace walls where the barbarians are that's more important than what's inside where the power is? So we have Zheng He's fleet being recalled, Roman patricians scheming, in the absence of a succession rule, to get legions on their side, or Spanish royalty neglecting overseas possessions until the British and their offspring ate their lunch. There's a final type of distortion which arises from the way that nervous systems save time and money: the more some stimulus-response pairing occurs, the less reward-sensitive it becomes. It moves from being a conscious act requiring effort and concentration, to a habit, to (in biology, programmed through evolution) a stereotyped movement, then a reflex.
Once a stimulus-response pairing has started moving down this path, it is almost impossible to move it back other than by overwriting it with another stimulus-response pairing.
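The dynamic of principle 4 can be sketched in a toy simulation (entirely illustrative - the Gaussian quantities and the selection rule are my assumptions, not anything from the argument above): when a system selects on a gameable metric rather than on the underlying quality, the selected set is worse than honest measurement would have produced.

```python
import random

random.seed(0)

def select_top(candidates, key, k=10):
    """Pick the k candidates that score highest on `key`."""
    return sorted(candidates, key=key, reverse=True)[:k]

# Each candidate has a true quality and an independent amount of
# effort spent gaming the metric. The system only ever observes
# the metric: quality + gaming.
candidates = [
    {"quality": random.gauss(0, 1), "gaming": random.gauss(0, 1)}
    for _ in range(1000)
]

by_metric = select_top(candidates, key=lambda c: c["quality"] + c["gaming"])
by_quality = select_top(candidates, key=lambda c: c["quality"])

avg = lambda group: sum(c["quality"] for c in group) / len(group)

# Selecting on the gameable metric admits less true quality than
# selecting on quality itself would.
print(f"avg quality, metric-selected:  {avg(by_metric):.2f}")
print(f"avg quality, quality-selected: {avg(by_quality):.2f}")
```

The gap widens as gaming effort grows relative to genuine quality differences; in the limit where gaming dominates the metric, selection carries no information about quality at all.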
Principle 5: "Eventually, everything becomes a racket [and/or gets delusional]." This quote is attributed to the late George Carlin. A more obscure paraphrase of this in specifically political terms is "The state is primarily concerned with itself." Here we can see the full flowering of the problems buried in the earlier principles. There is a constant tension: the system needs negative feedback, but it avoids negative feedback - that's what aversive stimuli are for - and because the measurement and the thing measured are not the same, the metric is gameable, so the system increasingly avoids these stimuli by gaming them rather than taking real action. (It simultaneously makes unhelpful-to-survival end-runs to pursue positive stimuli.) Consequently the stimulus-response arc gets more and more distorted with respect to the actual longer-term perpetuation of the system. This seems paradoxical in light of #1 above, but because systems are never perfect reflections of the universe around them, they necessarily always react based on at best limited information (especially with respect to long-term consequences) and sometimes on outright distorted information their machinery is feeding them. The necessary self-focus means that these distortions will tend to favor pursuing pleasure, avoiding pain, and believing everything is alright when it is not; over arbitrary time, whatever non-self-perpetuating parts of a system's "purpose" previously existed will atrophy, and its behavior will become more distorted in favor of comfort and perceived survival over actual survival.
The distortions come not just from "gaming" pain. Organisms can hack themselves to fire off their reward centers without an actual survival enhancement - for example, with heroin, masturbation, or consuming resources they are driven by prior scarcity to consume as frequently as possible, but which have become "cheap" to the point where their over-consumption causes problems (in humans: fat, salt, and sugar). In humans, opioids are the closest thing to the artificial intelligence problem of "wireheading," where a self-modifying agent given a task can modify itself to be satisfied even though the task is not completed. Good examples of rackets are religions and charities that depart from their stated mission in favor of wealth-accumulation and self-perpetuation. (See GiveWell's list of charities which maximize their mission rather than their income.) Profit-seeking entities whose products or services intrude into "sacred" (i.e., non-transactional) realms (best example: healthcare) often find that self-perpetuation wins out over their claimed mission. Organizations and individuals can also become delusional - humans are incorrigibly overoptimistic and discount the future.
Principle 6: inherent cyclic crisis. Eventually the stimulus-response arc becomes so distorted that the system encounters a survival-threatening problem it cannot properly perceive and respond to; by this time the gap between perception and reality is profound, and the result is a shock. Surviving the crisis, if possible at all, is quite painful.
Black swans are indeed one type of crisis, but missing an impending black swan is the fault of the system only to the extent that the system could reasonably have anticipated the event, given the experience it had to draw on. More salient here are crises precipitated by the accumulated distortions in the system's perceptual machinery, where the system "should have known better". At the organizational level, nations might collapse because their ideology, increasingly un-moored from reality, led them to weakness on the battlefield out of refusal to update their armies with modern techniques and technology. Nations with dysfunctional (delusional) organization meet reality catastrophically on battlefields, and religions (sometimes) collapse when encountering reality. Crusades failed due to Christians' belief that God would intervene; medieval Europeans with a military hierarchy based on nobility got crushed by Mongols whose ranks were based on meritocracy; Washington in the Seven Years War lost to the French because he insisted on fighting like a gentleman in rows, and then the British lost to Washington in the American Revolution because they still insisted on this formation, and he no longer did. (Many of these could be considered examples of the advance of "rational" (and more destructive) warfare over traditional warfare.) For the young Washington and the later British Empire, the losses did not destroy them but came as painful shocks. For many near-delusional Crusaders, or the combined German-Polish-Hungarian forces in thirteenth-century Europe, the shock did result in their destruction. On the individual level, any delusional or distorted behavior (psychosis, neurotic defenses, substance use) results in a painful shock in the form of adjusted behavior or shattered beliefs, or in some cases the death of the individual.
Someone might underestimate the risk of driving while intoxicated or in inclement weather, and crash, injuring or killing themselves or their family, updating their beliefs only in crisis. These crises occur more often and faster the less (or more distorted) the feedback, as illustrated by very centralized arguments from authority (famine under Stalin, driven by the divorced-from-reality - and, in Stalin's USSR, unquestionable - Lysenkoist theories of biology and agriculture).
Principle 7: What to do about it? The Last Noble Truth is that cyclic crises are therefore inevitable in any complex dynamic system. As conscious complex dynamic systems called human beings, composed of complex dynamic systems called cells, and members of conglomerations of complex dynamic systems called nations, corporations, and political or religious belief systems, we will occasionally have shocks that kill us, or that, even when we "wake up" and adapt, still hurt us quite a bit. This happens in national collapses and revolutions as well. In arbitrary time, the problem will always re-emerge. Your measurement is not the same entity as the thing it measures - unless a system comprises the whole universe, in which case there is no boundary, and of course no system. How can we minimize the inherent problems that lead to this cycle?
- Constant testing and cross-checking between senses and expectations. In individuals we already do this automatically (corollary discharge, binding between senses, and discomfort when our binding expectations do not match observation). Cross-checking beliefs and assaying decisions at multiple points, in ways that will quickly expose them if they were bad, is helpful for individuals; it probably won't hurt to think twice about that turn you just made in the deep dark woods, even if you feel quite confident about it. Critical thinking is one form of this. Cognitive behavioral therapy is another.
- Increasing the amount of feedback. This facilitates the suggestion above as well. In groups, it is good to decrease the consequences of raising objections. Calibrate yourself - when people or organizations have secure egos and want to get better at something - running a mile, or making better decisions - they do this. One experiment on fooling a computer in a "delusion box" showed that through a constant drive to be surprised - to learn new information - an agent gets out of being deceived fastest. Of course this itself is also hackable (the machine could reset itself; you could convince yourself you're learning important new information when really you're just confirming your biases). Pushing until you reach failure, in athletics or decision-making, is an instrumental rather than epistemic form of increasing your surprise.
- To the extent possible, rely on positive feedback. Negative feedback is, by definition, what systems avoid, and they will avoid it by gaming it if necessary. Therefore, systems should put themselves in situations where the ratio of positive to negative feedback is higher, so they are less likely to avoid feedback.
- Simulating negative outcomes. In other words, expect the worst. You will never be disappointed, and you will have prepared yourself for the shock. Negative visualization, as practiced in Stoicism, is a technique for doing exactly this.
- Respecting a system's realistically unmodifiable constraints - especially if the system is you. This is especially true at the individual level. Humans as a species are not built to question close family relationships, especially without reason, without becoming depressed and damaging those relationships (asking your daughter once a day whether she really loves you will not help you or her). For that matter, negative visualization actually causes quite a few people, reliably, to suffer rather than feel better (including me). Constantly second-guessing every decision, like that turn you made in the woods, may erode your confidence and spark depression. Of course, your daughter really might secretly not love you, and your family might die, and you might have made a wrong turn (and you really can't fail without consequence at some things). But it's probably going to make you suffer more in the long run to think about this all the time, and you should pay attention to your reactions to see where your thresholds are.
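The "delusion box" point under increasing the amount of feedback can be sketched with a toy agent (my own hypothetical construction, not the cited experiment): an agent that seeks out whatever it is worst at predicting will quickly abandon a source of constant, comfortable signals.

```python
import random

random.seed(1)

# Two sources of experience: a "delusion box" that always returns the
# same comfortable signal, and the outside world, which is noisy.
def box():
    return 1.0

def world():
    return random.uniform(0.0, 1.0)

sources = {"box": box, "world": world}
prediction = {"box": 0.5, "world": 0.5}   # agent's model of each source
surprise = {"box": 1.0, "world": 1.0}     # running prediction error

choices = []
for _ in range(200):
    # A curiosity-driven agent samples wherever it is most surprised.
    name = max(sources, key=lambda n: surprise[n])
    choices.append(name)
    obs = sources[name]()
    error = abs(obs - prediction[name])
    # Update running estimates (simple exponential moving averages).
    prediction[name] += 0.3 * (obs - prediction[name])
    surprise[name] = 0.9 * surprise[name] + 0.1 * error

# The box quickly becomes perfectly predictable, so the agent's
# surprise there collapses and it spends most of its time outside.
print("fraction of time in world:", choices.count("world") / len(choices))
```

Because the box is perfectly predictable, surprise there decays toward zero, while the noisy world keeps a floor of irreducible surprise; a reward-maximizing agent with the same machinery would instead stay in the box forever.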
 Though not the purpose of this argument, this does set up a useful boundary for defining living and non-living things. Because it's quantitative it doesn't suffer from a problem of boundary cases, and sentience is nowhere considered, but it does appeal to common sense - stars and fire are at one end of the spectrum and prions and meme-complexes are at the other. Most current electronics are minimally dynamic, which makes them "less alive" than is often argued.
 It's implied that these actions are cyclic, like catalysis in biology or the Krebs cycle, or else the system would be unlikely to return to any previous state, and you wouldn't have a self-perpetuating system.
 There's a strong argument to be made that defining a system as separate from the rest of the universe is arbitrary. However, this becomes less true as the system develops additional complexity for its self-preservation. There is an increasingly sharp inflection point at some physical boundary of the system where the exchange of matter and energy between unit volumes drops, and where events on one side of that boundary have much more impact on the future of the system than events on the other side. That is the self/non-self boundary. In cells this is easily recognized as the cell membrane. In nations, although the boundary becomes more complicated, ultimately the boundary is spatial, because of the primacy of space. Even in corporations or religions this remains the case. The individuals in those organizations, or carrying those beliefs, as physical beings are still dependent on the predictability, safety, and resources permitted by more "basic" forms of organization. A good example is the early evolution of life: it is recognized that an RNA molecule (or RNA-protein) would not benefit from any reaction it could catalyze any more than any other molecule in its vicinity would, or at least not as much as it could if the reaction products were sequestered. Consequently, when nucleic acids were enveloped in lipid membranes, natural selection accelerated, and the self/non-self boundary became less arbitrary.
 In a zero-sum setting with limiting resources (which is a necessary condition given arbitrary time) this is a good definition for competition. Unless you count Boltzmann brains, it is likely that a system will find itself in a world with other similar systems.
 You may have noticed that there are no examples "below" humans in my examples for principle 4. There are many examples of behaviors in humans, and in human organizations, where metrics are gamed. There are far fewer examples of organisms besides humans where this is the case. Some species of pinnipeds dive deeper than any prey we know of, we think, just to alter their consciousness, and African elephants go out of their way to consume fermented (alcoholic) marula fruit. But there is nothing like the systematic distortions we see in human psychology. It seems likely that the simpler, the more fecund, and the faster-cycling an organism is, the less it can afford a gap between its response to its metrics and the survival-affecting things actually occurring in the world. That we don't see many organisms gaming their metrics could be both because their stimulus-response arcs are simpler, and because a distortion in these arcs will more quickly kill off the organism, so they don't come to our attention. This also implies that having a mind as complex and powerful as ours provides unique opportunities for distortions - that organisms which are focused on "survival, unclouded by conscience, remorse, or delusions of morality" (to quote a psychopathic movie android) in fact have a more accurate view of the universe (not incidentally, a central theme of the Alien films). Because we have complicated brains and survive by imitating each other, our nervous systems are constantly hijacked by self-reproducing ideas in a way that our genomes never were (lateral transfer events are incredibly rare), and those memes are selecting, per principle 1, for their own self-perpetuation; they want to avoid killing us outright, and to use us to spread them, so they are commensal at best. If there is an analogy, the meme complexes we get from our families are not our genes but rather our microbiome.
It should also be pointed out that as we congratulate ourselves for taking over the planet due to agriculture and combustion engines, we are living through the sixth mass extinction, suggesting that in fact we are not acting in our long-term best interests, that like cancer, ecocide might not hurt enough until it's too late, and that intelligence is an evolutionary dead end.
 There is a spectrum of arbitrariness - of how "symbolic" the perceived stimulus is relative to the thing being perceived. In the engineering of signal systems, the closer your signal is in a physical causal chain to the stimulus - the thing it is signaling about, or measuring - the less arbitrary it is. Digital systems are more powerful in many ways than analog systems, but they accept increased arbitrariness and complexity in exchange. Case in point: people who fear assassination can build elaborate electronic sensing systems to avoid being approached while they sleep, but there are always questions: can they be hacked? What if a component fails? What if the power is out? Can a spy shut them down? Compare this to the system used by the Tokugawa shoguns: sleeping in the middle of a large room with a squeaky wooden floor and many tatami barriers, and choosing a place to sleep on that floor at random each night.
 Of course other things have changed about the way science (and medicine) are practiced over the past half-century, not to mention that all the "low-hanging fruit" - problems accessible to the specific strengths of human cognition - may have been picked soon after the Enlightenment started. But it remains a concern that by (not unreasonably) trying to regularize and make transparent the application and career-progression process, we're selecting for attributes that have little to do with being a successful scientist or physician, or even selecting against them, because we're using "artificial" endpoints distant from the relevant abilities, which can be and are gamed. Certainly this problem is not unique to science and medicine, and whatever is causing the phenomenon, it's having real-world economic consequences. An interesting historical study would be to see whether the health of the Chinese government across the dynasties waxed and waned in any relation to some aspect of the civil service examinations.
 Referred visceral pain is an example of an aversive-stimulus-sensing system that gives very inexact answers, because it's not important enough to improve. If your arm is hurt, you can point to exactly where, even with your eyes closed. But when people get appendicitis, very often in the early phase they point to the center of their abdomen around the belly button, and only gradually does the pain move to the area immediately over the appendix - and only after the overlying tissue, which is innervated by somatic ("outside"-type) nerves, is irritated. Often people with a problem in their abdominal organs or even their heart feel extremely sick and anxious and in general uncomfortable but can't point to any specific spot. Why does this make sense? If a scorpion scrambles up onto your left elbow and stings you there, it's worth knowing exactly where the stinging is happening so you can act in a way that improves the situation. But if you were sitting around a fire with your tribe in the African Rift Valley 100,000 years ago with appendicitis, what exactly could you do about it? If you had bad stomach pain, it didn't matter exactly where it was; you curled up in a ball where your family members were nearby to care for and protect you, and hoped it passed.
 In contrast to corporations, single-celled organisms survive best not when they have a high surface-area-to-volume ratio (like successful nimble corporations) but a low one, which is why they are mostly near-spherical. Corporations, while competing with each other and in some ways with their customers, are still operating in an environment that is predominantly cooperative, so it's better to have lots of customer interaction surface. Bacteria exist in an environment of constant unpredictable ruthless lawless natural selection. It's really about how much the surface is an asset for information gathering by the system, versus a liability to attack from competing systems. Consequently, for bacteria, the sacrifice of knowing less about the outside world (which at that scale is less predictable than our world anyway) must be worth it given the overall survival advantage gained by being in the shape that most maximizes distance of any unit volume from the surface. In contrast, there are cells in biology that maximize surface: neurons, and nutrient absorption membranes deep in the GI tract. Both of these exist deep in the organism (especially neurons) in a web of profound cooperation (also especially neurons.) The more fractal a complex dynamic system, the more likely it is to exist in an environment of predominant cooperation. The more spherical a complex dynamic system, the more likely it is to exist in an environment of predominant competition. In the case of corporations, the shape is somewhat "virtual", but corresponds to points of contact per customer and ease of contact, which ultimately are still going to require space. Nations are somewhere in the middle, though it would be interesting to see if nations now, more cooperative and less violent than they historically were, are more likely to have fractal borders, or shared zones (my predictions) than one or two centuries ago.
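The surface-area-to-volume tradeoff discussed above is easy to make concrete (a quick geometric sketch; the shapes and radii are chosen arbitrarily for illustration): SA/V falls as a system grows, and the sphere, which bacteria approximate, has a lower SA/V than other shapes of the same volume - here compared against a cube.

```python
import math

def sphere_sa_to_v(r):
    """Surface-area-to-volume ratio of a sphere of radius r (equals 3/r)."""
    return (4 * math.pi * r**2) / ((4 / 3) * math.pi * r**3)

def cube_sa_to_v_same_volume(r):
    """SA/V of a cube holding the same volume as a sphere of radius r."""
    side = ((4 / 3) * math.pi * r**3) ** (1 / 3)
    return (6 * side**2) / side**3  # simplifies to 6 / side

# SA/V shrinks as the system grows, and the sphere always beats
# (i.e., minimizes SA/V relative to) the equal-volume cube.
for r in (1, 2, 4, 8):
    print(f"r={r}: sphere SA/V = {sphere_sa_to_v(r):.2f}, "
          f"equal-volume cube SA/V = {cube_sa_to_v_same_volume(r):.2f}")
```

For a sphere SA/V is exactly 3/r, so doubling the radius halves the relative amount of "sensing surface"; a fractal or highly folded shape pushes the ratio the other way, which is the neuron and gut-lining strategy.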
This corresponds to Level 3 operations as described here, which explain how large organizations work but are not an argument that they should work that way, for Level 3 organizational decisions often lead to the downfall of the organization unless the inner circle in the super-Pareto distribution has the best interests of the organization at heart. In politics this happens either because the inner circle has a feedback loop and is beholden to an informed electorate, as in functioning democracies, or, less likely, by luck, as with benevolent dictators, e.g. Lee Kuan Yew.
 There's an apparent conflict here. On one hand I'm arguing that systems become distorted because they're focused on self-perpetuation. On the other I'm arguing that they focus on their metrics, which they game - undermining that very self-perpetuation. The resolution, as in principle 5, is that systems act on perceived survival rather than actual survival.
 Organisms are exempt from becoming "rackets." Rackets are systems which have a claimed mission besides their self-perpetuation but in fact are only self-perpetuating, and organisms are openly survival systems, full stop, and make no claims to the contrary.
It may not have escaped your notice that one implied solution - expanding the system until it comprises the whole universe, so that there is no self/non-self boundary - is, at least on the individual level, one advocated by many mystical traditions. We actually achieve this when we die, so in individual terms this could be re-formulated as "lose your fear of death." Yet our read-only hardware makes this a terrifying and unpleasant experience, even, empirically speaking, for life-long meditators. For now, this is not a real solution.
APPENDIX: Analogous Terms
| | Organisms | Individual humans | Organizations |
|---|---|---|---|
| Input | stimulus | perception, belief, representation | measurement, metric, dogma |
| Reflection, Output, Reaction | response | belief, behavior | decision |
| Examples of Gaming | Rare; some higher animals seek out "highs" | Opioids, denial, delusion | Preserving letter but not spirit |
| Examples of crisis | Death | Personal disillusionment or death | Revolution or collapse |