For the most recent alternate history thought experiment, see Ancient East Indian Settlement of Australia.
As a standard refrain: it's depressing that almost all alternate history is about some episode of organized violence ending differently. It would be much better to instead focus on counterfactuals about the spread of technological and cultural innovations. What would a Tang or Byzantine industrial revolution look like? What about North America where they never forgot about ironworking around the pre-Columbian Great Lakes? But the next best thing would be to take a war or conquest out of history altogether.
So what if, instead of gory scenes of Asian steppe horsemen dragging the Pope to the Tiber wrapped in a carpet, we take the Mongols out of history completely? First, it's always worth pointing out: both China and Russia were ruled by the Mongols, the last nomadic conquerors, for the better part of two centuries, and both are still here, though obviously different for it. Baghdad, the heart of Islam, was sacked horrifically, and that religion is hardly obscure today.
It's counterintuitive to talk about the benefits of the Mongol conquests, but they arguably conferred an advantage on Europe and contemporary Christendom. After the Mongol conquest, the Silk Road was safer, facilitating the diffusion of goods and technological innovations that frequently originated in the Middle East and China; at the same time, those competitors were weakened and their development slowed by the conquest. It's not implausible that the dynasty that followed the Song would have been more advanced than the Yuan, and less inward-focused than the Ming of our history. The treasure fleets would likely have started sailing earlier, and Europe (with less technology and less experience with trade) later. A fifteenth-century industrial revolution in China is not an absurd notion. At the same time, the cities and states of Central Asia would have continued flourishing, and Islam would not have suffered the violent psychological collapse and loss of its cultural center of gravity that the sack of Baghdad effected.
In Europe, we might see multiple states in what is now Russia and Ukraine, centered on Kiev, Pskov, Novgorod, and Moscow; if there was consolidation into a single state, it would more likely have been around Kiev than Moscow, without the Mongols appointing the Muscovites as their tax collectors. More importantly, though: in our history, Constantinople was explicitly an ally of the Mongols, although in the end the Mongols cursed Constantinople to fall sooner, by creating a power vacuum in Anatolia that was not truly filled until the Ottomans appeared. Without the Mongols creating anarchy, Constantinople would have survived longer. This would actually have been bad for Europe. When Constantinople fell, the Ottomans controlled the Silk Road and cut off its trade - which, thanks to the Mongols, European merchants had been using to their benefit. It was in the decades following the fall of Constantinople that European merchants sought ways to replace this route, and they were so desperate that they started doing things like sailing around Africa, or even around the back of the Earth, so they could (in theory) approach China from the ocean to its east. Finally, in our history, cannons became important by the mid-1300s in both Europe and China. With the uninterrupted development of China's economy (and a possible early industrial revolution), and slowed diffusion of technology back to Europe along a more dangerous Silk Road, China would have developed cannons earlier than Europe. An Asian great divergence, in terms of both technology and colonization of the New World, becomes much more likely.
You could argue that it was the geography of Europe that predisposed it to diverge from China in the early modern age, and I think that is true as well; but even so, Europe was only ahead of Asia by one or two centuries. That's why, by smashing most of Asia but turning around at Hungary and Poland, the Mongols did a great favor for future Europe.
Sunday, December 21, 2025
Monday, October 20, 2025
We Make Numbers From Shapes: Hints from Neurology
As a standard disclaimer, I am not a mathematician, and you should take these as observations rather than arguments. (For example, the amount of time I would have to invest to understand even this discussion is large.) But these observations add up to an assertion that, rather than numbers and geometry being two separate domains with an abstract relationship between them, numbers are a type of geometrical entity - though an arbitrary one that does not exist separate from our nervous systems, much like pain or non-spectral colors like pink. As in that argument, observations from neurology are again central.
In Gerstmann syndrome, patients who have had a dominant (usually left) parietal lobe injury, often from a stroke, lose the ability both to distinguish left and right, and to do arithmetic. (They also lose the ability to distinguish individual fingers, suggesting how important they are for counting. Children born blind often begin counting on their fingers without being taught.) It's also notable that almost all blind mathematicians are geometers.
Relatedly, I've long been mystified that in OCD, among the more abstract obsessions patients develop are not just symmetry but counting - and if you have one, you're more likely to have the other than chance would predict. In terms of evolutionary psychology, many psychiatric illnesses are easily understood as hyperactive subsystems that, at their normal setting and function, would be quite adaptive. In paleolithic Africa, you needed anxiety to survive: you should be afraid of heights and snakes - and you should be able to frequently and easily orient yourself. Consequently, symmetry being part of OCD makes sense. But why counting, or especially fixation on or avoidance of certain numbers? Some people insist on doing things in multiples of 3 or 5, or must avoid certain numbers (7 is frequent, possibly because it's the highest single-digit prime?) to the point that it impacts their lives, despite knowing how irrational it is - a smart-aleck roommate calls to say he turned the TV volume to 7, and this is so uncomfortable to the OCD patient that she has to drive home from work to adjust it, even at the risk of getting fired. It's hard to explain why the number 5 (or any number) would be good, or 7 bad, in the Pleistocene - unless it's actually the symmetry system that's hyperactive here. It's obviously suggestive that when people with number-avoidance describe why they don't like certain numbers, they often use spatial terms like "spiny" or "lopsided".
Unsurprisingly, the same brain region (the intraparietal sulcus, Brodmann area 7) is involved in both symmetry and counting. Therefore, that a lesion to this area results in deficits in both abilities is completely predictable; but why would the brain have evolved so that the same area serves both purposes? As before, even proving that numbers are geometric entities in their representations in our brains does not give us a generalizable argument about what numbers are, and it's possible that even if this is true for humans, it's only provincially true of the way our nervous systems work - but when AI systems develop the same way without being directly instructed, it does increase our confidence that our nervous systems and the AIs are converging on the same representation because that's really what the thing objectively is. On that note, it was discovered that what one AI was actually doing, when it did arithmetic, was spontaneously rotating a modeled shape and counting vertices. It "evolved" this system on its own, without being instructed.
Labels: cognition, mathematics, philosophy
Friday, October 3, 2025
Trump is Our First Barracks Emperor: America's Crisis of the Third Century
States pass through lifecycles of 200 to 250 years. If this is true, then to isolate the process from other variables we should look for a part of the world with a large fertile plain and a single ethnicity, and expect that, left to its own devices, its dynasties would last 200 to 250 years. This is in fact what we see in China, the best-known example, but the pattern has been noted throughout history and around the world. This 200-250 year period is of obvious interest to Americans in 2025.
One mechanism for the transition from one state to the next is that the state's elites lose control of the succession process, and in this respect, history does often rhyme. Roman history provides good examples. The Roman Empire is usually divided into the Principate period lasting from the Battle of Actium in 31 BC, to the assassination of Severus Alexander by Maximinus Thrax in AD 235 that ushered in the Crisis of the Third Century. This was followed by the Dominate period, starting from Diocletian's administration in 284 to the fall of the Western Empire in 476.
Of these two transitions, the fall of the Western empire is better known. In AD 476, Odoacer interrupted the Western succession, or rather, he disintermediated it. As with most things in history, to contemporaries, things are either not actually so sudden, or they only seem sudden to isolated elites in denial due to normalcy bias. In this case, for a century at least, the Western emperor had largely been a figurehead for the Germanic magistri militum who held military power. When Odoacer killed the magister militum Orestes and deposed the last emperor (who was actually the magister's son), he was merely (finally) cutting out the middle man. At the time, contemporaries did not think much of the development, so naturally did it flow from the circumstances, and only later did historians begin to mark it as a major transition. But it was still the final removal of succession - even if the office was mostly ceremonial - from elite oversight, that is, from the generals who had been the de facto power in Rome during much of the Dominate.
Maximinus Thrax says Senators are cucks. "Just kill...I don't even wait." From imperiumromanum.pl
The Crisis of the Third Century is much more interesting. There were four dynasties during the Principate, but even when each of these ended, there were brief periods of at most a few years before stability returned. During the Principate, there was a tradition (if not what we would recognize as a formal institution) of future emperors being made the adopted sons of previous emperors, and then being chosen by the Senate. Earlier in the Principate dynasties, Senators frequently had military experience - they retained their connections with the legions and hence, the military's respect. But the trend during the Principate was toward less and less military experience among the elite families that produced Senators, and already by the Antonine dynasty, the military experience and connections of Senators had decreased dramatically. At its base, civilization is always backed by a threat of violence, but the more distant that threat is, the more pleasant our lives are and the more successful we could say the civilization is.
But by the time of the Severan dynasty, the emperors focused only on keeping the army happy (Alföldy 1974), and when one side (the military) can use violence with no real threat of retaliation (from the Senate), there is clearly an unstable equilibrium. Thus, when Roman soldiers assassinated the last Principate emperor Severus Alexander in 235, and the soldiers of the legion rather than the Senate then decided Maximinus Thrax was emperor, we see another example of elites losing the power to control succession. Maximinus never even came to Rome while he was "emperor" (read: general who threatened to kill others and avoided being killed for long enough for contemporaries to bother writing down his name). Maximinus was, during his brief "reign", openly adversarial with the Senate, who rightly feared him when he did approach. This way of becoming emperor - a "barracks emperor" with little interest in affairs in Rome or ability to govern the empire - continued until Diocletian's power-sharing arrangement in 284. The Crisis had begun.
The succession process in the United States would seem to bear little resemblance to that of the Romans. Indeed, our regular elections were set up partly with their example in mind as a warning. Though we've had leaders popular for their military record, it's not the elites' changing relationship with the military that makes the analogy here. We choose our leaders through direct election by citizens, and since very early in the republic our choice has been constrained by two elite-controlled organizations called political parties presenting us with their candidates. It's difficult for most Americans to admit this, but this process (and, until the early 2000s, the mass media oligopoly that supported it) is how the elites controlled succession - and it's even harder for us to admit that this may have been for the best. While the franchise has expanded over time, it's not obvious that each step in turn immediately changed the kinds of choices the parties were offering for national leader. Starting around 2010, however, the tone of politics and the kind of candidates offered underwent a rapid shift - maybe not coincidentally, now that everyone had a smartphone and social media.
This is how the elites have lost control of the succession process - in our case not by losing respect of and control over the military, but by a colluding media and party elite losing control of the information ecosystem and candidate selection process. The analogy here is that Trump is Maximinus Thrax, the first outsider who rose to power using a new process to circumvent the old elite-controlled pathway to leadership, with assassinations of character rather than physical bodies. This is why conventional candidates have been roadkill, like clueless, ossified Roman Senators during the Crisis, who might have continued maneuvering among their colleagues in Rome like their forebears did, not understanding how irrelevant such efforts had become - while generals far from Rome tried to get into position to assassinate the current one, wherever he might be encamped. If Gavin Newsom succeeds in turning Trump's social media appeal-to-the-masses approach against him, he will just be the next in a succession of social media barracks emperors - and the elites then will have lost control of the succession process, and the Crisis of the Twenty-First Century will have begun.
Alföldy, G. "The Crisis of the Third Century as Seen by Contemporaries." Greek, Roman, and Byzantine Studies 15 (1974): 89–111.
Sunday, May 18, 2025
Protect Your Slack, Delay Moloch: Why You SHOULD Defend Yourself With Artificial Rents
Inspired by Behold the Pale Child at Secretorum: "the arc of the universe is long, but it bends towards Bakkalon." (or Moloch. Moloch, at the bottom of the Darwinian/economic/political gravity well!)
The point of life is to be happy. How to go about this is mostly biologically determined. Yes, it's good to make others happy if you can, and to have making others happy make you happy as an incentive; for most of us, as social animals, this is also biologically determined. This position is that of a modern-day Epicurus, enhanced with and connected to facts about the natural world and our place in it. Not very controversial, you might think.
But I suspect that many people in the rationalist blogosphere will find it incredibly selfish to think first and foremost of oneself and one's own happiness, instead of taking the utilitarian (more specifically, Parfitian) long and wide view of everyone's happiness. (This more "selfish" position is not necessarily just individual hedonism; it would include having birthday parties for your kids instead of donating that money to dig a well in the developing world.[1]) In a curiously Calvinist-adjacent take, the implied position of the EA world (and of tech capitalists telling young people their identity comes from working unhealthy hours and sacrificing the rest of their lives) is that you should de-emphasize your own happiness here and now, since it's such a small drop in the ocean of possible conscious experience throughout time.
And yet if people are biologically limited by the link between their happiness and the amount of work they can do - and the kind they can do, and for whom they do it - and they are - then what you're asking is for many people to sacrifice their own happiness for an uncertain outcome, in service of an uncertain philosophical position.
The position of working 110% all-out all the time is not just something from the world of Effective Altruism (EA). In a recent post on the Slate Star Codex (SSC) subreddit, in a discussion about the Musk-like approach of constantly fast-forwarding everything and having work be eternal crunch time, a commenter stated that once in a great while such a push is okay, but it's not sustainable. I would go one step further: I want to enjoy my life; working hard diminishes that; focusing on any one thing to the exclusion of most others diminishes that; and you should avoid crunch time and overwork wherever possible. (That is, I value slack - see the Church of the SubGenius - and I will defend that slack if necessary, even if I have to do it surreptitiously.) Wanting to enjoy your life, and to do more things you directly enjoy more often and fewer instrumental things, is not something to be ashamed of. That's why I'm posting it online and telling you it's good to feel the same way.[2]
It's true that if everyone thought this way, then life-saving and life-improving technologies would progress much more slowly. But here I'm taking the (apparently very hard to grasp) position that I neither want to work that hard, nor want to get in the way of people who do. I say in all seriousness: good for them; I'm glad we have people built this way![3] But don't feel bad if you're not one of them - and you're almost certainly not. I'm definitely not, and I feel great about it! I've even turned down promotions for this reason. Again, not controversial, I would have thought. But it feels very much like an emperor's-new-clothes position to take.
The opposite of slack is hypercompetition, which I don't have to describe further to anyone living in the developed world in the 21st century, and I would argue a big part of Moloch is hypercompetitiveness (Moloch in the sense of Scott Alexander's synecdoche for a self-perpetuating system with serious and unintended consequences that benefit nobody). There is only so much work you can do; you need some slack, and though our modern Molochian culture has trained us to hide our slack-seeking from ourselves, we do it, or we burn out. And part of the outrage when someone is noticed guarding slack comes from our hiding our own desire for slack from ourselves (read: reaction formation, the predictable reaction to seeing others fulfill our fantasy). (See: "tears of rent-seekers" regarding taxis, academia, government, or any other area where people have, goodness forbid, given themselves some extra slack to help them enjoy their lives.)
Other strategies: shrouding - which normally means companies avoiding price competition by making their pricing opaque, but which works in the labor market too, when workers cooperate to obscure measurement of output (outlawing payment for piecework was a major victory). Another: avoidance of direct market exposure, or of any situation where your livelihood rests on reacting in real time to developments - usually, the more layers of an organization there are between you and a customer interaction surface or competition with other organizations, the quieter your life is. (This must be balanced against the risk of paroxysmal collapses; the cycle time of your class of organization is relevant to your choices here - nations last centuries, companies years or decades if already long-established.)
Some concrete examples are in order, of how you can and should protect slack and benefit your life by erecting artificial rent structures.
SITUATION 1 You're the leader of Organization A. You believe in what the organization is doing, genuinely care about the people there and want them to have good lives, and as a result you "leave some money on the table" by not expecting them to work that hard or otherwise sacrifice their well-being to the organization, as long as they keep the wheels turning.
Then Organization X comes along (for the Parfit-style calculators out there, let's say it has the same number of people), which does NOT care about its people this way, and they are constantly sacrificing themselves, or at least on a sort of psychological Malthusian frontier (of stress rather than starvation.) This might well be an Elon Musk company. Organization X eats Organization A's lunch, and Organization A is destroyed or absorbed, along with the lifestyle of the people in it.
SITUATION 2 Same as above, but you're the leader of Organization B. You know it is likely that if you do NOT drive your people to self-sacrifice, then a Muskite will drive theirs in such a way, and then they'll come for you. So for your organization to continue existing, you have to work them to the point of self-sacrifice. You do this, and keep existing, but the people who work at your organization are now miserable.
SITUATION 3 You're the leader of Organization A, same as Situation 1. Except you have a plan. You want your employees to have a good life but you know that the Muskite misery engines out there like Organization X will come get you. So you make a couple calls to a governor or legislator, take them golfing and make some arrangements, etc. Organization X now finds you have an administrative or legal moat - an artificial rent protector - for example, to do what your org does they have to be in a certain consortium and no one will let the Muskite org join, or Organization X can't operate in a certain business in a certain territory, unless the workers within Org X get lots of protections. You know this can't work forever, but it will work for a while, and benefit the people you care most about. Organization X loses its advantage in being willing to essentially trade personal slack for victory. People on SSC read about this, and cry their eyes out talking about Rents, and how you're immoral for depriving the rest of the world of the fruits of your labors (invisible tragedies, etc.)
I used to join in with the "ha ha, rentiers dying, suck it taxi drivers" until I realized that within a few years, AI will be able to do all of our jobs, and the value of labor will race to zero. Of the strategies I've mentioned, only legal artificial rent structures have any chance of lasting for any length of time. So I'm unashamed to admit I would rather work for Organization A in Situation 3, and unless you're the 1% of the 1% in productivity, you would too. (I hate to be the one to tell you, but if you think you are a 10x 1% of 1% superstar, you are much more likely to be delusional than an actual superstar, and the angrier that statement made you, the more likely you're delusional.) Of course, sometimes the rents come "honestly" from an innovation - but then again, even patent protection is an artificial rent, since the protection extends beyond the innovation itself. Mostly rents come "artificially" from barriers like the ones I've described: taxi medallions, medical licenses, etc., although in most cases there's usually at least some non-bullshit reason for the certification, guild membership, or whatever it is (e.g. it's a quality signal.)
Note that I've written these thought experiments with you in the position of the leader. But you're almost certainly not. If, in a true Rawlsian approach, you fell out of the sky at random into these thought experiments, you'd probably be a rank-and-file employee. In that spirit:
SITUATION 4 You're an employee (not the leader) of Organization A. You believe in what you do and what the organization stands for. Your leader seems to genuinely want everyone to have good lives and doesn't work anyone too hard. As you smirk and murmur to your colleagues at pool parties, this is because the leader is friends with the governor, and got a law passed artificially protecting you from competition, which is why you have a good income without working too hard.
Then the leader dies or steps down, and a new CEO takes over - one who reads SSC and Marginal Revolution. "Enough with this laziness! Company X has their own lobbyists, and we can't wait for them to get the law repealed and be caught off-guard. 80 hour weeks! No vacation or weekends if you want to be considered serious around here! Constant aggressive deadlines! Do it 10x faster! We're depriving the rest of the world and future generations of the fruits of our labor - how selfish of us; think of all the hidden tragedies! Don't like it? Emigrate/quit and go to our competitor, who will probably have to do the same thing to keep up anyway." Would you say "Yes! Finally, our new leader is high-agency, and this is the moral thing to do instead of collecting rents"? Yeah, sure you would.[4] If you do, you burn out, ruining your health and family life, plus you have no more time to read SSC.
Certainly it's a difficult balance to find, and often you're just surfing a temporary inefficiency wave until that wave breaks and you're back in the same Molochian world as everyone else - but you should try to find it and ride it as long as you can. In the long run, we're all dead anyway. If you can have 5 or 10 more years of slack instead of zero more years, you are not being immoral to take it, and (for the Parfitians in the back) you can't be sure that the only thing you'd do by missing out on the slack is making yourself miserable with no other impact, thus doing the immoral thing of increasing the suffering of the universe on net.
[1] I've noticed that the tech world in general, and EA especially, is a haven for those who, in the abstract, are horrified at the existence of slack (or at least that's the non-revealed preference.) In general, consequentialists tend to neglect deontology - the role of duties in what decisions are moral. Consequentialists tend to look for abstract principles for actions to adhere to, but actions are not disembodied principles; they occur in time, and space, and social space - that is, in the context of whatever history and relationships, if any, you have with the people affected. Deontology clears up a lot of the confusion about what to actually do, when to do it, and who to do it with and for. I've also noticed that conscientious younger people tend to be consequentialists, and older people season their outlook with more deontology as they age.
[2] Maybe this whole essay is just my own psychotherapy, justifying the following to myself: as a physician, every time I go home at the end of the day or take a day off, I am depriving people of potentially life-saving treatment. Some physicians, more in previous decades than today, kept this in mind and worked ridiculous hours; many modern healthcare organizations are more than happy to take advantage of this mentality of self-sacrifice to make another cent, and then when you start making mistakes because you burn out, they kick you to the curb. Not unique to medicine of course, but I'm very comfortable protecting my time so I can have slack and enjoy my life, and what's more, I limit my responsibilities to my established patients, and not some abstraction of "possible humans in the universe". If you're a naive consequentialist (who doesn't understand deontology or respect the limits set by biology) you've probably dismissed me as Jeffrey Dahmer by this point.
[3] To beat a dead horse: this is not an anti-hard-work screed. If you like to work hard, focus on one thing and one thing only, and find it rewarding, great! Part of civilization's success is that we've set up a system that rewards you, and where the rest of us also, by diffusion, get the benefit of the wealth and technologies you create. But if your choices start taking away my slack - I'll ask my guild to take our Congressman golfing, after which an artificial moat may mysteriously appear. For a relevant culture-wide take on the same: I once read an account of an American traveler in Japan who said it's great to be a foreigner in Japan - because it's a safe, clean, beautiful, quiet place, due to the crushing social obligations of Japanese culture that keep it this way, and as a foreigner you can free ride on this. But you obviously shouldn't do anything to make it harder to keep the country that way!
[4] SSC surveys have consistently shown that oldest siblings are more likely to be readers. Though it's a stretch, it does make me wonder if an oldest-sibling-rich group concerned about these topics might tend to lack a healthy level of resource anxiety (no older siblings to finish all the dessert before you, hog the TV or soccer ball, etc.) This would lead them to always assume that protecting slack can only be about stupidity or laziness - "Aw, we ALWAYS have to stay on the little kid playground because of them!"
Monday, May 12, 2025
On the Good of Young Men Having Their Asses Kicked
I recently visited a martial arts school for kids, and was immediately impressed by - something. It took me a minute to put my finger on what I liked about the place. It was that they were serious, and firm. The instructors wanted these kids to get better, and they didn't need to crack a joke every minute to defuse tension, or even be especially kind about criticizing someone's technique. And the kids responded well to it, and were focused, and improving. I found myself wishing to see more of this approach, and then wondering why.
Young men having their asses kicked by superiors genuinely interested in the improvement of those young men, is an individual and social good. I express my concern and record my defense herein because I think many young men today should have their asses kicked more. If you're a young man reading this, know that I was once a young man; also, that I should definitely have had my ass kicked more. Below I define ass-kicking, and explain why I believe this.
By "ass kicking" I don't mean physically, and I also don't mean pointless abuse. What I do mean is this - in second person to help you imagine and identify with it:
- there is a person with higher status than you
- they are training you and/or managing you, and they provide intense, frequent negative verbal feedback and potential consequences for underperformance...
- for reasons in your best interest (this is critical)
- who you won't avoid - because you recognize that tolerating their very negative feedback will help you improve as a person, at specific skills, and achieve your goals.
- "Higher status" means the person has objective, measurable achievements that place them unambiguously above you - money, artistic production, athletics, climbing some ladder - that you are also in. If you're trying to be a better electrician, you don't care if an investment banker or mountain climber gives you critical feedback.
- This intense, frequent, negative feedback is unpleasant for many reasons, among them that it concerns something you care very much about - some ability or position that you have chosen as part of your identity. The unfortunate paradox is that meaningful negative feedback hurts, and it has to hurt at least a little, if you actually care about the thing you're getting feedback about.
- The person is actually trying to help you improve, often to very high standards - this is why it's not abuse - but their concern in helping you improve takes precedence over hurt feelings. Hurt feelings take time and attention to avoid, so by virtue of your superior not having to consider them, you improve faster. What's more, during ass-kicking, the atmosphere is serious. There is no tension release mechanism other than improving your performance. (As an aside, the ass-kickee often attempts humor in these situations, to his detriment.)
- You choose not to avoid the unpleasantness because you know this experience is in your own best interest, and therefore despite its unpleasantness, you choose to carry on; or you're in a setting you can't leave (e.g. the military) but fortunately your superior is trying to improve you rather than just abuse you.
Why is ass-kicking a good thing? And why am I focusing on young men?
Why am I specifying young men? Let's broadly define "young" as 13-30. After this developmental window, it is very difficult to change identity and personality in the way that ass-kicking does, and in particular to obtain the benefits such experiences can produce. And I find that it's usually men who have a personality structure and defenses that most benefit from such experiences. A young man's psychological defenses involve a good deal of narcissism about how tough, strong, and awesome he is. When encountering situations suggesting otherwise, he rationalizes, avoids, or attacks. If anyone tells him he's not the greatest thing since sliced bread, he denigrates and/or retaliates and/or disengages. But when it's his superior (his supervisor in a job he wants to advance in) or drill sergeant doing it, and he can't rationalize, avoid, or attack, he has three choices: a) fail, b) be miserable because he can never understand that they're not just abusing him personally, or c) he "gets it" and grows up and improves, not just in specific skills but in overall character.
It is my suspicion that, not only is ass-kicking happening less often, but also that option c) is being delayed in men's lives and more often happening during romantic relationships; and romantic partners are not enjoying the expansion of their near-parentified duties. Of course it's not only men who can ever benefit from ass-kicking, and certainly not all men will benefit from ass-kicking based on their constitution, but in my empirical observation, in general young men benefit most from ass-kicking.
Why is ass-kicking good? Beyond (obviously) the specific skills and professional identities that are being quickly learned and grown, the general benefits come down to three factors.
- A. We learn to control our negative emotional reactions and decouple them from the person providing the feedback. This is necessary unless you plan to go through life always killing the messenger (which some men certainly try to do.)
- B. We learn to recognize our flaws and shortcomings and tolerate the distress arising from them, and to turn that energy into something positive by working on them instead of being angry about them, denying them, or avoiding them. We also learn that our position in a hierarchy is not the entirety of our worth and identity. (Note that A and B are really both forms of "tolerating the distress of being at the wrong end of a hierarchical disparity." This both makes young men better able to work in groups, and produces empathy, which they might otherwise lack when they are later at the top of such an imbalance, not to mention improving reality-based confidence.)
- C. Not only do we decouple our emotional reactions to the person and the message, we learn to respect the person and recognize that they are helping us, even if it wasn't fun at the time.
A, B, and C correspond basically to "I have a long way to go to be a badass, it's okay that I have a long way to go but it's up to me to improve and I can improve, and while it's not fun now, I recognize that my superior did me a favor and that they're in the position where they are for a reason so I will respect and defend them to others." It adds up to the cliche character-building as well as dealing with adversity, being able to function in authority structures and understanding the basis for legitimate authority, i.e. that authority is not synonymous with force. In terms of Kegan and Chapman's hierarchy, ass-kicking is a maturing process that helps young men graduate from level 3 into level 4, and failure to do so has predictable consequences for broader society (see last paragraph.)
To be clear, nothing herein should be taken as justifying abuse. In fact, I think outlining the characteristics of ass-kicking helps us draw a distinction between ass-kicking and mere abuse. And even when an ass-kicking superior intends the ass-kicking constructively, to improve the ass-kickee, if the ass-kickee can't tolerate it, they should be able to quit (withdraw consent.) Abuse is non-consensual, and is about pleasing a sadistic abuser rather than (in the long run) helping the recipient. Those of us nodding along with this essay and agreeing that ass-kicking is a good thing and was a good thing for us specifically, are usually still able to look back and distinguish between a hardass who you maybe even hated at the time but for whom in retrospect you feel gratitude and respect - versus a bully with an anger problem. (Of course abusers sometimes try to trick us by pretending to be ass-kickers.) Many readers will by this point be thinking of Gunnery Sergeant Hartman from Full Metal Jacket (note these links are NSFW and contain slurs) - he is hard but he is fair; he directly states you will not like him, but he is trying to help his recruits and he tells them so. He is clearly pleased when they improve. He is an ass-kicker. In contrast, Alec Baldwin's character in Glengarry Glen Ross is just an abusive bully, and the ages of some of the men in the meeting suggest they are beyond the useful ass-kicking window anyway. He explicitly tells them he doesn't care about them and just wants numbers for the company - figure out on your own how to do it, or hit the bricks, pal. Without Good Result A above, young men are more likely to keep thinking everyone who tells them something they don't want to hear is just another Alec Baldwin humiliating them.
Why did I write this?
It's my impression that opportunities for ass-kicking have decreased over the past half-century or so, at least in my country, the U.S. Why? I suspect it's a combination of our decreasing tolerance of direct-speaking authority figures, and constant consumer messaging: that you are special, you are the best, you should never be uncomfortable, don't listen to people who make you feel that way. Those two reasons may or may not in fact be the same thing. (I intentionally use "impression" and "suspect" not as weasel words but as clear signals of how you might weight these claims.) Therefore, as young men's opportunities for ass-kicking decrease, I predict America will face a worsening epidemic of narcissistic, oversensitive, immature, and adversity-intolerant men, who blame everyone else for e.g. why they couldn't finish college or hold down a job, and who can't tell the difference between bullies and legitimate authority. I leave it to the reader to decide if this trend is already visible.
Monday, February 17, 2025
Integers Are a Useful Fiction, But Math Is Real
Integers have no existence independent of our brains; they are a useful fiction humans have developed, extending how our nervous system operates, but not pointing to something real about the universe outside of our skulls.
I am not a mathematician or logician, and I concede that this is not a formal proof, but rather a collection of circumstantial observations, pointing toward one conclusion. I may of course not be aware, or not have processed the implications of, work already done in the field that supports or negates any of these arguments or has relevant implications. If you are aware of such, please comment below.
There are several positions on the objective reality of math, "out there", in the objective universe separate from human experience. The tension is this: math seems "real" in that it works, and is useful, and yet, it is very unlike other things in the world. Is it "real" or a human creation? It's commonly taken that there are three positions regarding the reality of math:
- Platonism (or realism) – mathematical objects are real but abstract and outside of time and space, and our ability to apprehend them in the manifest world reflects this.
- Nominalism – math is empirically derived from our experience of the world. (This is agnostic to the question of whether abstract mathematical objects exist.)
- Fictionalism (or formalism) – math is a useful trick, employed by entities with specifically-constructed nervous systems; it is ultimately a language game that we use to make sense of the world. Some fictionalists would say this means that math is false, while others say this means it is meaningless, and/or that the property of truth or falsehood does not apply to math.
While typically these accounts seek to address the reality of mathematical objects generally, here I'm only interested in one kind of object: integers. Indeed, in Wigner's words, mathematics would be "unreasonably effective" if fictionalism were true for all of it. I hope this is part of a continuing discussion about the reality of mathematical objects in general, and the implications of mathematical objects being a fundamentally different kind of entity, which I address at the end of the post.
- Numbers are derived from the logic of Peano arithmetic. Peano arithmetic assumes the existence of the unit; that is, by positing the successor function, it assumes that the unit exists. The idea of the "fiction of the unit" is at the core of this claim about integers being invented - even in the formal construction of numbers, the unit is just assumed.
- While not all numbers are integers, we do use a system of integers to represent the digits of all numbers, including irrational ones. Yet most numbers cannot, even in principle, be accurately represented by integers. At least some of the numbers that describe nature, like pi and e, are transcendental, and cannot even be represented algebraically. It's worth pointing out that the inability to represent transcendental numbers with integers is not merely a trivial artifact of their infinite length - even though we cannot write out pi's digits, the Kolmogorov complexity of pi is very much finite.
- Historically, each time there is an innovation in numbers, it is first rejected as an absurd fiction, then accepted as a useful tool. This is well-illustrated by the history of negative numbers and then imaginary numbers. This is exactly the pattern we should expect if in fact numbers are a useful fiction - except for integers, since our nervous systems had this quirk built in, and we never had to produce them effortfully in a rule-based system; we never had to get past the stage of disbelief.
- Over a century ago, set and number theory encountered difficulties, most famously crystallized in Gödel's incompleteness theorem: one cannot prove consistency within any consistent axiomatic system rich enough to include classical arithmetic. Which is to say, integers necessitate inconsistency or unprovability. When asked what consequence we should expect in the real world from integers being a convenient fiction, this is the answer - for millennia we used units without trouble, but as soon as we attempted to ground them in logic, around the turn of the twentieth century, we ran into difficulties.
- You might object that if our nervous systems apprehend a thing, it must be real. This is incorrect, or rather, it's not clear enough about what "real" means in this context. No, it's not that numbers are a hallucination - even if you hallucinate a dog, this doesn't disprove the existence of real dogs; if you hallucinate an abstract color pattern, this doesn't disprove the existence of colors. Rather, numbers are a kind of thing that exists only as subjective experience. We experience pleasure, pain, and emotions; they are subjectively real, and they correspond to events in the world outside our nervous system, but they cannot exist separate from a nervous system. Integers are the same category of entity.
- Expanding on instinctive human knowledge of whole numbers: for millennia, humans have universally used numbering systems with units (often just one, two, and many) without any formal grounding. Some animals (including non-primate ones, e.g. crows) can count at least this high. Note that biology having evolved representational tissues (nervous systems) capable of representing mathematical objects is certainly not an argument against the reality of those entities - indeed, if we live in a universe where abstract entities unidirectionally cause things in the physical universe, you would expect some of those rules to appear in the brains of replicators that evolve in that universe, who will be imbued with some innate instrumental math sense useful for survival. The way we do this is with neurons, which convert the input of continuous reality into a discrete fire/don't-fire output. This is why the world seems to have discrete, countable objects when it is actually composed entirely of fuzzy gradients. Animals therefore do not even need language to be deluded into believing in integers - this is where the "conceit" of integers is rooted: not in some philosophical mistake our ancestors made, but in the basic mechanism of how we perceive the world. (To extend integers indefinitely, we do need language.)
- If the claim that integers are an invention of the human brain is correct, then we would predict that intelligent aliens would build a mathematics that did not use integers. Of course there are no aliens to talk to about this, but we have the next best thing: increasingly sophisticated computer systems. In at least two cases, computers have developed mathematics without integers as a primitive concept. In the first case I'm aware of, Wolfram Alpha "rediscovered" math without creating numbers. Later, computer scientists showed how large language models use trigonometry to do addition. There are also intriguing hints that even in our nervous systems, integers are not primitive but rather an adaptation of spatial reasoning. Gerstmann syndrome results from a stroke in (typically) the left parietal lobe; symptoms include left-right confusion and loss of arithmetic abilities. In OCD, there can be a sensitivity to both orientation (lines and angles) and certain numbers. And otherwise-fluent second-language speakers often must still fall back on their native language for both numbers and directions.
- I've deliberately left this point for the end of the list, as it is the most metaphysically eyebrow-raising. If there is no such thing as a unit, only gradients, then there are no boundaries in the universe, and everything is the same object. This argument has even been applied to subatomic particles (see the one-electron theory). If everything influences everything else and there are only gradients, then in reality there is only one unity (the universe as a whole), and the idea of a unit other than the universe itself is logically inconsistent. Nor would the universe itself actually be a unit: if the only binary is existence versus non-existence, then there is only one thing, and nothing that is not that thing. With no boundaries, it is meaningless to talk about a "unit".
Initially I set out to show that math itself was a provincial tool of human cognition, and ended up shrinking my argument to just integers. I do think, based on the observations compiled here, that integers are not a coherent mathematical object in the same sense as other abstract mathematical and logical entities. But I'm not making a claim about other objects, not even about transcendental numbers (which are most numbers) - I have no suspicion that they are invented in the way I think integers are. It seems to me that those of us who would call ourselves materialists in the philosophical sense must recognize that if we believe in both the material world and the reality of at least some mathematical and logical objects - objects which appear eternal and unchanging, outside of space and time - then we are clearly dualists. Obviously this shouldn't sit right with us, which is what motivated my interest. And here I am, left in the surprising position of agreeing with the Platonists, just quibbling over which types of mathematical objects have meaning outside of a human nervous system.
The possible solutions both seem incredible:
- True Platonic dualism. These objects exist outside space and time but are real because they affect us. They are causally asymmetric: mathematical objects affect the material world, but the material world cannot affect them. That causal asymmetry is the key to their fundamentally different nature - they are a true uncaused cause, a true prime mover.
- Platonic monism. These objects are real, but are not causally asymmetric. They can change, and there is some kind of feedback from the universe to math and logic. It is worth recognizing that whether physical constants change over time and space remains an open question, with some evidence in favor (see for instance Murphy et al. 2003 for evidence of change in the fine-structure constant over time, a result that has stood up to scrutiny for over two decades now). But this is more fundamental: how could the nature of a triangle change? Or the nature of identity? The spatial dimensions? One interesting version of Platonic monism is that advanced by Tegmark, who makes the "hyper-Platonist" monist argument that there are only mathematical objects. (Notably, Tegmark is the senior author on the LLM paper I linked above.)
Monday, February 3, 2025
Is There Such a Thing as Color Harmony?
Some colors seem to go together; others do not. The same is true, even more strongly, for musical notes: some form pleasant harmonies, and others are dissonant. In the case of sound, the mathematical difference is clearer: harmonious combinations of frequencies (like a major third) stand in simple whole-number ratios, while dissonant combinations do not, and their waveforms beat against each other. Light waves have frequencies as well, though we usually think of them in terms of wavelengths.
Another interesting difference in our perception of sound is that our nervous system automatically does a Fourier transform for us, before the sound reaches conscious awareness. This is why you can listen to a chord and hear the individual notes, instead of a mess of superimposed frequencies. That doesn't happen when we see multiple colors - otherwise, there would be no such thing as non-spectral colors like pink, and your screen couldn't fool you into thinking you're seeing more than the three frequencies it's actually sending to your eyes. There are also biological reasons that certain colors are important to us by themselves, without context (like red, which could be fruit or blood), in a way that isolated frequencies of sound are not (what cause would our ancestors have had to be especially attuned to a sound at 640 Hz?) Still, I've always wondered if colors "going together", or pleasant mixtures of pigments used in famous paintings, are actually making some kind of harmony, and we're attracted to those harmonies without realizing it.
Above: what comes into your ear. Below: what your brain receives, post-transform. This is a major chord.
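The ear's decomposition can be imitated numerically. Here's a minimal sketch (using numpy, with illustrative note frequencies of my choosing): synthesize a major chord, take a discrete Fourier transform, and the three component notes fall out as separate peaks - roughly what the cochlea hands to the brain.

```python
import numpy as np

# One second of a C major chord (C4, E4, G4) sampled at 8 kHz.
rate = 8000
t = np.arange(rate) / rate
freqs = [261.63, 329.63, 392.00]  # C4, E4, G4 in Hz
chord = sum(np.sin(2 * np.pi * f * t) for f in freqs)

# The transform separates the superimposed frequencies back into "notes".
spectrum = np.abs(np.fft.rfft(chord))
# With exactly one second of samples, bin k corresponds to k Hz.
peaks = sorted(int(k) for k in np.argsort(spectrum)[-3:])
print(peaks)  # three peaks near 262, 330, and 392 Hz
```

The raw waveform `chord` is the single superimposed mess that enters the ear; `peaks` is the decomposed version we actually experience.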
I've considered doing a proper color harmony experiment. As I just said, computer monitors don't allow this - I can't just make images online and ask people which they find more pleasing, because computer monitors only emit red, green, and blue, so we're not actually looking at the pure light frequencies in question. I would have to buy specific LEDs and have people do the test in person. The expense of this hardware exceeds my interest in the result; and if you think we can somehow subconsciously tell the difference between the fake RGB hue and the perceived hue, you're already agreeing that we are effectively doing a Fourier transform in light perception.
So what I did, using the fake color imitations coming out of our screens, was to take a simple artwork with just a few colors and alter them so that they make harmonious or dissonant chords. Below is Piet Mondrian's Abstract Cubes. Since he uses three colors, we can make a chord.

My approach was as follows.
- I found a website that would convert a wavelength into a perceived color (here).
- I took the red spaces as the root note (e.g., the C in a C chord). I chose red as the root because red tends to be visually dominant, analogous to the way the root note anchors a chord, and because red is the lowest frequency. (Why didn't I assign tones to the white and black? While we're stretching the analogy, let's say that black and white are the spatial equivalent of rhythm, an atonal drum beat.)
- I started with "C" (arbitrarily) at 780nm, which means the C an octave up will be at 390nm. (For musicians this is counterintuitive. These are wavelengths, and as we go up the scale the frequency gets higher but the wavelength gets shorter.) Also notice that humans can see just barely one visual octave.
- I used equal temperament, calculating each frequency as frequency = root frequency * 2^((number of half steps up)/12).
- My hypothesis, which I expect will be falsified: individuals will display a consistent preference for each color harmony (i.e., will consistently like major, minor, or discordant best).
- As a side note, to highlight the likely inaccuracy of the colors (remember, we're looking at RGB monitors, so you're not seeing the real wavelengths): I use two monitors when I work, and even between the two the colors are noticeably different. I had hoped the ratios of the wavelengths would at least remain the same, but on one monitor there was a visible difference between colors that looked identical on the other. Further complicating things, our color vision is not equally sensitive across the spectrum - it's much easier to discriminate a 1nm difference in the middle of the visible band than at its edges.

Above is the "scale" that results from this. Below are chord compositions based on the arbitrary C at 780nm. Major and minor are obvious enough; the discordant one is the root, a diminished second, and a diminished fifth. Do any of these compositions look more or less pleasing to you? And is that in accord with the harmony (that is, is major more pleasing than minor, and minor more pleasing than the discordant one)? For reference, below each composition is its respective chord, made with the Szynalski tone generator. Click a sound to pop it out if you want to hear it.
To avoid any special effect from these particular colors, I did another set of three (major, minor, and discordant) starting on the arbitrary E. If there's a harmony effect, the order of preference should be the same for both sets.
(Here I kind of like the discordant one, but I'm also a metalhead who likes diminished fifths.)
Maybe Mondrian knew what he was doing when he chose the root note for his composition. What would happen if I used his red as the root, that is, normalized it to the C below A440, and built chords from that? Here is the Mondrian color scale:

Here are the three harmonies, with the root note as his red, and the same three chords. Which one do you like most? Also, converting colors back into sounds: here, instead of a piano, I used an online tone generator to compare the major, minor, and discordant chords to the chord made by the actual colors in Mondrian's painting. I used a tone generator rather than a piano because, going in this direction (from light to sound), the color frequencies are likely to land between the notes, and I wasn't about to de-tune my piano.
Interestingly, of these, the major chord is the one that most resembles the original work, though the original chord (the harmony from the colors of the original work, normalizing the red to the C below A440) is not a major one:
I hope you've escaped this without developing color-sound synesthesia. For next steps I may put up these harmonies (blinded) for a vote, to see if people choose them consistently. If you want more: at some point I'm going to automate the pixel-counting of some famous paintings, assign wavelength values to each pixel (the Bridge at Giverny, the Scream, Starry Night, School of Athens), and see what kinds of chords those make.
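For anyone who wants to reproduce the mapping, the scale construction described in the method above reduces to a few lines of Python (a sketch with my own function names; the 780nm root and the equal-temperament formula are as stated in the method):

```python
# Equal-tempered "color scale": the root "C" is arbitrarily placed at 780 nm.
# Going up n half steps multiplies frequency by 2^(n/12), so it divides
# wavelength by the same factor; one octave up lands at 390 nm.

ROOT_NM = 780.0

def wavelength_nm(half_steps_up: int, root_nm: float = ROOT_NM) -> float:
    return root_nm / (2 ** (half_steps_up / 12))

def chord(half_step_offsets, root_nm=ROOT_NM):
    return [round(wavelength_nm(n, root_nm), 1) for n in half_step_offsets]

major = chord([0, 4, 7])       # root, major third, perfect fifth
minor = chord([0, 3, 7])       # root, minor third, perfect fifth
discordant = chord([0, 1, 6])  # root, diminished second, diminished fifth
print(major)   # [780.0, 619.1, 520.6]
```

Feeding each wavelength to a wavelength-to-color converter then yields the three swatches for each composition; passing a different `root_nm` (e.g. Mondrian's red) rebuilds the chords around his root.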