Tuesday, June 29, 2010

Should We Plant Rosetta Stones For Our Post-Collapse Descendants?

There are many, many theories about the simultaneous collapse of multiple Near East civilizations at the end of the Bronze Age in the 12th century B.C. To compare history to biology, if the medieval European Dark Ages were the cultural K/T boundary, then the Bronze Age collapse was the Permian-Triassic extinction. In the Aegean, literacy effectively disappeared for some four centuries.


[Image: The Doomsday Vault (the Svalbard Global Seed Vault).]

It's tempting to say that we wouldn't need such a long period to reinvent culture today. The world is more interconnected, so nothing short of a global catastrophe could bring about the same effect, and there is simply more information available in more places. Tempting though these arguments are, it would be nice to have some further insurance that four centuries from now, an illiterate farmer plowing a field on top of what they don't know are the ruins of London will stumble upon a durable and intuitively understandable recording of basic knowledge. After all, it would save your descendants some sweat and tears.

What would we tell them? It should essentially be a naked-eye-readable, Ikea-like manual for how to re-invent civilization, with a basic primer on how to sound out the Roman alphabet, of course. We shouldn't care so much whether they have a list of our kings as whether we can tell them the basics of agriculture, law, physics, and biomedicine.

Such a plan raises the question: who cares? Are we doing this to spread our memes for their own sake? Did the British care, after losing their North American colonies, that despite the loss of immediate revenues to the crown they had just sown the seeds of a future hegemony of Anglophone values? Would the Hittites have cared that they could save the Romans or Persians a misstep or two by making sure their own military and political expertise was handed down through the ages? Such a project could only be undertaken out of compassion for people in possible future Dark Ages. In the short run the American Revolution certainly wasn't a benefit to the British Empire, and as Keynes said, in the long run we are all dead. Indeed it's hard to think about morality in these terms, discussing people who don't yet exist and may never exist, whether or not they're related to you.

Friday, June 25, 2010

Three Thought Experiments About Wealth Distribution

Rawlsian, Nozickian, and Mischelian Worlds

In A Theory of Justice, John Rawls famously argued that the just society is necessarily one whose architects do not know ahead of time what their roles in it will be. Not knowing what role they would occupy in a society they were about to join, rationally self-interested decision-makers would choose to create and be part of a society with a very egalitarian distribution of power. It might have been fun to be a plantation owner in the antebellum American South or a patrician in Rome, but if you fell out of the sky randomly into a social role in either place, the chances were much greater in both cases that you would be a slave. The implication is that society-designers with foreknowledge are suspect because they can bias the game in their favor; even Thomas Jefferson surely believed that a man like himself would prosper in the kind of nation he was designing.

Robert Nozick took issue with the idea that the distribution of power in a just society would be close to egalitarian. He argued that if you can get from that just, egalitarian distribution to a non-egalitarian distribution through steps that are all just and un-coerced on their own, the resulting society must be just as well, even if it is no longer egalitarian. Making the thought experiment concrete: imagine an egalitarian Rawlsian world of a million people where everyone has $100,000, and one of these million is a gifted pianist named Steve. Word of Steve's amazing talent spreads, and he gives a concert attended by all his fellow citizens, each of whom gladly pays a dollar for the experience of hearing him play. Afterward, Steve has $1,099,999, and everyone else has $99,999. Steve is now the richest person in the world by a factor of 11. Is this unjust? If so, exactly which of these voluntary steps was the unjust step in moving from the Rawlsian to the Nozickian world?
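For what it's worth, the arithmetic is easy to check; here's a trivial sketch (Python, using only the numbers from the thought experiment above):

```python
# The concert thought experiment, by the numbers: one million citizens
# start with $100,000 each, and the 999,999 non-Steves each pay Steve
# one dollar to hear him play.

POPULATION = 1_000_000
STARTING_WEALTH = 100_000
TICKET_PRICE = 1

audience = POPULATION - 1                         # everyone except Steve
steve = STARTING_WEALTH + audience * TICKET_PRICE
everyone_else = STARTING_WEALTH - TICKET_PRICE

print(steve)                          # 1099999
print(everyone_else)                  # 99999
print(round(steve / everyone_else))   # 11 -- richest in the world by ~11x
```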

A dimension which Nozick did not consider, but which might affect the moral equation, is the degree to which rational decision-makers vary in their ability to make those decisions. Imagine that after the first performance, all of Steve's fellow citizens are satisfied that they got their money's worth, but most of them say "once is enough." Some fraction of his audience is so moved that they go to a second performance of the same piece. Steve gets richer, and the repeat customers get another dollar poorer. Eventually his audience is reduced to a group of hardcore loyalists who find his performances so powerful, so emotionally rich, that nothing else in their lives compares, and they cease caring about anything else; they go to all his performances, they buy concert T-shirts, etc., and their wealth is transferred a dollar at a time (or more, if the pianist raises his prices) to him, until they are broke. Whether or not Steve is aware of his role in their destitution is an interesting but separate question; for the time being, let's say the lights in the concert hall are such that Steve can't see that he's reduced his fans to rags and bones, and he doesn't mingle after the shows. We can call this world the Mischelian world.

The dimension that Rawls and Nozick neglected in their thought experiments is variation in self-interested decision-making ability among the agents (a variation which may be transmissible across generations). If we add to the agents of the Nozickian world a distribution of rationally self-interested decision-making ability, we create the Mischelian world described above. In the Mischelian world, some agents will be consistently better able to act in their rational self-interest because of superior working memory, rationality, critical thinking, or ability to delay gratification, and this will have an impact on the preservation of their material resources and the subsequent distribution thereof, all without any coercion. For those of us in liberal democracies, this is the world in which we are now living. Whether these traits are dictated by genetics or upbringing is irrelevant - what's relevant morally is that where agents fall along the distribution of these characteristics is not under the control of the agents, or for that matter of the Jeffersons designing new societies (not yet, anyway).
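To see how this plays out mechanically, here is a toy simulation of a Mischelian world - my own illustration, not anything taken from Rawls, Nozick, or Mischel. Agents differ only in a fixed "restraint" score, yet repeated voluntary one-dollar transactions are enough to concentrate wealth and bankrupt the least restrained:

```python
import random

# Toy Mischelian world: one performer, many agents who differ only in how
# reliably they resist paying for yet another performance. No coercion and
# no fraud - just repeated voluntary $1 transactions.

random.seed(0)
N_AGENTS = 1_000
STARTING_WEALTH = 1_000
TICKET_PRICE = 1
N_CONCERTS = 2_000

# Each agent's fixed probability of skipping any given concert, standing in
# for differences in working memory / ability to delay gratification.
restraint = [random.random() for _ in range(N_AGENTS)]
wealth = [STARTING_WEALTH] * N_AGENTS
performer = STARTING_WEALTH

for _ in range(N_CONCERTS):
    for i in range(N_AGENTS):
        if wealth[i] >= TICKET_PRICE and random.random() > restraint[i]:
            wealth[i] -= TICKET_PRICE
            performer += TICKET_PRICE

broke = sum(1 for w in wealth if w == 0)
print(f"performer's wealth: ${performer:,}")
print(f"agents who went completely broke: {broke} of {N_AGENTS}")
print(f"wealth of the most restrained agent: ${max(wealth):,}")
```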


What Happens to Theories of Justice When the Agents Differ in Important Ways?

There are interesting implications for policy that fall out of these experiments, practical as much as moral, and they offer unpleasant suggestions to both sides of the political spectrum. Perhaps most poignantly for redistributivists, it was a socialist (George Orwell) who pointed out in fiction the stubborn re-emergence of class structures in human societies even after violent attempts to cleanse them had been carried out. (The "Inner Party"; hence members of the Chinese Communist Party explaining, with no hint of irony, why the uneducated rubes working in the factories and fields cannot be trusted with free elections and free speech.) This certainly seems to be bad news for attempts at economically egalitarian societies. This tendency of class structures to resist disturbance seems to be a socioeconomic parallel to Le Châtelier's principle: class re-emerges even where strong efforts have been made to obliterate it, even if the new structure rests on slightly different characteristics - blood relationships, the ability to amass wealth, to maneuver in bureaucratic hierarchies, or to parrot dogma as a loyalty signal have all been criteria for these structures at various points in history, including now. There are not many people who believe that individual differences do not matter to economic success, or that individual differences appear entirely randomly across populations rather than in consistent, inter-generationally robust association - the disagreement tends to be over the mechanism that made the agents different (i.e., whether upbringing, opportunities, genetics, etc.). Even Marx had to recognize that individuals could not be expected to produce incommensurately with their abilities. Therefore, it's hard to see how strongly wealth-redistributive policies could matter unless they were carried out continuously, and in that case it is very hard to see how this would not result in an overall slowing of economic growth and therefore less happiness for everyone.

Some implications of the Mischelian world are not entirely pleasant to the more libertarian among us either, as they justify some degree of paternalism. All but the most absolutist libertarians concede that there is a cut-off in the distribution of rational decision-making ability below which individuals cannot be held entirely responsible for themselves. If you went to Steve the pianist's concerts and found that there were children or retarded or senile or insane people spending their last dimes on him, would you be comfortable with that? What if they were "normal", but instead of piano concertos Steve were selling chances in gambling games, or opium, or sexual acts? All of us have more trouble behaving with rational self-interest around certain goods and services. Keep in mind, the question is whether a just society allows Steve to sell such things, not whether he is moral in doing so - if indeed there is a difference between those two questions. If legal distinctions limiting commerce and other types of decision-making responsibility are seen as necessary for justice in a Mischelian world, this could ironically be seen simultaneously as a justification for elitism and paternalism and as a reward for the less responsible and less productive agents.

Perhaps the solution in the Mischelian world is to define a cut-off for humans at the margins of rational decision-making ability, and then keep them from being fully responsible for their decisions. In fact, in our own Mischelian world that's exactly what we do for children and the mentally disabled, though especially with children we don't have the means or resources to determine exactly where each individual's rationality falls at every age. So, in most countries we arbitrarily choose an age of majority. (Straightforward as it seems, this practice is still attacked, as you can see from one hot-under-the-collar commenter at my outdoor-activities blog.)

The problem with cut-offs, whether for products ("alcohol isn't addictive and harmful enough to ban, but heroin is") or for agents ("my daughter is only 14, but she's mature enough to be allowed to drive"), is that they're coarse-grained approximations (often using proxy indicators like age), and even for those of us above the lower cut-off on the rational decision-making spectrum, there are no doubt lots of people with better overall rational decision-making capabilities than ours. These cut-offs are also based on long-term rational decision-making ability, deficits in which are often obvious after a brief interaction (with the retarded or demented) but not always. Should a just society require you to assess the rationality of everyone with whom you engage in commerce, or the rationality of the particular decisions involved in each particular transaction? When I was in college, a mom-and-pop sandwich shop opened up that had 25-cent burger specials on Friday nights. Like every other bottomless-stomached college student, I showed up every Friday and got in line and ordered six of them. Eventually they put up a sign asking us to please consider ordering something else, because they lost money on every burger. Of course, we all had a good laugh at this, and ordered more burgers, and soon after they closed. Clearly their belief that our good hearts would prevent us from taking advantage of their kindness was false, and we knew it. Should they have been protected from making this decision, or should we mean students have been prevented from taking advantage of them? Having met the couple and talked to them briefly, I can attest that they seemed non-retarded and non-demented, but I still might be smarter than they are. Does that make a difference? Should we be protected, from heroin or from Steve's heart-rending performances (or should the more rational people be handicapped)? Is the cut-off for children and the disabled only a result of our limited methods and resources? That is to say, in a future of finer-grained social justice, is it desirable for technology to allow us a gradient of protection for individuals who are even a little bit unable to control themselves in certain circumstances - for example, your blogger, who has confessed to his chocoholism?

The issue with the Mischelian world is that the agents who make up these societies differ in ways that result from nature and are not (yet) eliminable by the application of justice, even though those differences definitely affect whether a just state obtains. It also seems that the coarse-grained way in which we protect the irrational from injustice results only from limitations in technology and resources, rather than from a positive decision that our protections should be only so detailed and no further. I believe the discussion will proceed in this century, and if it goes far enough in my own lifetime, Safeway will start refusing to sell me Kit Kats.


Conclusion: Wealth is Only a Contributor to Utility, But So Is Genetics

It seems obvious that the reason anyone cares about economic egalitarianism is that wealth relates to utility; that is, wealth is a proxy indicator of happiness, which is what we actually care about and why you're reading this. It is not clear that there can be such a thing as a just society that cares about economic equality but not utility equality. Gross National Happiness is the best-known direct quantitative indicator of utility. As with wealth, utility is in part dependent on agent-specific traits which differ along a spectrum, and there is some indication that there is such a thing as a happiness set point. Needless to say, like many factors influencing rationality, the happiness set point is a given, not subject to the decisions of a just society but absolutely impacting them. In a just society, do the naturally happier individuals owe cheering-up efforts to the naturally sadder? To refer to another of Nozick's thought experiments, the individuals at the low end of the happiness set-point curve are, in a just society that works to raise their utility, in a sense anti-utility monsters: in an effort to bring the unhappiest up to a certain cut-off point, the utility of the happier is consumed, and overall utility decreases.

Finally, I have alluded twice to our sense of justice not yet being able to eliminate these differences, and therefore having to enact justice in the way society deals with the hands its agents are dealt. It is my sense, at least in the U.S., that those who are most in favor of wealth-redistributive policies are likely to be the most strongly opposed to even investigating whether there is a genetic basis for the rationality differences between agents, much less whether our sense of justice dictates that we do something to improve them. Such differences could certainly not be the only reasons for the persistence of inequality, but if they exist they would be a root cause. In the coming decades we will understand much more about the genetics of cognition, and going forward, policy discussions about wealth inequality must make reference to these findings.

Thursday, June 24, 2010

How Robust Would the Data Cloud Be To Nuclear Attack?

I imagine it would be more robust than if the data were all confined to our individual computers - but are back-ups sufficiently separated, geographically speaking, to minimize disruption from EMP, power interruption, or outright destruction? Is this part of the operating procedures of major data-storage and internet-infrastructure companies?
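One crude way to make "sufficiently separated" concrete is to check the pairwise great-circle distance between replica sites against some regional-disaster radius. A minimal sketch of that check (the site coordinates and the 500 km threshold are hypothetical, not anyone's actual procedure):

```python
import math

# Flag any pair of replica sites closer together than a chosen
# "regional disaster" radius. Site list and threshold are made up.

def great_circle_km(a, b):
    """Haversine distance between two (lat, lon) points given in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))   # Earth radius ~6371 km

replica_sites = {                # hypothetical data-center locations
    "us-east": (39.0, -77.5),
    "us-west": (45.6, -121.2),
    "eu-west": (53.3, -6.3),
}

MIN_SEPARATION_KM = 500          # hypothetical disruption radius

sites = list(replica_sites.items())
for i in range(len(sites)):
    for j in range(i + 1, len(sites)):
        (name1, loc1), (name2, loc2) = sites[i], sites[j]
        d = great_circle_km(loc1, loc2)
        status = "OK" if d >= MIN_SEPARATION_KM else "TOO CLOSE"
        print(f"{name1} <-> {name2}: {d:,.0f} km ({status})")
```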

Wednesday, June 23, 2010

Hyper-Realistic Video Game Features Character That Wastes Countless Hours on Video Games

Los Angeles - In what industry leaders are calling a "revolutionary" development, a new video game features characters that, instead of performing the rational, goal-directed actions their players wish of them, sit distracted in front of computers for hours of each game-day. Gamer Jon Rutland isn't so sure. "I want the character to go out and rack up points so he gets to the next level, but for some reason he just sits there in front of a screen. It's so frustrating." At press time, the executive center of Rutland's brain had no comment.

A Venture Capital Model for the Entertainment Industry

If you're like me, you wonder a) what dummies actually pay to see movies anymore and b) what dummies actually buy CDs. At the same time, I realize this attitude isn't sustainable if I want to keep seeing movies and listening to new music. This is why I find the upcoming Nazis-on-the-moon Finnish film project Iron Sky so interesting: it's being made piecemeal, based on donations from people who want to see the final product.

When you think about art, you have to think about the economic aspects of its creation. People can write short stories or books as one-person, essentially zero-cost projects, whereas a visual artist needs special materials and is more likely to require training in specialized motor skills. Musicians need a special room and lots of special equipment (including their instruments) to create recordings. In film, even a low-budget independent movie is a large, time-consuming affair requiring multiple administrative staff, including people whose only job is to coordinate time and resources. (When was the last time you needed a set manager to write a novel?) Consequently you expect that capital-intensive forms of art must be more sensitive to sales to be sustainable: the more a medium costs to produce, the more influence financiers will exert over the final product. But profits from non-live art come from making copies, and copies are exactly what the information age makes nearly free; consequently, if capital-intensive art is to survive the information age, it must find ways to innovate.

This is in fact what we're seeing. Erosion of profits happened first in the music industry because of hardware requirements - in 2001 it was much harder to watch and copy movies digitally than music. One interesting idea was Radiohead's pay-what-you-think-it's-worth approach, which was successful for them. Whether this will work sustainably for them or anyone else is an open question, though the consensus is no. Now the technology gap that put pressure on the music industry before the film industry is closing quickly, and in Iron Sky we're seeing the first example of an end-user-financed model in film. (I'm sure this has happened before, but this might be the most widely reported instance so far.)

My prediction is that in the near future this model will go one step further. By that I mean that donations are inferior to investments, because with donations you have no input into the characteristics of the final product and you don't benefit from its success. (If I gave $100 to help Iron Sky I would be happy to see the final product, but very pissed off if it ended up being the top-grossing film of the year.) A venture capital (as opposed to donation) model is good news for the film-going public, because it tightens the loop between the film-consumer's taste and the type of movie that gets made, not only because the public votes with their dollars, but because fewer decisions are polluted by insider politics; there are reasons less noble than artistic value or even profit that affect decisions in this and every other industry (e.g., projects not going forward because so-and-so didn't like such-and-such in college, or slept with someone's ex-wife, etc.). But I don't think consumers will be satisfied with donating money to a pre-formed idea. Imagine the film-financing model of the future: a team of scriptwriter, director, producer, etc. forms with a rough idea and announces they're taking donations. Buzz goes up among the appropriate sectors of the public (science fiction fans, human rights activists, whatever demographic the idea will appeal to). People can donate $1, $10, $100, whatever they want, but factions can form: faction X will only give their money if you get Steve Buscemi to play a certain character, faction Y will only give theirs if you set it in colonial Mexico. If the production team is smart, they'll lay out rules in advance to quantify script-impact per dollar, some modest revenue sharing, and (importantly) refund guarantees of X cents on the dollar if the project never gets made. Terry Gilliam might be in trouble in this future.
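For concreteness, here's a rough sketch of how such campaign rules might be encoded - the class names, figures, and conditions are purely hypothetical, not a description of any existing platform:

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical fan-financing campaign with faction pledges: each pledge can
# carry a creative condition, and the published rules fix influence per
# dollar, revenue share, and the refund rate if the film is never made.

@dataclass
class Pledge:
    backer: str
    amount: float
    condition: Optional[str] = None     # e.g. "cast Steve Buscemi"

@dataclass
class Campaign:
    goal: float
    influence_per_dollar: float = 1.0   # "script-impact" votes per dollar
    revenue_share: float = 0.02         # fraction of gross returned to backers
    refund_rate: float = 0.75           # hypothetical cents-on-the-dollar refund
    pledges: list = field(default_factory=list)

    def pledge(self, backer, amount, condition=None):
        self.pledges.append(Pledge(backer, amount, condition))

    def faction_weight(self, condition):
        """Total script-impact votes behind one creative demand."""
        return sum(p.amount * self.influence_per_dollar
                   for p in self.pledges if p.condition == condition)

    def refunds_if_never_made(self):
        return {p.backer: p.amount * self.refund_rate for p in self.pledges}

camp = Campaign(goal=7_500_000)
camp.pledge("fan_a", 100, "cast Steve Buscemi")
camp.pledge("fan_b", 250, "cast Steve Buscemi")
camp.pledge("fan_c", 10, "set it in colonial Mexico")
print(camp.faction_weight("cast Steve Buscemi"))     # 350.0 votes
print(camp.refunds_if_never_made()["fan_c"])         # 7.5
```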

There's no reason this wouldn't work with music either. Not that Metallica needs seed money to make new albums, but if they did, how many fans do you think would say "here's a hundred bucks, do whatever you want" versus "we want something that sounds like the second Justice album"?

The intermingling of arts and commerce has always produced an uneasy tension, again particularly in those arts with capital-intensive production processes. But before anyone gets too nervous about the financial despoiling of the film and music-production process through the mechanism I just proposed, keep in mind that painting in Europe seemed to get on just fine with this model for the first three or four centuries of its independence from centralized religious and political rule. It was called the patron system. No doubt there are paintings from Michelangelo we don't have because of this; no doubt we enjoy the ones that we do have.

Added later: if there is a medium whose revenue models will likely remain more conservative, it's video games. As of 2009, major commercial video games are funded to a maximum of about $5M (figure from Wikipedia, attributed to McGuire and Jenkins), and the top 10 MMORPGs of 2008 were each grossing anywhere from $50M to $500M globally - and, relevant to the film industry, 2008 wasn't the release year for any of those games. Name me one film that's ever grossed $50M in any year but its release year. These are ongoing revenues for products that cost at most $5M to develop. Video games are also more capable of controlling their own distribution: copy-protection technology is better, specialized hardware is sometimes required, and MMORPGs require interaction over the web, which makes piracy more difficult. Video games will likely keep the old profit models, because they can.

Compare to the American film industry in the same period. The top ten 2008 MMORPGs total about $2.5B in global revenues, almost exactly what the top 10 films of 2008 grossed in the U.S. I also believe the film grosses are 2008 only, i.e., they don't include DVDs etc., but I'm not sure. (That might temper enthusiasm for the ascendance of video games, but it ends up being less important than you would think.) Interestingly, the global markets for video games and for movies are also estimated to be about the same size, ranging from $7B to $30B, again according to the McGuire and Jenkins report. Again: the top ten video games were already-released games with minimal ongoing costs, while the top 2008 U.S. film earners were all 2008 releases.

To make the point more clearly I looked up budgets for each of these movies. I'm assuming that marketing costs are already included in the budget figures. We can start to see the obvious difference here:

Movie                 Gross   Budget   Abs ROI   % ROI
The Dark Knight         533      185       348     188
Iron Man                318      140       178     127
Indiana Jones IV        317      185       132      71
Hancock                 228      150        78      52
WALL-E                  224      180        44      24
Kung Fu Panda           215      130        85      65
Twilight                191       37       154     416
Madagascar 2            180      150        30      20
Quantum of Solace       168      200       -32     -16
Horton Hears a Who      155       85        70      82

(Gross, budget, and absolute ROI in millions of dollars; grosses are 2008 U.S. theatrical.)


Now compare that to MMORPGs. Figures for the specific development costs of the top 10 2008 MMORPGs weren't available, but let's say they were all at the top end of development costs ($5M each). The film figures above show that the top 10 films netted an average of $108.7M each, for an aggregate %ROI of 75% (total net over total budget) - and again, that's in their release year, after which there's a steep drop-off. Compare that to the top MMORPGs, which netted an average of $245M each, for a %ROI of 4,900% - and that's in consistent follow-up years, not release years. Add to that the nature of the medium, which allows video games to control piracy and distribution better than movies can, and I'm amazed that movies still dominate public attention relative to video games as much as they do.
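For anyone who wants to check the arithmetic, here's a quick script that reproduces the figures above from the table (the $5M development cost per MMORPG is the assumption stated in the previous paragraph, not a measured number):

```python
# Reproduce the film-vs-MMORPG comparison. Grosses and budgets are in
# millions of dollars, taken from the table above (2008 U.S. theatrical).

films = {
    "The Dark Knight": (533, 185), "Iron Man": (318, 140),
    "Indiana Jones IV": (317, 185), "Hancock": (228, 150),
    "WALL-E": (224, 180), "Kung Fu Panda": (215, 130),
    "Twilight": (191, 37), "Madagascar 2": (180, 150),
    "Quantum of Solace": (168, 200), "Horton Hears a Who": (155, 85),
}

net = [gross - budget for gross, budget in films.values()]
total_budget = sum(budget for _, budget in films.values())

print(f"average net per film: ${sum(net) / len(net):.1f}M")         # ~$108.7M
print(f"aggregate film ROI: {100 * sum(net) / total_budget:.0f}%")  # ~75%

# MMORPG side: ~$2.5B total gross across the top 10 titles, and an assumed
# $5M development cost for each.
mmo_gross_total = 2_500      # $M
mmo_dev_cost = 5             # $M per title
mmo_net_avg = mmo_gross_total / 10 - mmo_dev_cost
print(f"average net per MMORPG: ${mmo_net_avg:.0f}M")               # $245M
print(f"MMORPG ROI: {100 * mmo_net_avg / mmo_dev_cost:.0f}%")       # 4900%
```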

Monday, June 21, 2010

The Greatest Thing That Ever Happened

Portugal-North Korea, 7-0. It seems that Juche is not an ideology suited to soccer. Thank you, Portugal. Surprisingly (to me at least), feelings in South Korea toward the North Korean team are quite sympathetic, if press accounts are to be believed.

Saturday, June 19, 2010

Kublai Khan Followed One of My Suggestions

Earlier I'd proposed that it would benefit the whole world if every written language adopted a modified Roman alphabet for its writing system, particularly languages written in Chinese characters (hanzi/kanji) or Arabic script. The idea is not so much to make everybody write using the Roman alphabet as to get everyone using some phonetic writing system, of which alphabets are the most compressed and therefore the easiest to learn. In fact, in one country whose language was written in Arabic script this has already happened - Turkey - though I don't know whether anyone has ever calculated the economic benefits of the change to Turkey. (Kazakhstan, another Turkic-language nation that seems to be considering such a change, has recently discussed it with the Turkish government.)

As it turns out, this has been tried for what is now the single biggest block of people using non-Roman characters. Interestingly, it was centrally imposed by the rulers of China, but not by the Han. Kublai Khan (a Mongol) commissioned a writing system - the 'Phags-pa script - from a Tibetan lama at his court, intended as an orthography for all the languages of China during the Yuan Dynasty. And indeed, it achieved some level of use - oddly, persisting longest as a liturgical orthography - but the Ming Dynasty rejected it, and it disappeared from history.

Undocumented Harvard Student Not Deported

Information here. Is there any irony to be had in the near-universal public indignation about a different kind of "undocumented" Harvard student being thrown out of the university? ("String him up! Oh, he's an illegal immigrant, not an illegal applicant? Well, um, I guess that's different then.") Forgive the condescension, but it's warranted where questions are studiously avoided by those of us who don't like to question our assumptions.

Here's one angle: is there a moral difference between those two situations? Furthermore, is the U.S. right to selectively keep an undocumented (foreign national) Harvard student but not a 52-year-old landscaper? (Answer: yes.)