
Thursday, January 4, 2024

A Brief Sketch of History: Subdivisions of the Iron Age

Large sedentary civilizations emerged where groups of people were forced into, and then rewarded by, central organization of labor - often in marginal environments (dry river valleys requiring irrigation, or, in the special case that generalizes the rule, rice farming). Because of the more rapid diffusion of ideas, the Silk Road regions (Asia, the Middle East, Europe, North Africa) were advantaged over the rest of the world. (Hence, the description here focuses on the Silk Road macro-region.) Europe, in turn, was advantaged over China because geography predisposed it to the formation of multiple small states, which acted like incubators for cultural selection; and Greece was advantaged over the rest of Europe given its peninsulas, mountains, and islands. This arrangement still allowed eventual cultural diffusion, and Europe overtook China only with the printing press. The other factor that allowed Europe's situation to obtain was the separation of religious/moral authority from secular authority - a Pope and kings - as opposed to the unity of these institutions throughout much of Islamic history, and the relative marginalization of religion in East Asian history.

States demonstrate a natural cycle of 200-250 years. Even if a nation with the same name, on the same territory, with the same people, lasts longer than this, there is typically a transition period. The natural experiment of a large state on a fertile plain showing this pattern is China, but Rome also demonstrated it in the Principate and Dominate periods, the Ottomans followed a similar pattern, and it can be seen elsewhere as well. In these rough divisions I am focusing on the Silk Road regions of the Old World - China, the Middle East, North Africa, and Europe.


THE IRON AGE: 1200 BCE-1800 CE

Early Iron Age: 1200 BCE-500 CE. Began in Europe with the Bronze Age Collapse and saw the rise of multiethnic administrative empires and coinage, and thus the Axial Age. In China, this contains the end of the Shang Dynasty as well as the rise and fall of the Zhou and Han Dynasties. In Europe this can be further divided into Early Iron Age 1 (1200-600 BCE), featuring palace economies, and Early Iron Age 2 (600 BCE-500 CE), with the later development of oligarchic rule and early market economies.

Middle Iron Age: 500-1500 CE. Roughly co-extensive with the medieval period. In Europe this begins with the collapse of the Western Roman Empire, the weakening of the Eastern Roman Empire, the collapse of Persia, and the spread of Islam. In China, it begins earlier, with the end of the Han Dynasty. The Middle Iron Age is characterized by the spread of supra-ethnic philosophies and the dissolution of large empires, which can be thought of in ecological terms as cooperation within empires no longer exceeding competition within them: oligarchies quarreled amongst themselves, and social or ethnic outsider groups benefited from cultural diffusion (Germanic tribes in Europe, or the Yellow Turban Rebellion in China). This period is marked by states and peoples developing a sense of identity if not patriotism, and especially by nomads occasionally overwhelming established states; the Mongols were the military high-water mark of nomads in history, and their decline signals the end of this period.

Late Iron Age: 1500-1800 CE. In the West this contains the Renaissance, the Protestant Reformation, the Age of Discovery, and the Enlightenment. In Russia, it starts with the Great Stand on the Ugra and eastward expansion. In China, the parallel marker is the end of the Yuan Dynasty. Ultimately it begins with the spread of gunpowder, as seen in the gunpowder empires, as well as the printing press, which had its greatest impact in Europe both for the good (Europe's domination of the world starting in this period) and the bad (religious civil wars, as Northern Europe could communicate more easily). At this time, the technological advantage of sedentary societies began to overwhelm that of nomads. Simultaneously, technological innovation in the crucible of a sort of geographically enforced natural federalism allowed Europe to outstrip China and colonize the world. The use of gunpowder as a source of energy more powerful than human or animal muscle anticipates the Industrial Age. Like the Mongols, Napoleon was the high-water mark of Iron Age warfare, and he was ultimately undone by the home of the Industrial Age, the United Kingdom.


Though not the focus of this post, the Industrial Age comes with its own subdivisions: a first wave in 1790-1830, with steam and water power leading to factories, materials extraction, and textiles; an interim with three "transition wars" in the West (the Crimean War, the American Civil War, and the Franco-Prussian War; the Taiping Rebellion still looked very much like a late Iron Age religious war, a Thirty Years' War compressed into half the time); and a second industrial revolution in 1870-1910 that converted industrial power into consumer goods. This culminated in World War I, the first industrialized war.

Thursday, April 5, 2018

The Great Stagnation: Problems Are Harder, and/or Talent is Misallocated

On Rationally Speaking, Julia Galef interviews Michael Webb about increasing research inefficiency - for example, Webb cites the statistic that today, getting another Moore's-Law doubling takes twenty times as many researchers as it did in the 1970s. The inefficiency isn't obvious, because research is still producing improvements at the same rate - it just consumes more and more resources to maintain that rate. He uses the analogy of mining, where you have to keep going further and further into the ground to get to the gold, or the coal, or whatever it is. The longer the mine is operating (assuming a single central shaft), the bigger this distance gets:
[The pre-work you have to do in order to make a contribution] is a lot further today than it ever was. The amount of knowledge you have to have as a scientist to be able to get to the frontier, to make these contributions, is just so much larger today. And you can see this from the amount of time it takes to do a PhD, how old an inventor is at the time they first take out a patent, the size of research teams. Ben Jones, he's a fantastic economics professor at Kellogg, has papers that document these things.

That means that individuals could either end up spending more time studying, which is what you see in PhD length, or they just focus on narrower and narrower fields. For a given amount of time, you only learn something about a much, much narrower field. Which might mean that you just have less good insights if it turns out that, for all your progress, the fields... the wider field you have to be combining with some knowledge from quite distributed science.
I had previously argued for exactly this idea as an explanation for technological stagnation (or, prior to that, increasing research inefficiency), and with admitted nerve called this ultimate economic heat death "Caton-Schumpeter stasis."
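
To make the "same output, more inputs" point concrete, here is a back-of-the-envelope sketch - my own toy calculation, not a figure from the podcast - that takes the twenty-fold increase in researchers quoted above and assumes it accumulated over roughly 45 years:

```python
# Toy calculation: if the same output (one Moore's-Law doubling every
# couple of years) now requires 20x the researchers it did in the 1970s,
# per-researcher productivity has fallen by a factor of 20 over that window.
RESEARCHER_MULTIPLE = 20   # figure cited above
YEARS = 45                 # assumed window, roughly mid-1970s to late 2010s

annual_decline = 1 - (1 / RESEARCHER_MULTIPLE) ** (1 / YEARS)
print(f"Implied decline in per-researcher productivity: {annual_decline:.1%} per year")
# -> roughly 6-7% per year under these assumptions
```

Under those assumptions, productivity per researcher decays at a steady exponential rate even while headline output (the doubling cadence) looks unchanged - which is exactly the mining picture: constant ore extracted, ever-longer shaft.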

Another factor is the availability of talent. This argument assumes that talent is unevenly distributed in the population and that it is a real constraint on technological progress. From that assumption follow two further ideas: talent dilution and talent mis-selection.

Talent dilution is the idea that there are only so many Fermis and Oppenheimers, and that adding more people to the research endeavor has negative marginal utility: the otherwise productive people are overwhelmed with meetings and emails and swamped by mediocrity. This is actually optimistic, as it suggests that we could restore research productivity by restricting the size of research teams. That this is not already happening suggests either that the idea is wrong, or that the people putting the teams together have perverse incentives (quite possible) which, since these are mostly private-sector endeavors, somehow overwhelm the profit incentive without unsustainably driving the enterprise into the ground - which seems hard to believe on its face.

Talent mis-selection is a little more subtle. The track to become a physical scientist or semiconductor engineer in the mid-20th century was not as "artificial" (i.e., externally imposed) and clear as it is now. The cause of your having a career in STEM was likely early achievement in that field, because your primary motivation was to explore things in STEM, not to make money or move up in a hierarchy.* Getting good test scores, being a well-behaved student, and knowing how to game your applications are probably much more important now than they were then, and may not be sorting for the most productive talent. On top of this, the world today is just a lot more interesting, with a lot more (easy!) options, for someone who's good at quantitative thinking, and the best may not be going into research - they're going to Wall Street or heading to startups. (There are pretty solid statistics that med school applications drop when the economy is good and vice versa - I'd wager the correlation is even stronger for physical science and engineering graduate programs.) By selecting for the type of person who spends their first quarter century of life collecting prestige coupons, climbing hierarchies, and gaming applications, you are very likely selecting against exactly those people who will be most productive in STEM, i.e., the kind of person who is directly motivated and rewarded by work in STEM. (For a great discussion of the gap in social cognition, or lack thereof, between STEMmy and other types of people, see this Slate Star Codex post.)

To put a finer point on the idea of talent mis-selection, let's look at another domain of achievement. Imagine a national program claiming to identify "the nation's top talent in military conquest," complete with an entrance exam and rigorous interviews. You need a reference from a military historian. Those not wearing a tie to their interview are shown the door for their disrespectful and nonconformist behavior. How likely would this process be to find the next Genghis Khan or Hannibal? The most interesting thing for a real potential conqueror to do would be to go wherever there is active conflict, and the "successful applicants" would likely be annihilated in a real war by the person who went to Syria and became a warlord.

[Update: you may be aware of the Thiel Fellowship, where students are paid to drop out of college and pursue a business. Business Insider has been following up on how its fellows have been doing. The reporting certainly shows survivorship bias, since I don't see a clear "out of Y fellowships awarded, X are currently successful outside education" - so we don't know the denominator - and a lot of these students would have been successful anyway. Still, I suspect the fellowship as an intervention is increasing the rate. But what gets measured gets addressed, which is why every metric ends up getting gamed, and looking entrepreneurial is no exception; people are no doubt trying to game the fellowship, and we're back to the mis-selection problem again. "Entrepreneurship has become a line you put on your resume," Thiel says, apparently non-ironically - and to paraphrase Thiel's complaint, in the businesses his fellows are founding, he's getting lots of Facebooks but few flying cars (one exception in this list here).]

[Added later: here's a great article summarizing a paper that simulated how the funding and promotion incentives facing scientists are degrading the average quality of work and, unsurprisingly, producing a reproducibility crisis in multiple fields. This would be true even with good talent allocation, because the entire system is selecting for publishable but not necessarily true findings. You might call this third problem talent distraction.]
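
The mechanism is easy to caricature in code. Below is a minimal toy sketch of that kind of selection dynamic - my own illustration, not the model from the paper - in which labs are promoted on publication count rather than rigor; all the numbers are arbitrary and exist only to make the dynamic visible.

```python
import random

# Toy sketch (not the paper's model): labs vary in methodological rigor.
# Low-rigor labs run cheaper, sloppier studies, so they publish more, but
# a larger share of their findings are unreliable. Each generation the
# most-published labs survive and spawn imitators, so selection acts on
# publication count, never on truth.
random.seed(0)

N_LABS, GENERATIONS = 100, 50
labs = [random.uniform(0.2, 0.9) for _ in range(N_LABS)]  # rigor in (0, 1)

for _ in range(GENERATIONS):
    # Sloppier labs get more papers out per unit of funding.
    papers = [int(20 * (1.2 - rigor)) for rigor in labs]
    # Keep the most-published half; each survivor spawns one imitator
    # whose rigor is inherited with a little noise.
    ranked = sorted(zip(papers, labs), reverse=True)[: N_LABS // 2]
    survivors = [rigor for _, rigor in ranked]
    offspring = [min(0.95, max(0.05, r + random.gauss(0, 0.02))) for r in survivors]
    labs = survivors + offspring

mean_rigor = sum(labs) / len(labs)
print(f"Mean rigor after selection: {mean_rigor:.2f}")        # drifts toward the floor
print(f"Crude share of unreliable findings: {1 - mean_rigor:.2f}")
```

Nothing in this setup rewards false findings directly; selection on publication volume alone is enough to drive average rigor toward the floor, which, as I understand it, is the paper's point.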

[Historical example: in discussing the erosion of China's technological and administrative lead over Europe during the second millennium CE, Brad DeLong offers the following as one among many causes:
Perhaps the root problem was that with triple-cropping rice strains the wet-rice fields were too fertile, the governmental bureaucracy too effective, and the avenues of establishment-oriented upward mobility to the striving and aggressive too open. After making a little money the logical next step was to buy some land. Because the land was rich, because labor was plentiful and cheap, and because the empire was (most of the time) strong internally, one could live well after turning one's wealth into land. One could also easily make the important social contacts to pave the way for one's children to advance further. And one's children could do the most important thing needed for upward mobility: study the Confucian classics and do well on the examinations: first the local shengyan, then the regional juren, and then the national jinshi. Those who had successfully written their eight-legged essays and made proper allusions to and use of the Confucian classics would then join the landlord-scholar-bureaucrat aristocracy that ruled China and profited from the empire. In the process of preparing for the examinations and mastering the material needed to do well on them, they would acquire the habits of thought and values of a Confucian aristocrat landlord-scholar-bureaucrat. Entrepreneurial drive and talent was thus molded into an orthodox Confucian-aristocratic pattern and harnessed to the service of the regime and of the landlord class: good for the rents of the landlords, good for the stability of the government, but possibly very bad indeed for the long-run development of technology and organization.
This is a nightmarish real-world example of talent distraction, and it would also produce talent mis-selection; DeLong's thesis merits further study.]

*I'm all for scientists getting paid. A statistician once pointed out to me that if statistician jobs suddenly paid 10x more, you might not get the best statisticians - you would get the people best at obtaining large, stable paychecks signed by someone else, some of whom would hopefully be good statisticians.

Sunday, April 14, 2013

New Call to Regulate Drones: from Google Head Eric Schmidt

This is cross-posted at my technology blog, Speculative Nonfiction.

Article here. There's a clear motivation for governments and the enforcer class to have a monopoly on this technology, and Francis Fukuyama among others predicted some time ago that governments would soon start creating this monopoly (is this why Chris Anderson put his capital investment, a drone manufacturing facility, across the border in Mexico?), but why is this call coming from the private sector? I'm not sure what Schmidt is doing here. Is he just going on record stating his discomfort with drones so Google can distance itself from perceived vague connections to the abuses of the technology that are sure to come?

In any event, if you're uncomfortable with your neighbor having a drone, I'm ten times as nervous when the police are allowed to have drones but the rest of us are not.

Monday, March 4, 2013

First Law Making Drones Illegal for Individuals, Okay For Government

This is also posted at my technology blog, Speculative Nonfiction.

A Senator in New Hampshire - a Republican (more on this below) - has proposed a law that would make all aerial photography, except government aerial photography, illegal.

Francis Fukuyama warned that governments would soon become threatened by this technology, and outlaw it except for their own use in enforcement.

Regarding the New Hampshire Senator's party affiliation: I would like a party that safeguards individual rights, but the GOP is clearly not it at the moment.

Sunday, September 9, 2012

Drone Technology: Institutions vs. Individuals

In this Wired piece on the drone boom, Chris Anderson mentions that hobbyists are currently ahead of militaries. This is good, but it is not a permanent state of affairs. The clearest threat from drones is not the singularity (yet) but rather a world of ubiquitous surveillance if drones are only in the hands of states. Consequently we shouldn't be surprised when the legislatures of the world start making this technology illegal, except of course for state-controlled institutions. Francis Fukuyama, himself a drone enthusiast, has already expressed this concern. Drone hobbyists would be well-advised to set up online news alerts for "drone" and "legislation", and to organize pre-emptively in anticipation of the inevitable paranoia that politicians will promulgate through the media.

Maybe this is why Anderson's company, 3D Robotics, has its headquarters in San Diego and a new manufacturing center in Tijuana, just across the international border.

Friday, January 7, 2011

"Chavez Squeezes Scientific Freedom"

To the rallying cries of "Let's be more like West Virginia!" and "Let's be more like North Korea!", we might add "Let's be more like Venezuela!" The headline above is from Nature. (Scientists have noticed Chavez doing other questionable things before.) While Eric Cantor is not yet in Chavez territory, it's worth it (and fair) to ask him directly if he would like to be.

Saturday, May 1, 2010

Future Corporate Personhood: Union Pacific Meets Skynet

This is cross-posted at my hard science and science fiction blog Speculative Nonfiction.

In 1886, the U.S. Supreme Court issued a legal decision which is regarded as significant because it in effect granted corporations legal personhood. Southern Pacific Railroad was the defendant in a case brought by Santa Clara County, California (the modern day location of Silicon Valley).

We can all concede that this seems strange on its face. A corporation is a social and legal fiction that exists by fiat of its owners and stakeholders, and it has no free will of its own. By the same token, you can't get out of a car accident by saying (for example) "It was my mirror that clipped you, not me, and the mirror is the car's property, not mine." Likewise, your dog doesn't own his leash and your computer doesn't own your printer - you do - and you're responsible for them. Thinking about it this way makes it seem even more bizarre that a mere century and a half ago, in much of the U.S., you couldn't own property if your skin wasn't the right color, because you yourself were property (or could be). Again: a computer can't own a printer.

That the legal conceit of corporate personhood seems strange does not mean that it is bad. There are lots of mutually-agreed social hallucinations that have ended up benefiting their participants, materially or otherwise: social mores, nation-states, games, religions, and certifications. Some of these mutual hallucinations differ in that they are considered inarguably "real" constituents of objective reality outside of their human participants, while some are not. Some of them are more voluntary, artificial, and explicitly engineered for a purpose; the trend is toward these. This is a good thing, and corporations are a prime example. Everyone knows that a corporation isn't a person, but legal conceits are like the social equivalent of capital markets or enzymes: as long as it's above board, everyone winks and then is okay calling a spade a club until you get the loan, or you lower the energy of the transition state. Then you get over whatever barrier you had to wealth or energy generation, and everyone gets what they want.

Some seem concerned at the unnaturalness of these legal conceits and fear that once we legitimize such silliness as corporate personhood, we open the door to a future in which humans exist enmeshed in an increasingly byzantine network of such arrangements. This phobia is portrayed darkly in books like The Unincorporated Man, which attempts to convey a dystopia built on a future legal conceit that is the inverse of the Southern Pacific decision: individuals incorporate themselves and sell shares of themselves to investors. In fact, with the desire for wealth creation driving an increasing profusion of complex social arrangements, a world like that one is almost certain to come to pass, and furthermore I hope it does! No, I personally cannot imagine a world of personal corporatehood, and if I woke up in 2100, I'm sure I would have a hard time adjusting. That in itself is no argument against such an arrangement. In the same vein, Julius Caesar would have been equally clueless about (and perhaps frightened of) the concept of corporate personhood, though if he had been born in 1960 I bet he would get it just fine.

There is one concern I do have for the future of corporate personhood specifically that I haven't seen discussed and which I grant will seem esoteric. Corporate personhood is a safe legal fiction only when the property owned by the corporation is not equivalent in its abilities to a person. That is, there was no confusion about the bounds of property and person in 1886 or today. None of the steam engines sitting in Southern Pacific railyards had the potential to achieve a place in the Southern Pacific boardroom. This observation will seem less pointless when we recognize that some of the property of corporations will almost certainly, by the end of this century, be at least equal in decision-making ability to board members. In 2100 the steam engines or at least the computers running them will probably have a say in corporate governance. If a corporation consists of a single powerful computer, that corporation will then be a person, both legally and (de facto) cognitively.

If you've read Stross's Accelerando, it's hard not to think of the computer system that was constantly spinning multinational shell corporations around itself to protect its owner's interests. We can argue later whether these machines are "conscious", "intelligent", or any other adjective that interests you. But will we see incorporated expert systems with no human board members? Is this a threat to the human economy? Is this an argument for or against designing a constitution in a legal programming language that has to be compiled and can't execute until its internal logic is consistent? The utility of these legal fictions is that we live in the real world and we can reel them back in when they get too nonsensical or damaging; if and when corporate personhood is deemed a threat to human happiness and survival, we can eliminate the convention. We can't do this with a corporation that has literal vested interests of its own.

Thursday, April 29, 2010

A Different Model for Biomedical Risk and Innovation

One of the interesting things about the FDA as a government agency is that it largely seems to be trusted by Americans, and respected (if grudgingly) by the industry it regulates - possibly because its domain, medical science, is an area which requires non-fakeable technical expertise. Its "strange" respectability is highlighted in this book review.

J.S. Mill pointed out in On Liberty that medicine was a market ripe for failure, because most consumers of medicine are not in a position to evaluate its safety and efficacy. His observation presaged the problems that triggered the creation of the FDA (or its ancestor) in the early twentieth century: too many kids were dying of kidney failure, brought on by the alcohol and diethylene glycol (an antifreeze ingredient) that America's own witch doctors were using to sweeten their literal snake oil.

The drawbacks of having a central agency responsible for allowing the marketing of new drugs are several. First, like any agency, the FDA is not immune to politics: for example, post-TGN-1412, scrutiny of any cytokine-interacting antibody increased to an almost paranoid degree (in corporatese this is known as CYA). Perhaps more sinister on this count, it is often the case that a compound which receives marketing approval in its home market (where its developer is headquartered) does not receive it overseas (case in point: Paris-headquartered Sanofi-Aventis and its weight-loss drug rimonabant, approved in Europe with a few side effects and rejected in the U.S. for those same side effects. Coincidence?). Furthermore, all government agencies have limited resources, subject to the vagaries of central planning and economic swings. The decade began with an ominous trend of increasing approval times; this is the metric that individual biomedical companies care about. Much more alarming was the trend of decreasing numbers of new chemical or biological entities being submitted per year. This is the best index of overall biomedical innovation in a market, and it is affected not just by the agency but by the greater R&D environment of the industry - which, of course, anticipates increasing conservatism in the agency that makes or breaks it, since you don't spend hundreds of millions chasing a compound that can't be legally marketed. Fortunately the NCE and NBE numbers are turning around; in 2009 there were 9 NCEs and 13 NBEs, compared to 1 NCE at one point in the mid-2000s.

It's important to point out that the FDA approves drugs for a specific indication, not just for general sale, but that physicians are still allowed to prescribe them off-label. This creates strange pressures. First, the temptation for companies to actively market off-label is great: it's highly illegal, and highly lucrative (off-label marketing settlements in the last few years have easily reached half a billion dollars or more). More important, it creates dangers for patients. Physicians have the authority to prescribe off-label at their discretion, and they often do, particularly with psychoactive drugs. While the companies that develop the drugs are obligated to produce general safety information, they are under no such obligation to produce safety (much less efficacy) information for the off-label populations that physicians end up prescribing to - though they're stupid if they don't explore this as a potential new revenue source.

The point is this. By having an agency with such a structure - one that approves drugs for specific indications - we create these pressures and this false sense of security (that off-label use is as well-characterized as the labeled indication). An agency that approved a drug only for safety, not for efficacy, might be more beneficial to everyone. In other words, the FDA would require the company to label its drug as not harmful within certain dosing parameters and as contraindicated with certain other medicines. Since we're already trusting physicians to read NEJM and prescribe drugs on that basis, isn't this a far more efficient way to get drugs onto the market and to the patients who need them, without compromising safety? This discussion is a non-starter in the current administration, but perhaps in a vigorous developing economy with a taste for social experimentation we might see whether this would provide a greater good for a greater number - that is, increase the speed of innovation without increasing risk to patients. Where ongoing drug research is concerned, overconservatism and the slowing of innovation are absolutely ethical problems, because we can't forget the patients who haven't yet gotten sick.