Dust flux, Vostok ice core
Two-dimensional phase space reconstruction of dust flux from the Vostok core over the period 186-4 ka using the time-derivative method. Dust flux is on the x-axis; its rate of change is on the y-axis. From Gipp (2001).

Wednesday, April 2, 2014

From the small to the big: earthquakes, avalanches, and high-frequency trading

I've been talking about scale invariance a lot lately. I became interested in the topic quite a few years ago in the context of geological phenomena like earthquakes and avalanches. The Gutenberg-Richter law describing the size-frequency relationship for earthquakes was one of the first natural laws based on scale invariance, but interest in the topic really picked up with the Bak et al. paper in 1987 (pdf - may only be a temporary link).

The cause of this relationship is still foggy, as is the physical mechanism connecting small and large earthquakes. The best proposed explanation is that a scale-invariant distribution of events allows for the most efficient flow of energy (and information) through the system (though it isn't clear why that should be so).

So back in the early '90s I was estimating recurrence intervals for certain hazardous events, and I started trying to work out a methodology for detecting scale invariance in the geologic record. Using the Gutenberg-Richter law, you can estimate the likelihood of a large earthquake in an area from the number of small earthquakes. There were interesting implications for areas where the recurrence interval of large earthquakes is longer than the local recorded history (as in much of Canada). At the time, seismic hazard maps produced by the USGS showed significant earthquake risk in zones which mysteriously ended right at the Canadian border.

One of my classmates in my undergrad days (we're back in the 80s, now) studied the correlation between microquakes and fluid injection at oil extraction operations in southwestern Ontario. The oil companies were surprisingly cooperative until they understood the point of the research, after which they started to withhold data.

And here is the mystery. The principle of scale invariance in earthquakes would suggest that increasing the number of small earthquakes should increase the number of large earthquakes, at least in the short term. Yet our understanding of earthquake dynamics tells us that lubricating the fault should allow stresses to be relieved through small earthquakes, which should reduce the chance of a large quake in the long run. (This idea has been proposed at various times over the past fifty years, but for obvious reasons it has never been deliberately pursued.)

By the early 2000s, other geophysicists (notably Didier Sornette, but there were others) had moved a portion of their data-processing expertise into studying econometric time series. I made this move later, as I gradually came to appreciate the key problem with developing quantitative techniques when the data are suspect. First, the measurements themselves are inaccurate. More importantly, our estimate of the timing of each observation is just that--an estimate. Most quantitative methods assume that the observations are evenly spaced in time; failing that, they assume you at least know the timing of your observations. The consequences of errors in timing are severe, and frequently underestimated. It is difficult to develop excellent quantitative methods when the data are terrible.

The big advantage of working with economic time series--pricing data in particular--is the elimination of these observational errors. When a transaction occurs, there is no doubt about either the price or the time--right down to the millisecond scale.

I started looking at market macrostructure because, several years ago, nothing interesting ever happened on a scale of less than about an hour. Until just the past few years, that is. Suddenly, strange, rich, unusual behaviours began to occur in individual stock prices, and even indices, on the millisecond scale. I didn't know what was causing it--but it sure was interesting.


Three seconds on the tilt-a-whirl.

This was the signature of onset of HFT. I was initially interested in it for entirely different reasons than most of you. After Crutchfield's (1994) paper (pdf) on emergence, I had been pondering the idea of how to recognize a fundamental change in a complex system. Again, my interest was in the earth system as a whole, and how to recognize whether or not new observations were pointing to a fundamental change in its mode of operation.

Given our understanding that the number of large avalanches is positively correlated with the number of small avalanches, it seems pretty clear that (as Nanex and Zerohedge have been saying) the damaged market microstructure is mirrored in the increasing number of flash crashes since Reg NMS. Unfortunately, our murky understanding of how the microstructure causes the macrostructural changes can be used by the regulatory authorities to avoid investigation. They can't see a smoking gun.

We would normally expect the micro-crashes to eventually relieve imbalances in the system, improving its long-run stability. (Perhaps this is how the SEC justifies the practice.) But unlike earthquakes and avalanches, these uncountably many small crashes are not reducing the imbalances. One reason is that the cause of the imbalances is separate from HFT--the dollars keep being shoveled to the top of the mountain as fast as, if not faster than, HFT brings them cascading down. Another reason is that the trades (mostly) get unwound--the exchanges push most of the snow back to the mountaintop after the avalanche.

HFT certainly benefits unfairly from the system, but isn't responsible for it. If anything, it is a symptom of corruption--but the cause of the corruption is elsewhere.

Accordingly, my modest proposal for dealing with HFT is this--nothing. Don't bust trades--let them stand. I'd be curious to see the response of the various Ivy-League endowment funds and pension funds when they suffer brutal, near-instantaneous, multi-billion-dollar losses. At a guess, I would probably hear the screaming up here. How would real companies, producing real products, react to a sudden monkey-hammering of their stock price, especially if it triggered debt covenants? Maybe they would all exit the market en masse. It might even force a real change.

Sunday, March 30, 2014

Scale invariance in the changing economics of resource extraction

Some simple discussions today that follow from our last exciting episode.


First issue - there is a limit to the size of deposits (given our current state of understanding). For gold, you can't have a hydrothermal flow system with a radius of hundreds of km--the crust is too thin. Also the crust has too many heterogeneities, which can each trap some amount of the gold in a circulating system. So at some point, the probability density for the right tail has to drop off a cliff, instead of declining steadily forever.

There are some interesting ideas about the Witwatersrand that invoke ore-forming processes, no longer active, which could have formed deposits over scales of hundreds of km.


As an example, I have plotted the size distribution of reported deposits in Nevada (pdf here). It is a graph which should mimic the white hyperbola in the first figure. It might look better if we had a lot more deposits to work from. The smallest deposits on this chart were only about 2,000 ounces--and one of them had already been mined out. I would naturally expect far more accumulations of gold in that size range in Nevada--but for economic reasons, only two have had enough work done on them to define a resource.

Second point is that size isn't everything. There are quality issues to consider as well. For instance, conventional thinking suggests there is little appetite for financing mining operations on gold deposits smaller than 2 million ounces. Anecdotally, however, there is increasing interest in financing small, near-surface oxide deposits because their capex and operating costs are both low, recovery rates are high, and their long-term environmental legacy costs are likely to be low. Similarly, grade affects the economics in a more complex manner than we can capture in the above figures. What might work would be to classify the deposits by grade or type, and create the same type of plot--but that is a project for another day.

Third issue--obviously, the economics of the extraction business don't stay constant. There are technological breakthroughs, making extraction cheaper. Or the commodity price rises. These change the location of the left limb of our hyperbola, making a whole new group of deposits (generally among the smaller of them) economically attractive. But some large, hitherto uneconomic, deposits may become economic as well (I'm not going to name any names).


Wednesday, March 26, 2014

Scale invariance and the "fat tails" problem

A good deal of the statistical description of populations is based on the normal distribution. I think this is because the first things we tend to notice (the variability in the sizes of people and animals) tend to have such a distribution. The height of Canadian men averages about 1.74 m, and the variation about that mean typically follows a bell curve, such that the probability of a man being 2.1 m tall, for instance, is much lower than the probability of his being 2.0 m tall. There are well-established physiological reasons why people will not be much taller (or very much shorter, discounting factors such as amputations), so we can discount the existence of 3.5 m tall men.

One way of displaying the normal distribution is through a normal probability plot: a graph in which the vertical axis is scaled so that the cumulative probability of a normal distribution plots as a straight line. There is special graph paper you can use, with an appropriately scaled vertical axis, variously called probability paper or probability plotting paper (pdf). A description of its use with data appears here (pdf).
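If you would rather skip the graph paper, the same plot can be sketched in a few lines of Python; the heights below are synthetic numbers invented for illustration.

```python
# Sketch of a normal probability plot; the height data are invented for illustration.
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt

heights = np.random.normal(loc=1.74, scale=0.07, size=500)  # synthetic "Canadian men"

# probplot orders the data and plots it against the quantiles expected under a
# normal distribution; data from a normal population fall on a straight line.
stats.probplot(heights, dist="norm", plot=plt)
plt.xlabel("Theoretical quantiles")
plt.ylabel("Height (m)")
plt.show()
```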

If we are looking at natural phenomena with a wide variety, it is likely the distribution will be log-normal.

A normal distribution is described well by a mean and a standard deviation. If we plot the logarithm of the probability density, we observe a parabola, with the maximum probability density corresponding to the mean.

The concept of the normal distribution was so powerful that we naturally carried it over to describe other phenomena, for which there are no such limits on size. Landslides, for instance, like the current one in Washington state, or earthquakes. Our current understanding of such events is that they exhibit scale invariance, which means that there are normally many more small events than large events, and the frequency of larger events is related to the frequency of the smaller events through their size on a logarithmic scale. In particular, the size-frequency distribution plots as a straight line on logarithmic axes.


As economic value shapes whether or not an accumulation of mineral is considered a deposit, mineral deposits only show scale invariance over a limited range. The number of, say, 50-oz accumulations of gold in nature is extremely large, but these are very unlikely to be of economic interest. On the other hand, 50-million-ounce accumulations are much rarer, but are far more likely to be economically viable, and are thus more likely to constitute a "deposit". The size distribution of deposits is controlled by these two contrasting probabilities, and the resulting distribution is log-hyperbolic: the logarithm of the probability density appears as a hyperbola.


Hyperbola, parabola--what's the difference? Well, the differences are slight over much of the probability density plot, except at the tails. Of course, those tend to be the most memorable events (well, at the large tail).


Perhaps this doesn't look too impressive to you. But the differences in the tails can be extreme, especially for the most extreme events. The reason is that although the magnitude of the slope of both curves increases as you move away from the centre, in the case of the hyperbolic distribution the slope approaches that of the guiding lines (the asymptotes), whereas the slope of the parabola increases without limit. The discrepancy in estimated probabilities for extreme events can be orders of magnitude!
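To put rough numbers on that discrepancy, here is a small sketch comparing the tail of a normal distribution with a distribution whose log-density runs along straight asymptotes (a Laplace distribution stands in for the hyperbolic case; the unit scales are arbitrary, so only the contrast matters).

```python
# Tail probabilities: parabolic log-density (normal) vs. straight-line
# log-density tails (Laplace, standing in for the hyperbolic case).
from scipy.stats import norm, laplace

for k in (2, 4, 6, 8):        # distance from the centre, in scale units
    p_thin = norm.sf(k)       # P(X > k) for the normal
    p_fat = laplace.sf(k)     # P(X > k) for the straight-tailed distribution
    print(f"{k} units out: normal {p_thin:.1e}   straight-tailed {p_fat:.1e}   "
          f"ratio {p_fat / p_thin:.1e}")
```

Eight units out, the straight-tailed estimate is larger by a factor of roughly a hundred billion.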

This is a possible explanation of the "fat tails" problem that comes up from time to time in discussions of extreme events (recent economic events, for instance). IIRC, the failure of Long-Term Capital Management had been estimated as extremely unlikely, as its risk model showed a maximum daily loss of $35 million. Losses eventually greatly exceeded the model maximum.

The implication of this distribution is happier for geologists--it means the probability of discovering a large deposit is greater than is frequently assumed.

For instance, this is from what appears to be a Shell training document (large pdf) on the role of play-based exploration in the decision-making tree (the image is on pg 45).


The straight line is the log-normal distribution fit to the observations (squares). The model fit predicts that only 1% of discoveries will be larger than 175.5 million barrels of oil equivalent--but the observed data suggest that about 1.5% of discoveries are greater than about 350 million barrels.

Using the model to estimate the probability of a large discovery probably satisfies the accountants as being nice and conservative, but considering the potential economic importance of individual large discoveries, using the incorrect probability model may create a significant opportunity cost if it results in an area play being discarded incorrectly.

I know some folks in the oil industry--and they can be a cagey lot, especially about something that influences their business plan. So it wouldn't be unheard of for the above document (as it is publicly available) to be deliberate misinformation. I have made enquiries, but so far no one will admit to knowing what I'm talking about.

Anyway, the play-based exploration idea is something I alluded to last time--but I don't see it entering the playbook for mining companies until the costs of failure in mining exploration more closely resemble those of petroleum exploration--something that I think is still a few decades away.

Saturday, March 22, 2014

Scale invariance of mineral wealth--the exploration conundrum

I was dreaming when I wrote this. Forgive me if it goes astray.
Part of the reason I started this blog was to work through some ideas. Writing them and seeking comment while they are still forming seems to be an ideal use of interweb pipes.

I have written here and here about scale invariance in gold deposits--mostly on a global scale, using various data sources (pdf), including this one (pdf). What to do with this information?

The most common question is "what is the largest gold deposit left to be discovered?" Unfortunately, the answer is probabilistic. There will be a fairly low probability that the largest gold deposit still to be discovered is larger than the largest found to date. A more meaningful question might be "what is the typical size of a gold deposit that remains to be found?" Nobody seems to be interested in that one. Typical deposits are for other people to find. They are going to find the largest one.

As above, so below. Given sufficient data, the analysis can be repeated for separate structural provinces, or for particular trends. At present, there is limited interest in this approach (pdfs), but it may be because it is not completely clear how to best use the information obtained by the analysis. Mining companies don't really make decisions to investigate a general area on these sort of criteria.

Presently, most mining companies decide to get ground on wholly different criteria. They select a commodity not necessarily based on their expertise, but because the market appears to favour it. They select a locality on the basis of its current popularity (bonus points for recent spectacular discovery), political stability, the ease (or cost) of acquiring properties, their personal interest/familiarity with the region, or the availability of infrastructure. Just check the websites of some junior mining companies.

Oil companies, on the other hand, use this type of data in a process called play-based exploration. "Play" refers to a prospective area, not what the geologists do. The idea is that through studying the distribution of the sizes of known oil deposits within a field, a company will estimate the probability of discovering a pool of oil of a given size, balance that against the probable losses accumulated during exploration, and decide whether or not to proceed. This is entirely different, and separate, from the analysis of any individual prospect within the play.
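As a rough sketch of the arithmetic behind such a decision (not any company's actual method), the toy model below samples discovery sizes from an assumed Pareto (power-law) distribution and weighs the expected value of a drilling campaign against its cost; every parameter is invented.

```python
# Toy play-based exploration decision; all parameters are invented.
import numpy as np

rng = np.random.default_rng(0)

n_prospects = 20          # prospects to be drilled in the play
p_success = 0.15          # chance any one prospect is a discovery
cost_per_prospect = 30.0  # $M per prospect, dry hole or not
value_per_mmbbl = 5.0     # $M of net value per million barrels discovered

# Assumed Pareto size distribution for discoveries (shape and minimum size invented).
shape, min_size = 1.2, 5.0   # million barrels

def simulate_play():
    n_found = (rng.random(n_prospects) < p_success).sum()
    sizes = (rng.pareto(shape, n_found) + 1.0) * min_size
    return sizes.sum() * value_per_mmbbl - n_prospects * cost_per_prospect

outcomes = np.array([simulate_play() for _ in range(10_000)])
print(f"mean outcome {outcomes.mean():.0f} $M, probability of a loss {(outcomes < 0).mean():.2f}")
```

The decision to enter the play turns on whether the distribution of outcomes, not any single prospect, looks acceptable.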

An analogy within the mining industry would be to estimate the typical size of a gold deposit in a place like Kazakhstan, and to use that information to decide whether or not to look for ground to acquire there. The mining industry is not at that point, largely because the costs of failure are nowhere near as high as the corresponding costs in the oil industry.

Oil companies went this route as the costs of dry holes escalated over the past few decades, and they began to lose money on plays, despite having success with individual prospects. 

Wednesday, December 4, 2013

There's no terror like state terror

. . . we study the frequency and severity of terrorist attacks since 1968. We show that these events are uniformly characterized by the phenomenon of scale invariance, i.e., the frequency scales as an inverse power of the severity, . . .
                                             Clauset et al., 2007 (pdf)

As we enter this season of peace, I find myself reflecting on war. And scale invariance.

The work cited above is old, and has been digested for some time. To recap, the frequency of terrorist events varies inversely as the square of the severity (typically measured in casualties)--and this relationship is independent of the time interval selected, the targets, the weapon type, or the responsible group. Even massive attacks, such as the September 11 attacks, do not represent outliers, but form part of the statistical continuum of "normal" terrorism.


I've extended this graph to include a few other events.


In this chart, D represents recent estimates of the deaths during the Dresden firebombing, N1 represents deaths from the nuclear bombing at Nagasaki, T represents deaths during one particular firebombing raid of Tokyo, H represents deaths from the nuclear attack of Hiroshima, and N represents deaths during the massacre of Nanking.

We commonly carry out similar analyses for the purposes of risk assessments for natural hazards such as earthquakes. If we know the recurrence interval for small events, we can estimate the recurrence interval of very large events, provided the size-frequency distribution is characterized by scale invariance. We can carry out a similar assessment here. Unfortunately, we don't really know the recurrence interval of an event like the September 11 attack--but let us assume here that September 11 represents the largest terror attack one would expect in any 25-year period.

If so, then the recurrence interval for a Dresden would be 2,500 years; for Nagasaki, about 7,500 years; for Tokyo, about 10,000 years; for Hiroshima, 15,000 years; and for Nanking, about 50,000 years. I note that all of these events happened in the last century.
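The arithmetic behind these recurrence intervals is just the inverse-square relationship quoted from Clauset et al.; the sketch below shows the calculation, with rough placeholder death tolls (estimates for all of these events vary widely, and the result depends entirely on the figures you choose).

```python
# Back-of-envelope recurrence intervals under an inverse-square severity law.
# Death tolls are rough placeholders; estimates for these events vary widely.
base_deaths = 3_000     # approximate September 11 toll
base_interval = 25.0    # assumed recurrence (years) of an event that size
alpha = 2.0             # frequency ~ severity**(-alpha), so interval ~ severity**alpha

events = {"Dresden": 30_000, "Nagasaki": 50_000, "Tokyo firebombing": 60_000,
          "Hiroshima": 75_000, "Nanking": 130_000}

for name, deaths in events.items():
    interval = base_interval * (deaths / base_deaths) ** alpha
    print(f"{name}: roughly {interval:,.0f} years")
```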

It seems likely that these state-sponsored events happen on their own frequency curve, which goes to show that nobody can do terror like the modern State.

Tuesday, November 26, 2013

NRH gold deposits follow-up - still lots more gold out there!

Once again the good folks at Natural Resource Holdings (this time teamed up with Visual Capitalist) have updated their report (pdf) listing gold deposits greater than one million ounces in size.

In earlier postings I discussed briefly the expectations for the size distribution of gold deposits, using an earlier list published by NRH and historical Nevada data as examples. My conclusion was that the size distribution of gold deposits follows a scaling law over at least a couple of orders of magnitude. There is a maximum size for gold deposits, because hydrothermal cells can likely only be so large before they become unstable and divide into smaller cells, leading to the gold of one (natural) deposit being scattered over several discrete (economic) deposits. So how do we count them?

There are minimum sizes for deposits as well, primarily for economic reasons. So our scaling law only seems to be valid over a pretty limited range.


The yellow line is a possible scaling law to describe the size-distribution of gold deposits. Interestingly, its slope is 1 (pink noise), a very common scaling law in physical systems. It is quite different from the slope of 1.5 obtained from Nevada deposits. I'm not sure how to explain this, except that the Nevada deposits are almost exclusively of one type, whereas the global deposits represent all known settings.

As before, I don't expect that we will find many more huge (> 35 M oz) deposits; but there is potential to fill in the gap below the line in the 1 M oz range. From the above graph, we would still expect to find at least 400 more deposits, mostly in the 1 - 3 M oz range.

In reality, the number will likely be higher, as the census is still not complete. It would be foolish to assume there are no deposits in Antarctica, for instance, even if climate and politics make their exploitation unlikely. There are also numerous deposits on the seafloor, even if it may be a long time before control systems reach the point where they can reliably distinguish between ore and waste material while more than a km underwater.

All of this suggests that the yellow line needs to be shifted upwards--which opens up the possibility of many more deposits in the >10 M oz category still to be found. No guarantee on costs of all these, sorry.

---
Almost forgot to h/t Otto - although I'm sure I would have noticed this eventually.

Saturday, November 23, 2013

Interpretation of scaling laws for US income

It has been remarked that if one tells an economist that inequality has increased, the doctrinaire response is "So what?"
                                          - Oxford Handbook of Inequality

h/t Bruce Krasting

Social Security online has published a full report on income distribution in America.

Two years ago we looked at the distribution of wealth in America. Today we are looking at income.


There were a total of about 153 million wage earners in the US in 2012, which is why the graph suddenly terminates there.

As we have discussed before, in self-organizing systems, we expect the observations, when plotted on logarithmic axes, to lie on a straight line. Casual observation of the above graph shows a slight curve, which gives us some room for interpretation.

I have drawn two possible "ideal states"--the yellow line and the green line. Those who feel the yellow line best represents the "correct" income distribution in the US would argue that the discrepancy at lower incomes (below about $100k per year) represents government redistribution of wealth from the pockets of the ultra-rich to those less deserving. Followers of the green line would argue the opposite--that the ultra-wealthy are earning roughly double what they should be, based on the earnings at the lower end.

Which is it? Looking at the graph you can't tell. But suppose we look at the numbers. Adherents of the yellow line would say that roughly 130 million people are getting more than they should. The largest excess is about 40%, so if we assume that on average these 130 million folks are drawing 20% more than they should (thanks to the enslavement of the ultra-wealthy), we find that these excess drawings total in excess of $1 trillion. Thanks, Pluto!

The trouble with this analysis is that the ultra-wealthy--the top 100,000--earned a combined total of only about $400 billion. They simply aren't rich enough to have provided the middle class with all that money.

Now let's consider the green line. Here we are suggesting that the ultra-wealthy are earning about twice as much as they should be, and let's hypothesize that this extra income is somehow transferred from the middle and lower classes.

As above, the total income of the ultra-rich is about $400 billion. If half of this has been skimmed from the aforementioned 130 million, they would each have to contribute about $1500.
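The arithmetic for both interpretations fits in a few lines. The $40k typical wage below is my own assumption, used only because it reproduces the rough $1 trillion figure quoted above; the other numbers are those quoted in the text.

```python
# Rough check of the two interpretations, using the round figures quoted above.
n_middle = 130e6       # wage earners sitting above the yellow line
top_total = 400e9      # combined income of the top ~100,000 earners ($)
typical_wage = 40e3    # assumed typical wage of the 130 million ($)

# Yellow line: the 130 million draw ~20% more than they "should".
excess_total = n_middle * typical_wage * 0.20
print(f"yellow line: excess drawings ~${excess_total/1e12:.1f} trillion, "
      f"but only ${top_total/1e9:.0f} billion exists at the top")

# Green line: half of the top earners' income is skimmed from the 130 million.
per_person = (top_total / 2) / n_middle
print(f"green line: transfer of about ${per_person:,.0f} per wage earner")
```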

I expect a heavier weight has fallen on those at the upper end of the middle-class spectrum; but even so, $1500 per wage earner does seem doable. Of the two interpretations, the green line looks to be at least plausible, and we are forced to conclude that those who believe the ultra-wealthy are drawing a good portion of their salaries from everyone else have a point.

But isn't $1500 per year a small price to pay to create a really wealthy super-class?

Paper on causes of income inequality full of economic axiomatic gibberish here (pdf).

Sunday, April 21, 2013

Size distribution of global deposits redux

Same idea as last time, but working from an updated database of 439 gold deposits of one million ounces or more.

Size distribution of global gold deposits on log-log scale. Data from NRH.

The chart is a graph of the number of gold deposits larger than a certain size, with both axes plotted on a logarithmic scale. The point at the upper right tells us that there is one deposit (in this case, the Pebble Deposit) larger than 88.1 M oz (which is the size of the number 2 deposit, Grasberg). The point at the far left tells us there are 438 deposits larger than 1 M oz.

In an ideal scale-invariant system, the points would line up on a straight line. We don't expect to see this completely here for two reasons: 1) not all gold deposits have been discovered; and 2) we don't really have a consistent definition for a deposit. Some deposits are so large that in settings where concessions would be smaller, they might count as several separate deposits. One might argue that all of the gold deposited in a geological province during one ore-forming event should count as a single deposit. If so, then the number of deposits on our chart above would be much smaller, and their average size would potentially be larger.
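For anyone who wants to reproduce this style of chart from a list of deposit sizes, a minimal sketch follows. (The post's convention counts deposits larger than the size of the next-ranked deposit; the sketch uses the closely related count of deposits at least as large as each size, which gives essentially the same picture. The data file is a placeholder.)

```python
# Cumulative size-distribution plot: number of deposits at least a given size, log-log.
import numpy as np
import matplotlib.pyplot as plt

sizes = np.loadtxt("deposit_sizes_moz.txt")   # placeholder: one size (M oz) per line

sizes = np.sort(sizes)[::-1]                  # largest first
count = np.arange(1, len(sizes) + 1)          # rank = number of deposits >= this size

plt.loglog(sizes, count, "s")
plt.xlabel("Deposit size (M oz)")
plt.ylabel("Number of deposits at least this large")
plt.show()
```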

There is a noticeable break in the slope at about the 2,000,000 ounce size--the number of smaller deposits falls off more rapidly than would be expected given the number of deposits greater than 2 M oz. This may be a reflection of a bias towards larger deposits. In financing discussions over the past few years it has been made clear to me that there is not a lot of interest in deposits smaller than 2 M oz.


We are grateful to Natural Resource Holdings for taking the time and effort to put this list together.
The line in the above plot would suggest that there are still at least 500 deposits of 1 million oz or more to be discovered--in other words, that we have so far discovered only about half of the total gold deposits > 1 M oz in size. I would contend that we have a lot more than that to discover.


As above, with last year's reported distribution in blue.

In the last year, NRH added 200 deposits to their list. These were not all newly discovered or newly bumped up to this level--clearly, the majority of these additions had simply been missed the first time. It is reasonable to assume that there are still some deposits in this category that were missed in the 2012 report, so that the 2013 report will have more deposits, but probably not 200 more. What I don't think will change significantly is the above estimate--that our present discoveries amount to less than half of the gold that remains to be discovered.

Tuesday, February 21, 2012

Scale invariant behaviour in avalanches, forest fires, and default cascades: lessons for public policy

We show that certain extended dissipative dynamical systems naturally evolve into a critical state, with no characteristic time or length scales. The temporal "fingerprint" of the self-organized critical state is the presence of flicker noise or 1/f noise; its spatial signature is the emergence of scale-invariant (fractal) structure.  - Bak et al., 1988 (one of the greatest abstracts ever written!)

1987 saw the publication of an extraordinary paper--one which led to a dramatic change in our understanding of the dynamics of certain kinds of systems. Most importantly, it introduced the concept of self-organized criticality, or self-organization to the critical state--a condition neither fully stable nor fully unstable, with a characteristic size distribution of events (or failures). Among the kinds of systems that interest geologists, earthquakes and avalanches were quickly recognized as SOC systems, and SOC was recognized as the most efficient means of transmitting energy through a system.

Avalanches and SOC

An early computational experiment went like this: imagine a pile of sand onto which single grains of sand are dropped, one by one, until an avalanche occurs. An avalanche occurs when the slope at some local point exceeds a defined value.

If your sandpile is two-dimensional (length and height--imagine a cross-section of a real sandpile), you would have to visualize it as a string of numbers, where each value represented the number of grains of sand stacked at that point. In the figure below, we are only looking at half of the pile, from the midpoint to the edge.


In our simple sandpile consisting of four stacks, a grain of sand of thickness dx falls onto the middle stack. If the difference in heights between this stack and its neighbour (x1 in the figure above) exceeds some threshold value n, then one grain of sand drops from the higher stack onto the lower stack. You would then check whether the height of the next stack was now more than n higher than its neighbouring stack. If so, another grain of sand would drop down one more stack, and so on to the end of the pile.

What happens in a two-dimensional sandpile is that eventually the height of the sandpile is such that each stack is exactly n higher than its neighbouring stack. As a new grain of sand is dropped onto the pile, it migrates along all of the stacks and drops off the edge of the pile.

The behaviour of the two-dimensional sandpile is very simple; but what happens when you move to a three-dimensional model (I'm counting the height of the pile as a dimension--not all authors describing this problem do so)? You might expect similar behaviour--that the slope of the pile would increase until a single grain of sand caused a rippling cascade through the entire pile. This doesn't happen, for it would imply that the natural behaviour of the system is to evolve towards a point of maximum instability. In the experiment, the behaviour of the sandpile was much more interesting. The pile built up until it reached a form of stability characterized by frequent avalanches of no characteristic size.
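For the curious, a minimal sketch of this kind of sandpile automaton follows, here on a small square grid of stacks, with a stack toppling whenever it holds four or more grains (the grid size, threshold, and number of grains dropped are arbitrary). It is a sketch of the general idea rather than a reproduction of the original experiment.

```python
# Minimal sandpile automaton in the spirit of Bak et al. (1987); parameters arbitrary.
import numpy as np

rng = np.random.default_rng(1)
N, threshold = 20, 4
pile = np.zeros((N, N), dtype=int)
avalanche_sizes = []

for _ in range(20_000):
    i, j = rng.integers(0, N, size=2)          # drop one grain at a random stack
    pile[i, j] += 1
    topples = 0
    while True:
        unstable = np.argwhere(pile >= threshold)
        if len(unstable) == 0:
            break
        for x, y in unstable:                  # topple: shed four grains onto neighbours
            pile[x, y] -= 4
            topples += 1
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                if 0 <= x + dx < N and 0 <= y + dy < N:
                    pile[x + dx, y + dy] += 1  # grains at the edge fall off the pile
    if topples:
        avalanche_sizes.append(topples)

# Once the pile reaches the critical state, a log-log histogram of
# avalanche_sizes shows avalanches with no characteristic size.
print("largest avalanche:", max(avalanche_sizes))
```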



Bak et al. (1987) called this condition of minimal stability the "critical state", and pointed out that since it developed independently of modelling assumptions and external parameters, it arose by self-organization--the term "self-organized criticality" (SOC) was introduced to describe the process. The signatures of systems displaying SOC are fractal geometry and flicker noise (also called 1/f noise).

There are many systems in nature--and increasingly in the human environment--which are similar to the avalanche model described above. Real avalanches, and similar mass sliding events (debris flows in the deep sea, for instance) have been recognized as SOC processes; along with earthquakes, volcanic eruptions, and economic events.

Forest fires were quickly recognized to be characterized by SOC--at least in environments without a lot of active management. Curiously, it quickly turned out that the effects of fire management, at least as practiced in the United States, might have had an effect opposite to that which was desired.

Fire suppression in the United States

“Strange to say, that, obvious as the evils of fire are, and beyond all question to any one acquainted with even the elements of vegetable physiology, persons have not been found wanting in India, and some even with a show of scientific argument(!), who have written in favor of fires. It is needless to remark that such papers are mostly founded on the fact that forests do exist in spite of the fires, and make up the rest by erroneous statements in regard to facts.”   B.H. Baden-Powell


As European settlers spread through what became the United States, they were confronted by an unusual world. Wilderness was something that had to be eliminated so that "civilization" could spread. Forests were to be cut and the land put to the plow. This was more than an economic imperative--it was a moral imperative as well. 


The rapid westward expansion in the 19th century brought railroads, and railroads brought further development and fire. While clearance of the forest was necessary for development, the desire to create a forestry industry based on sustainable harvesting rather than a short-sighted liquidation of old forests was driven by European examples. And thus the American ideas of forestry were transformed by the turn of the 20th century. Forests were resources that had to be tended. And as resources, any fires within them resulted in economic losses.

Fire had been used as a method of maintaining the forest by the native populations--but such a method was far too messy and unpredictable for a modern people--particularly those who looked to the forestry programs of western Europe, where fires were uncommon. The European model worked tolerably well in the eastern forests in North America, where water was plentiful year-round; but this model turned out to be unsuitable for the western forests, the life cycles of which required fire as a controlling element.

Major Powell launched into a long dissertation to show that the claim of the favorable influence of forest cover on water flow or climate was untenable, that the best thing to do for the Rocky Mountains was to burn them down, and he related with great gusto how he himself had started a fire that swept over a thousand square miles. - Bernard Fernow

The forests of the southwestern United States were subjected to a lengthy dry season, quite unlike the forests of the northeast. The northeastern forests were humid enough that decomposition of dead material would replenish the soils; but in the southwest, the climate was too dry in the summer and too cool in the winter for decomposition to be effective. Fire was needed to ensure healthy forests. Apart from replenishing the soils, fire was needed to reduce flammable litter, and the heat or smoke was required to germinate seeds.

In the late 19th century, light burning--setting small surface fires episodically to clear underbrush and keep the forests open--was a common practice in the western United States. So long as the fires remained small they tended to burn out undergrowth while leaving the older growth of the forests unscathed. The settlers who followed this practice recognized its native heritage; just as its opponents called it "Paiute forestry" as an expression of scorn (Pyne, 1982).

Supporters of burning did so for both philosophical and practical reasons--burning being the "Indian way" as well as expanding pasture and reducing fuels for forest fires. The detractors argued that small fires destroyed young trees, depleted soils, made the forest more susceptible to insects and disease, and were economically damaging. But the critical argument put forth by the opponents of burning was that it was inimical to the Progressive Spirit of Conservation. As a modern people, Americans should use the superior, scientific approaches of forest management that were now available to them, and which had not been available to the natives. Worse than being wrong, accepting native forest management methods would be primitive.

Bernhard Fernow, a Prussian-trained forester, thought fires were the ‘bane of American forests’ and dismissed their causes as a case of ‘bad habits and loose morals’. - Pyne (1995).


Through the early 20th century, the idea that fire was bad under all circumstances, and that fire control must be based on suppression of all fires, became the dominant conservation ideology. After WWII the idea grew stronger still, partially because of the availability of military equipment, but also due to the Cold War mentality. Just like Communism, the spread of fire simply couldn't be tolerated--and it was the duty of America to contain both "red" menaces (Pyne, 1982).


In the latter part of the 20th century, the ideas behind fire suppression once again began to change. The emphasis on "modern" methodologies began to fade, with a preference appearing for restoration of the "old forest" from pre-settler times. Research into the forest had begun to reveal the importance of fire in the natural setting, and that humans had used fire to manage the forest throughout history. Costs of fire suppression had risen dramatically, and the damage done to the forest by the equipment and the methods of fire suppression often exceeded that done by the fires.


Gradually the idea of fire suppression faded, to be replaced by a determination to allow fire to return to its natural role. Major fires in Yellowstone Park in 1988 brought about something of a reversal in policy, but it was recognized that a century of fire-suppression efforts had left the western forests in a dangerous state. Even though fire was to return to its natural cycle, the huge growth of underbrush had created a substantial risk of massive, out-of-control fires. This risk is an indicator of just how unhealthy fire suppression has made American forests.


By comparison, forests in Mexico, where there have been no fire-suppression efforts, are far healthier. Fires are more common, but tend to be smaller, due to lack of fuel.

Fire, water, and government know nothing of mercy. - Latin proverb


Default cascades as avalanches


Economic fluctuations have long been recognized as SOC phenomena. One type of fluctuation that has recently been posited is the "cascading cross-default", in which the failure of one entity to repay its debts drives one (or more) of its creditors into bankruptcy, which in turn drives one or more of their creditors into bankruptcy, and so on.


Clearly these default cascades can be of nearly any size. A default may only affect the defaulting institution--or it may take down all institutions in a global collapse. As a conceptual model, the sandpile automaton of Bak et al. (1987) is a pretty good representation--the key difference being that each individual stack in the economic sandpile is actually connected to a large number of other stacks, some of which are (geographically) quite distant. For instance, the failure of Deutsche Bank would likely put stress on Citigroup. Would it cause it to fail? Perhaps. We would model this by assigning a probability of failure for Citigroup in the event of a default by DB. And we would have to do this for all relationships between the different banks.


But we need conditional probabilities--because it may be that DB's failure alone wouldn't topple Citigroup. But suppose it topples ING, and Credit Suisse, and Joe's Bank in Tacoma, and Fred's Bank in Springfield, and Tim's Bank in Akron, . . . and many others, all of whom owe money to Citigroup. Then it might fall. So apart from having tremendous interconnectivity, with each bank connected to many others, there is also tremendous density of those connections, all of which would appear to make the pile very unsteady. 


Instead of dropping grains of sand one at a time on the same spot, multitudes of debt bombs are dropped randomly on the pile of financial institutions, provoking episodic failures. What might we expect of their size distribution?

The experiment as I've described it is too difficult to set up on my computer, mainly because I don't know how to establish the probabilities of failure for all of the various default chains that may exist. Furthermore, the political will to prevent financial contagion, although finite, is unmeasurable. Luckily we don't have to run the model, as it is playing out in real life.
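That said, a toy version with made-up probabilities is easy to write down, and is enough to show the flavour of the size distribution one gets. The sketch below drops failures onto a random network of banks and counts how far each cascade spreads; none of the numbers mean anything.

```python
# Toy default cascade on a random interbank network; all parameters are invented.
import numpy as np

rng = np.random.default_rng(2)
n_banks = 200
p_link = 0.05        # chance that any given bank is a creditor of any other bank
p_contagion = 0.25   # chance that a failed debtor also topples a given creditor

# exposure[i, j] = True means bank j is a creditor of bank i.
exposure = rng.random((n_banks, n_banks)) < p_link

def cascade(first_failure):
    failed = {first_failure}
    frontier = [first_failure]
    while frontier:
        bank = frontier.pop()
        for creditor in np.flatnonzero(exposure[bank]):
            if creditor not in failed and rng.random() < p_contagion:
                failed.add(creditor)
                frontier.append(creditor)
    return len(failed)

sizes = [cascade(rng.integers(n_banks)) for _ in range(2_000)]
# A log-log histogram of `sizes` gives the size distribution of the cascades.
print("largest cascade:", max(sizes), "of", n_banks, "banks")
```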

Paper now primed to burn


We have lived through a long period of financial management, in which failing financial institutions have been propped up by emergency intervention (applied somewhat selectively). Defaults have not been permitted. The result has been a tremendous build-up of paper ripe for burning. Had the fires of default been allowed to burn freely in the past we may well have healthier financial institutions. Instead we find our banks loaded up with all kinds of flammable paper products; their basements stuffed with barrels of black powder. Trails of black powder run from bank to bank, and it's raining matches.

References

Bak, P., Tang, C., and Wiesenfeld, K., 1987. Self-organized criticality: An explanation of 1/f noise. Physical Review Letters, 59: 381-384.


Pyne, S. J., 1982. Fire in America: A cultural history of wildland and rural fire (cycle of fire). 

Wednesday, January 11, 2012

Scale invariance and the scaling laws of Zipf and Benford

Scaling laws have been empirically observed in the size distributions of parameters of complex systems, including (but not limited to): 1) incomes; 2) personal wealth; 3) cities (both population and area); 4) earthquakes, both locally and globally; 5) avalanches; 6) forest fires; 7) mineral deposits; and 8) market returns. Several years ago one of my students showed that various measures of the magnitude of terrorist attacks also follow scaling laws.

The general prevalence of scale invariance in geological phenomena is the reason for one of the first rules taught to all geology students--every picture must have a scale. The reason for this is that there is no characteristic scale for many geological phenomena--so one cannot tell without some sort of visual cue whether that photo of folded rocks is a satellite photo or one taken through a microscope--whereas one can make such a distinction about a picture of, say, a moose.

Numerous empirical laws (by which I mean equations) have been developed to describe the size-distribution of scale invariant phenomena. Most of these empirical laws were developed before the idea of scale invariance was well understood. One famous example is the Gutenberg-Richter law describing the size distribution of earthquakes.

Another statistical law, Zipf's Law, describes the relationship between size and rank. For cities, for instance, the largest city in a country will tend to have twice the population of the second-largest city and three times the population of the third. More formally, the relationship is stated as follows:


y = C / r^(1/k)

for a distribution where C is the magnitude of the largest individual in the population, y is the magnitude of an individual with rank r, and k is a constant which characterizes the system--but is commonly about 1.

If we plot rank vs size on a log-log plot, the graph should approximate a straight line with a slope of -1/k.

For instance, a plot of city size vs rank for US cities appears as follows:


Data sourced here.

From the same data source we find a similar relationship when city size is determined from area rather than population:


In the first plot we obtain a value for k very close to 1. The plot where cities are ranked by area is not as clear, but this may be due to the arbitrary nature of city limits. To characterize either of the above plots by Zipf's law is fairly straightforward--draw the straight line from the top-ranked city that best follows the line of observations.
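In code, the "anchor the line at the top-ranked city" recipe might look like the sketch below, which fixes the intercept at the largest city and fits only the slope (the data file is a placeholder).

```python
# Zipf's law characterization anchored at the top-ranked city; data file is a placeholder.
import numpy as np

populations = np.sort(np.loadtxt("city_populations.txt"))[::-1]  # largest first
rank = np.arange(1, len(populations) + 1)
C = populations[0]                                               # size of the largest city

# Under Zipf's law, y = C * r**(-1/k).  Fix the line at (rank 1, C) and fit the
# slope -1/k by least squares in log-log space.
x, y = np.log(rank), np.log(populations) - np.log(C)
slope = np.sum(x * y) / np.sum(x * x)
k = -1.0 / slope
print(f"fitted k = {k:.2f}  (the simple form of Zipf's law has k = 1)")
```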

A recent article published in Economic Geology argues that mines in Australia follow Zipf's Law. In summary, not only do the known deposits in Australian greenstone belts follow Zipf's law fairly closely, but early estimates of as-yet-undiscovered gold, projected from early Zipf's law characterizations, compared favourably with the amount of gold eventually discovered.

The weakness that I see with this approach is that it is all rather strongly dependent on the estimates of the size of the largest deposit. In any given area, it will be true that the largest known deposit will be well studied, but history has shown us that mines can be "mined out" only to be rejuvenated by a new geological or mining idea.

I am unable to reconcile the size-distribution data from the Nevada mineral properties presented recently with Zipf's Law, although they do seem to follow some sort of power law.


Using the straightforward approach to a Zipf's Law characterization gives us the red line, which appears to show that there are far too many gold deposits of > 1/2 million ounces for the size of the largest mine. To reconcile the known gold discoveries with Zipf's Law (green line), someone would need to find a 100-million-ounce deposit (if that doesn't get explorers interested in Nevada, I don't know what will)!

I, however, would prefer to use the interpretation of the above data developed in our last installment--that there is a power-law relationship between size and rank, but this relationship breaks down for the largest deposits because there is some sort of limit to the size of gold deposits (at least near the Earth's surface), although I do not know what the limiting factor(s) would be.

Another scaling law is Benford's Law, an empirical observation that the first digits of measurements of many kinds of phenomena are not random. In particular, the first digit is a '1' approximately 30% of the time, a '2' about 18% of the time, a '3' about 12% of the time, and so on, with the probability decreasing as the digit increases.

First       Probability of
digit        occurrence

1            0.30103
2            0.176091
3            0.124939
4            0.09691
5            0.0791812
6            0.0669468
7            0.0579919
8            0.0511525
9            0.0457575


So if you had a table of the lengths of every river in the world, for instance, you would find that approximately 30% of the first digits were '1'--rivers with lengths of 1,904 km, 161 km, or 11 km would all fall into this category.

Furthermore, it doesn't matter what units you use--if you had measured the river lengths in inches, you would observe the same relationship. The reason is that if you double a number which begins with '1', you end up with a number which begins with either '2' or '3'. Hence, the probability that the first digit is either '2' or '3' must be the same as the probability of it being '1'. In the table above, we see this is the case.

It isn't only natural phenomena that are characterized by Benford's Law. It has also been used as a tool to identify fraud in forensic accounting.
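Checking a data set against Benford's Law takes only a few lines; the sketch below tallies first digits and compares them with the expected log10(1 + 1/d) frequencies (the data file is a placeholder).

```python
# First-digit (Benford) check for a list of positive values; data file is a placeholder.
import numpy as np
from collections import Counter

values = np.loadtxt("deposit_sizes_oz.txt")   # placeholder: one positive value per line

def first_digit(x):
    x = abs(x)
    while x >= 10:
        x /= 10
    while x < 1:
        x *= 10
    return int(x)

observed = Counter(first_digit(v) for v in values)
for d in range(1, 10):
    expected = np.log10(1 + 1 / d)            # Benford's law
    print(f"{d}: observed {observed[d] / len(values):.3f}   expected {expected:.3f}")
```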

The deposit-size data from Nevada seem to conform to Benford's Law.


And if I convert the deposit size from ounces to metric tonnes . . .


So although Zipf's Law doesn't describe Nevada gold deposits well (at least at present), Benford's Law does.

Saturday, January 7, 2012

Gold, part 2: Is there a maximum size for gold deposits?

In our last installment, I presented a graph showing the size distribution of global gold deposits greater than one million ounces. In it I tried to estimate the slope of the relationship between the size of deposits and their ranking by size, in the hopes that the slope had some predictive power for the deposits that are yet to be found.


Two suggested scaling laws for the size-distribution of gold deposits (global).

Once again, the interpretation of these graphs is that the rank in size (less one) of any deposit is the abscissa, and its size is the ordinate. The reason for subtracting one from the rank number is that the largest deposit shown on the graph is actually the second-largest deposit in the dataset--there is one deposit larger.

In our last installment, we assumed that the blue line was the better representation of the scaling law for gold deposits. Today we explain why the yellow line may be the correct answer, and that it does not mean we can expect to find multi-billion ounce deposits of gold (at least nowhere near the Earth's surface).

- - - - - - - -

The Earth system consists of myriad local interacting subsystems. Intuitively, we might expect the overall effects of these to merge into a background of white noise; instead, we find that highly ordered structure arises on a variety of scales, ranging up to that of the globe.

A simple scaling law for the size distribution of gold is an example of red noise (or pink noise, depending on the slope). The observed power law is a characteristic of a system in a state of self-organized criticality (SOC), as is nicely outlined here. In essence, the scaling law we observe in the size distribution of gold deposits arises from self-organization in the geological processes which control the reservoir size of the crustal fluids that carried the gold, and possibly also in the fracturing process which preceded the emplacement of the gold in the rocks.

Today we look at the size-distribution of gold deposits in Nevada.


The above graph was plotted using the data from the Nevada Bureau of Mines and Geology review of its mineral industry for 2009. There were 191 (unambiguous) significant deposits of precious metals for which I have combined the most recent mineral resources (all categories) plus any pre-existing historical production. I only counted gold ounces--and freely acknowledge that some of the mines in the above chart were probably better described as copper or silver mines--and treated all categories (proven and probable reserves, measured and indicated resources, and inferred resources) equally. If you feel the methodology is flawed you are invited to use your own.

We can compare the current size-distribution of gold deposits to the size-distribution of gold deposits in the Carlin Trend in 1989 (Rendu and Guzman, 1991).


Remarkably, both sets of data appear to be described by a straight line of constant slope, at least for deposits between about 100,000 ounces and 10 million ounces in size.


During Nevada's "maturation" as a gold province, the scaling law describing the size-distribution of gold deposits remained constant over two orders of magnitude in size. The slope of these lines is about 1.5, placing the scaling law exponent between pink noise and red noise.

When we look at the figure at the top of the page, the blue line has a slope < 1, whereas the yellow line has a slope of about 1.5. For this reason, I propose the yellow line as the better representation of the scaling law for the global deposits. I first leaned towards the blue line because of the insufficiency of observations.

For comparison, if I only looked at deposits in Nevada greater than 1 million ounces, I would not be as confident describing the size-distribution with the yellow line.

SOC theory would seem to tell us the entire distribution should be characterized by a power law. Why not gold deposits?

In nature, there are limits. Infinity is not an option. Earthquakes are recognized as SOC processes, yet they have a maximum size, as the capacity of earth materials to store and transmit strain is finite. Similarly, we would expect there to be an upper limit to the size of crustal reservoirs of gold-bearing fluids. The result is that the largest gold deposit we find is much smaller than we would predict on the basis of our observed power law.

This does not explain why there also appears to be a deficit of small deposits; the reason for that is economic. Under the current reporting regime (NI 43-101), gold in the ground cannot be considered a "deposit" unless it is reasonable to expect it to be exploited profitably. The requirement for economic exploitability will exclude many small--well, since they are not deposits, let's call them "collections"--of gold. Additionally, many company geologists will ignore such collections as soon as it becomes clear they are unlikely to become a deposit.


So it's up to these guys! (sorry about the quality--this is a point-and-shoot photo scanned way back in the '90s). He's using a rubber cut-out from an inner tube as a pan. This site is a thrilling walk north of Asanta village, western Ghana, on land almost certainly on a concession held by Endeavour.

References:

Hronsky, J. M. A., 2011. Self-organized critical systems and ore formation: The key to spatial targeting? SEG Newsletter, 84, 3p.

Nevada Bureau of Mines, 2010. The Nevada Mineral Industry 2009. Special Publication MI-2009. http://www.nbmg.unr.edu/dox/mi/09.pdf, accessed today.

Rendu, J. M. and Guzman, J., 1991. Study of the size distribution of the Carlin Trend gold deposits. Mining Engineering, 43: 139-140.

Thursday, December 29, 2011

Gold

Perhaps you haven't noticed, but at The World Complex, we like gold. A lot. Not too long ago we ran an article about historical gold production, with some estimates of future production for gold. Today we will take a closer look.

It turns out that my main professional activity is exploring for gold, and yet that isn't why I spend so much time thinking about it.


Gold in a rubber pan, from an artisanal mining operation on a ridge in central Ghana.
The photo is about 4 cm across IIRC.

I began this blog to investigate application of mathematical methodology to geologic problems. Over the last year in particular, I have put increasing effort into using the same tools to look at economic problems. Yet ask geologists why they spend so much time looking for gold, and you will get many answers, none of them true.

In terms of exploration effort, gold is the most important mineral on the planet. Approximately 50% of all money spent on non-fuel mineral exploration during the last fifteen years was spent on gold exploration.


Sources here, here, and here (unfortunately the last two require a paid subscription).

I think people are mystified by the intense effort to find gold. Comments on gold at popular economic sites are polarized between those who find gold to be the most important commodity in the world and those who think it useless.

What does the USGS say about it?
Gold is by far the most explored mineral commodity target among those analyzed and the principal target for about 580 sites in 1995 and 1,800 sites in 2004. Gold’s popularity can be attributed to its demand in aesthetic and technological applications, its profitability (in terms of revenue minus costs), its widespread geological occurrence in relatively small deposits, some of which can deliver a high rate of economic return on investment, and its high price per unit weight.   (Wilburn, 2005)
To paraphrase--it's beautiful, it's profitable, and it's scattered in small deposits worldwide. Rather than delve into the economic importance of gold, for now let us have faith that the market allocates investment as it does for a logical reason, whatever that happens to be. Based on exploration effort, gold is the most important non-fuel mineral in the world.

Copper is the second most important mineral--and at about 20% of global exploration expenditure, a distant second at that. Silver doesn't even appear on the scale.

Now let us look at the size-distribution of some known gold deposits. I will use the same approach used here to look for evidence of self-organization in the size of gold deposits. This article suggests that gold deposits exhibit scale-free behaviour. Let's investigate.

NRH Research has published a list of 296 deposits consisting of one million or more ounces of gold here. Although I haven't checked all the numbers, the ones I did check show some minor differences--mainly due to additional work carried out on the project since the date of the report. The only substantial issue is that for some of the deposits checked it appears that NRH has summed reserves, measured and indicated resources, and inferred resources; but not always. Rather than go through and update all 296 deposits, I have decided to use the data as is. Caveat emptor.


Size distribution of global gold deposits. Data from NRH report. Lines from my imagination.

The Chinese article cited above suggests that gold deposits are scale-free, meaning that if we plot them as we did the wealth distribution last week, we should see a straight line. However, we won't observe this, because we have not yet discovered all the gold deposits there are. As a result, the graph will be concave downwards, reflecting possibly large numbers of as-yet-undiscovered gold deposits, particularly small ones.

Our graph is indeed concave downwards. The blue and yellow lines represent potential scenarios by which we may estimate how much more gold remains to be found. Both suggested lines cover approximately one order of magnitude; however, I believe the yellow line to be unrealistic, as it would suggest that there remain many very large deposits to be found--projecting it all the way to the right would imply that a deposit of more than a billion ounces is still out there!

The blue line would not permit much more in the way of large deposits, but would suggest that a great many smaller deposits are yet to be found. For instance, in the NRH report, there are 100 deposits larger than about 4 million ounces. From the blue line we would infer about 300 deposits larger than 4 million ounces. Two hundred more to go!
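As a rough illustration of the approach, here is a minimal sketch of the rank-size construction and the straight-line extrapolation described above. The deposit sizes in the list are made up for illustration--they are not the NRH data--so the fitted counts only mean something when run against the real list.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative deposit sizes in millions of ounces -- NOT the NRH data.
deposit_moz = np.array([1.2, 1.5, 2.0, 3.3, 4.1, 6.0, 9.5, 15.0, 32.0, 70.0])

# Sort descending: the rank of a deposit equals the number of deposits at least that large.
sizes = np.sort(deposit_moz)[::-1]
ranks = np.arange(1, len(sizes) + 1)

# A scale-free size distribution plots as a straight line on log-log axes.
plt.loglog(sizes, ranks, 'o')
plt.xlabel('Deposit size (Moz Au)')
plt.ylabel('Number of deposits at least this large')

# Fit a straight line in log-log space; extrapolating it gives a crude estimate
# of how many deposits "should" exist above any given size.
slope, intercept = np.polyfit(np.log10(sizes), np.log10(ranks), 1)
predicted_above_4moz = 10 ** (intercept + slope * np.log10(4.0))
print(f"slope = {slope:.2f}; predicted deposits of 4 Moz or more: {predicted_above_4moz:.0f}")
plt.show()
```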

It may seem fantastic, but one thing to remember is that virtually every gold deposit ever discovered outcrops at surface. We are only just learning how to find deposits that don't outcrop. There's a lot of underground that hasn't been explored yet, so it's fair to say that planet Earth is still an immature exploration district.

Sunday, December 18, 2011

Self-organization and wealth distribution

The question of wealth inequality has been making headlines, in everything from the Occupy Wall Street movement decrying the wealth of the 1% to the discussion in the Republican Presidential-Candidate Popularity Contest currently ongoing in the US.

There have always been voices clamouring for equal wealth for everyone, but the real world doesn't work like that. Wealth inequality doesn't seem particularly unfair given the inequalities in natural abilities and in access to capital or resources. Intuitively, it seems that the distribution of wealth in society should follow a power law, in which the frequency of observations falls off as a power of their size (a 1/f-type distribution), as described in this article.

Recent modeling studies suggest a 1/f distribution over most of the population, but wealth distribution becomes exponential near the tails. The model distribution is described as Pareto-like, with a relatively few super-wealthy floating over an ever-changing middle class.

So wealth inequality should be expected in any society, no matter how even the playing field. The skills necessary to navigate through the economy are not evenly distributed. Some individuals play better than others. Therefore, some individuals will be wealthier than others. Let's take a look through some public data and see if we can recognize a power-law distribution.

According to Wolff (2010), the breakdown of wealth among different quintiles (and finer groups) is:

Fraction of population        Fraction of wealth

Lowest 40%                     0.2%
40 - 60%                       4.0%
60 - 80%                      10.9%
80 - 90%                      12.0%
90 - 95%                      11.2%
95 - 99%                      27.1%
99 - 100%                     34.6%

Given that the wealth of Americans in 2007 was reported by the Fed to be $79.482 trillion, and the population of the US at that time was 299,398,400 (roughly), we can plot a logarithmic graph of individual wealth vs population to check for self-organization in wealth distribution.

In order to do this, I have assumed that the individual in the middle of each group has the average wealth of that group. Based on past experience, this estimate will tend to be biased; however, given the number of orders of magnitude spanned by the resulting graph, the errors are too small to be noticeable.
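For readers who want to reproduce the construction, here is a minimal sketch of how such a graph can be built, assuming the bracket-midpoint/average-wealth estimate described above. The axis orientation and other plotting details are my guesses at how the figure was assembled, not a record of it.

```python
import numpy as np
import matplotlib.pyplot as plt

total_wealth = 79.482e12       # USD, figure quoted above
population = 299_398_400       # US population used above

# (fraction of population, fraction of total wealth), poorest bracket first
brackets = [(0.40, 0.002), (0.20, 0.040), (0.20, 0.109), (0.10, 0.120),
            (0.05, 0.112), (0.04, 0.271), (0.01, 0.346)]

cum_pop = 0.0
x_wealth, y_people = [], []
for pop_frac, wealth_frac in brackets:
    # Average wealth per person in the bracket, assigned to the bracket's middle individual.
    avg_wealth = wealth_frac * total_wealth / (pop_frac * population)
    midpoint_person = (cum_pop + pop_frac / 2.0) * population
    x_wealth.append(avg_wealth)
    y_people.append(midpoint_person)
    cum_pop += pop_frac

# On log-log axes, points falling on a straight line would indicate a power-law
# (self-organized) wealth distribution; curvature or outliers indicate otherwise.
plt.loglog(x_wealth, y_people, 'o-')
plt.xlabel('Individual wealth (USD)')
plt.ylabel('Number of people with less wealth')
plt.show()
```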


To interpret this graph, consider the first two points--they suggest that roughly 80 million people have less than about $2,500, and about 130 million people have less than about $75,000. Most of the data appear to lie along a line of fit, but there are a few exceptionally rich individuals, including some on the Forbes 400 list, who plot far above the line. 

Also note that "the 99%" includes people that have about $8 million in assets.

The observed distribution agrees somewhat with the models described above--a few super-wealthy lording it over the rest. However, there is a significant difference between our observed slope and the slope of the models: the models suggest a slope for the straight line of about 2, whereas on our graph the slope of the straight line is over 4 (meaning four orders of magnitude in wealth over one order of magnitude of population).

On our graph, roughly 290,000,000 people have less than $1 million, and 29,000,000 have less than $100--one order of magnitude down in population corresponds to four orders of magnitude down in wealth. Seems a tad steep. With a slope of 2, those 29,000,000 would have less than $10,000.

If the wealth of the entire population were described by a 1/f distribution, then the richest American would have a wealth of only about $1.5 million. We here at the World Complex think it would be difficult to manage that summer home in the Hamptons with such a paltry sum.

The Ebert and Paul (2009) paper linked to above attempts to explain the semi-permanent nature of the super-rich. The super-rich have benefited from leverage in the system, and remain at the top due to the ongoing access to greater leverage than is possible for the average citizen. 

A poor geologist like me can only wonder--what happens when leverage becomes wealth-destroying rather than wealth-enhancing? Unfortunately, the answer we are seeing is that the super-rich get bailed out of their losing positions by everyone else.

And here we come to the question of fairness in the system. A fair system with an even playing field will always result in inequalities--but even extreme inequalities will be tolerated to the extent that the system is perceived as being fair. In the past, when the system was fair(er), people tended to respect that someone had earned money and was able to enjoy the fruits of success. Under the present system, there is widespread and growing skepticism that unusually wealthy individuals have obtained their fortunes not through the production of wealth but by gaming the system, or even by taking wealth from those lower down the socio-economic ladder.


Lastly we see the same plot as above, but with the estimated and "ideal" wealth distributions as determined from a series of nationwide interviews with over 5500 respondents reported in Norton and Ariely (2010).

Clearly most Americans thought the system was more equitable than was actually the case, and interestingly, they seemed to wish the system were more equitable still. I would like to point out that the "ideal" distribution is actually mathematically impossible (the third and fourth quintiles had equal wealth), which seems fitting.

In an ideal world, according to the survey, only 10 million Americans would have less than $100,000 in assets, and no one would have as much as a million.

Unfortunately the survey neglected to ask respondents what they felt the wealth of Messrs Gates and Snyder (no. 1 and no. 400 on the Forbes 400 list) should be in an ideal world, which might have been very interesting.

Sunday, May 15, 2011

(Scaling) laws of life and death

Another blast from the past*

One of the enduring problems in historical geology is the relationship between speciation and extinction. Geological history is punctuated by episodes of mass extinction, when in response to tectonic, climatic, or even astronomic events, large numbers of species become extinct in a short period of time. What happens in the aftermath?

One idea is that with the Earth denuded of many lifeforms, there are a large number of ecological niches "up for grabs" by the first applicant. Natural selection may select for those organisms which have undergone morphological and/or behavioural changes that more efficiently exploit the opportunities of the vacant niche. The logical consequence is that speciation should increase rapidly after a mass extinction.

One problem with this idea is that natural selection is a mechanism for culling away the unfit--it is not a mechanism by which innovation occurs. Innovation has been linked to random mutation, although occasionally papers appear claiming that the mutations are not random. The Lamarckian idea of directed mutation in nature has been discredited.

We might say this is an existential problem, for just as one may contemplate one's own mortality, one may contemplate the mortality of the human race. Are there predictable laws of extinction, and are we governed by them?

In pondering this issue we are not helped by a belief in our own exceptionalism. So for the duration of this essay, let us consider merely the idea of extinction of any number of species, and only afterwards ponder its relevance for ourselves.

Extinction (for our purposes) represents termination of a species.

What can we say about the processes of speciation and extinction? Are they governed by the same dynamics?

Spectral power graphs for extinctions and originations of marine 
families over the last 500 million years. From Kirchner (2002).

The Fourier power spectra for rates of extinction and origination of fossil marine families provide insight into the dynamics of extinction and evolution. On the graph above, the extinction power spectrum is relatively constant, with undulations. The highest-frequency undulation (the last wiggle on the left) is Raup and Sepkoski's (1984) "death star" peak--the extinction peak at 28-million-year intervals attributed to the periodic approach of a neutron star to our solar system.

The graph for originations shows a steady decline with increasing frequency, so that at higher frequencies (shorter timescales) the rate of origination falls below the rate of extinction. Kirchner (2002) used this behaviour to infer that the rate of originations, especially at higher frequencies (periodicities less than 25 million years), was lower than the rate of extinctions, implying that there is a limit to the rate at which innovations can appear in the fossil record.

The observation is correct, but I believe the explanation for it is not. The significance of the decline in the rate of origination with frequency (note the nearly straight line of best fit on a log scale) is that originations have a scale-invariant character, whereas extinctions (horizontal line of best fit) are random.

Over geologic time, rates of originations have to be at least as high as rates of extinction, or all life would be extinguished. We see that on long time intervals, rates of originations are higher than rates of extinction.

Scale invariance or random chance?

Scale invariance is a common characteristic of geological systems (Turcotte, 1997), and has been observed in such diverse phenomena as earthquakes, volcanic eruptions, and climate change. Such behaviour can be demonstrated from the power spectrum of a geological record: when plotted on log-log axes, the best fit to the power spectrum is a straight line of constant slope. The slope of this line is called the scaling exponent, and it can be related to the fractal dimension which characterizes the size-time distribution of events.

Discrete scale invariance is a weaker form of invariance, where the scaling is not apparent at all frequencies but only over a certain range of frequencies (Sornette, 1998). Discrete scale invariance can only be described by a complex fractal dimension, the imaginary part of which is a simple function of the discrete scaling exponent (Saleur et al., 1996). Such behaviour has been shown to exist in a wide variety of settings, including diffusion, fracture propagation, fault rupture (in time), hydrodynamic cascades, turbulence, the Titius-Bode law, and gravitational collapse of black holes (Sornette, 1998).

Thus, spectral power (P) varies as a function of frequency (f) such that for an arbitrary rescaling f → λf there exists a number μ with P(λf) = μP(f). This is a homogeneous function of the kind encountered in the theory of critical phenomena (Bak et al., 1988), and it is solved by a power law P(f) = Af^(−α): substituting the power law into the scaling relation gives μ = λ^(−α), hence α = −log(μ)/log(λ). Power laws are the "fingerprint" of scale invariance, as the ratio P(f):P(λf) is independent of f. Thus the relative spectral density depends only on the ratio of the two frequencies, and this property is the fundamental one which associates power laws with scale invariance, self-similarity, and self-organized criticality (Bak et al., 1988).

Fractal analyses of the scaling behaviour can be used to provide more information about the dynamics of speciation and diversification. The scaling exponent is the negative of the slope of the best-fit line through the log-log power spectrum, and there is a correlation between the size-time distribution of events and the scaling exponent: if the slope is ~0, then the distribution of events through time resembles white (i.e., random) noise.
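As a concrete illustration (not Kirchner's actual data or method), here is a minimal sketch of how a scaling exponent might be estimated from a time series: compute the power spectrum, fit a straight line to it on log-log axes, and take the negative of the slope. The synthetic random-walk series is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
series = np.cumsum(rng.standard_normal(4096))  # a random walk has P(f) ~ f^-2

# Periodogram: squared magnitude of the Fourier transform at the positive frequencies.
freqs = np.fft.rfftfreq(len(series), d=1.0)[1:]   # drop the zero-frequency (DC) term
power = np.abs(np.fft.rfft(series))[1:] ** 2

# The slope of the log-log best-fit line is -alpha: alpha ~ 0 means white noise,
# while larger alpha means power is increasingly concentrated at low frequencies.
slope, intercept = np.polyfit(np.log10(freqs), np.log10(power), 1)
print(f"estimated scaling exponent alpha = {-slope:.2f}")   # expect roughly 2 for a random walk
```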

Importantly, there is no trend observed in the power spectrum for extinctions. The implication to be drawn from this observation is that extinctions are randomly distributed, both in size and in time. They are not governed by processes at all like those that control diversification. The apparently random nature of extinctions is something of a mystery, as it would suggest that extinctions are not related to originations (new critters outcompeting the old). Nor does it fit with extinctions being related to large tectonic, climatic, or external events (bolides), all of which are believed to be systems at a state of self-organized criticality (Bak et al., 1988), and show a measured increase in power with decreasing frequency. Randomness in extinctions may imply that they are dominated by gambler's ruin.

By contrast, the power spectrum for the originations undulates about a line with a constant negative slope on the log-log graph. The slope of the eyeballed line of best fit (I don't have access to the real data) is about -0.9 (i.e., α = 0.9). This is consistent with a system at a state of self-organized criticality.

Natural systems displaying self-organized criticality (SOC) are known throughout the geological realm. Tectonic and volcanic activity shows such a distribution, as does the distribution of large climatic disturbances. From the data analysis of Kirchner (2002), it is unclear whether the fingerprint of SOC arises from external influences or is an internal character of the evolutionary process.

From a geological perspective, it is natural to assume that SOC is imprinted on evolution by environmental processes. But scaling laws are observed at the level of proteins (Unger et al., 2003) as well as at the gene and species levels (Herrada et al., 2011), suggesting that SOC is inherent in life itself.

Bak, P., Tang, C., and Wiesenfeld, K., 1988. Self-organized criticality. Physical Review A, 38: 364-374.

Bonnet, E., Bour, O., Odling, N. E., Davy, P., Main, I., Cowie, P., and Berkowitz, B., 2001. Scaling of fracture systems in geological media. Reviews of Geophysics, 39: 347-383.

Erwin, D. H., 1998. The end and the beginning: recoveries from mass extinctions. Trends in Ecology and Evolution, 13: 344-349.

Herrada, E. A., et al., 2011. Scaling laws of protein family phylogenies. http://arxiv.org/PS_cache/arxiv/pdf/1102/1102.4540v2.pdf

Kirchner, J. W., 2002. Evolutionary speed limits inferred from the fossil record. Nature, 415: 65-68.

Kirchner, J. W. and Weil, A., 1997. No fractals in fossil extinction statistics. Nature, 395: 337-338.

Raup, D. M., and Sepkoski, J. J. Jr., 1984. Periodicity of extinctions in the geological past. Proceedings of the National Academy of Sciences, 81 (3): 801-805.

Saleur, H., Sammis, C. G., and Sornette, D., 1996. Discrete scale invariance, complex fractal dimensions, and log-periodic fluctuations in seismicity. Journal of Geophysical Research, B101: 17,661-17,677.

Sepkoski, J. J. Jr., 1993. Ten years in the library: new data confirm paleontological patterns. Paleobiology, 19: 43-51.

Sornette, D., 1998. Discrete-scale invariance and complex dimensions. Physics Reports, 297: 239-270.

Unger, R., Uliel, S., and Havlin, S., 2003. Scaling law in sizes of protein sequence families: from super-families to orphan genes. Proteins, 51 (4): 567-576. doi:10.1002/prot.10347.



* I wrote this article nearly ten years ago, which is why the references are so dated. Updated recently, just for you.