Dust flux, Vostok ice core

Two-dimensional phase space reconstruction of dust flux from the Vostok core over the period 186-4 ka, using the time-derivative method. Dust flux is on the x-axis; its rate of change is on the y-axis. From Gipp (2001).

Friday, April 18, 2014

Are Kitco gold experts contrary indicators?

In response to recent debate, we ask the titular question (data originally from this IKN post).

I don't think so, although there is a very slight trend to that effect.

To test the idea, we plot the Kitco bearish and bullish opinion percentages against the performance of GLD the following week.

First up--the bulls.


I've plotted it the same way as the last time, although perhaps an argument can be made that I should switch the axes. There is a slight negative correlation, which would favour Otto's assertion that the experts are contrary indicators. But with r = -0.113, it is a weak correlation. By comparison, the correlation between bullish opinions and the previous week's performance of GLD was above 0.7.

And now the bears . . .


This time there is a weak positive correlation (r = 0.17), which favours the contrary-indicator idea (high numbers of bears are followed by a higher gold price)--at least since the beginning of September 2013 (at which point gold was about $100 higher than at present).

Then I tried increasing the lag--maybe those Kitco experts are contrary indicators for GLD's behaviour farther in the future.



If we compare the performance of GLD two weeks after the experts' prognostications (i.e., GLD's performance during the second week, not over the entire two-week period), we get a reasonable correlation (r^2 = 0.21)--but this seems to suggest that you could bet along with the experts as long as you delay a week. So the current bullish posture favours a gold decline not this coming week, but the week after.

Given that r^2 was about 0.5 for the lagging indicator, that is still the favoured hypothesis. So given Kitco's bearish stance, your best bet is still to go back in time a week and short gold.

It does occur to me that a longer series--one covering a stretch in which gold performed well for a sustained period--might give a different result.
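For anyone who wants to rerun these numbers, here is a minimal sketch of the lag experiment in Python. The file and column names (kitco_gld.csv, pct_bullish, pct_bearish, gld_weekly_return) are hypothetical stand-ins for the weekly survey table posted at IKN.

```python
# A minimal sketch of the lag experiment above. The file and column names
# are hypothetical stand-ins for the IKN table of weekly surveys.
import pandas as pd

df = pd.read_csv("kitco_gld.csv")  # one row per weekly survey

for lead in (1, 2):  # GLD's return one and two weeks after the survey
    future = df["gld_weekly_return"].shift(-lead)
    r_bull = df["pct_bullish"].corr(future)  # Pearson r by default
    r_bear = df["pct_bearish"].corr(future)
    print(f"lead {lead} wk: r(bulls) = {r_bull:+.3f}, r(bears) = {r_bear:+.3f}")

# The lagging-indicator hypothesis: consensus vs the previous week's return.
past = df["gld_weekly_return"].shift(1)
print(f"r(bulls, previous week) = {df['pct_bullish'].corr(past):+.3f}")
```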

Tuesday, April 15, 2014

You may have what it takes to be a Kitco gold expert!

I know you've always dreamed of it. But you are probably asking yourself . . . how could I ever develop the wisdom and insight to be a Kitco gold expert? It must take years . . . no, decades . . . of intense study to acquire the necessary mental acuity.

Anyway, Otto at the IKN blog claims that the Kitco gold experts are as useful as monkeys with darts. I say they are less useful. Let's investigate, using the table of data helpfully posted at the above link.


The shotgun approach to gold forecasting

The red dots are a scatterplot of % of bearish experts vs the weekly performance of GLD; the blue dots are % bullish experts vs weekly performance. It looks like monkeys with a dartboard.



But when we compare the expert consensus against the previous week's behaviour of GLD, we see a different pattern. On the week following a falling price in GLD, most of Kitco's experts are bearish. When GLD rises, most of Kitco's experts become bullish the following week.

Kitco's experts are a lagging indicator.

If you want to make use of them in your investment planning, you should first invent a time machine . . .
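Short of the time machine, you can redraw both plots yourself. Below is a minimal sketch, using the same hypothetical file and column names as the sketch in the April 18 post further up the page.

```python
# A minimal sketch of the two plots above, under the same hypothetical
# file and column names (kitco_gld.csv, pct_bullish, pct_bearish,
# gld_weekly_return) as the earlier sketch.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("kitco_gld.csv")
prev = df["gld_weekly_return"].shift(1)  # GLD's return the week before the survey

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# The shotgun: expert percentages vs the same week's GLD performance.
ax1.scatter(df["gld_weekly_return"], df["pct_bearish"], c="red", label="% bearish")
ax1.scatter(df["gld_weekly_return"], df["pct_bullish"], c="blue", label="% bullish")
ax1.set_xlabel("GLD weekly return (%)")
ax1.set_ylabel("% of experts")
ax1.legend()

# The lagging indicator: bullishness vs the previous week's performance.
ax2.scatter(prev, df["pct_bullish"], c="blue")
ax2.set_xlabel("GLD return, previous week (%)")
ax2.set_ylabel("% bullish")

plt.tight_layout()
plt.show()
```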

Sunday, April 13, 2014

Gold stocks down more than gold

It's a bit surprising. Gold isn't doing all that badly, but the stocks are really down. In the past, this has usually been followed by a hammering of the gold price. Falling gold stocks have historically been an accurate indicator of a falling gold price, and will continue to be right up until the time they aren't. Is that time this time? I'd bet against it, because I haven't sold all mine yet. I'll let you know the moment it happens.

I think that the market is worried that gold could perform like it did last April, and there aren't enough other buyers to take up the slack.

Monday, April 7, 2014

Reconstructing phase space

At various times on this blog, I have reconstructed phase space portraits to study the dynamics of complex systems. But I still haven't conveyed why I think the method is as powerful as it is.

We find ourselves studying a complex system, but we know very little about it except for some observations. How we proceed depends to some extent on the model we have in our minds of how the system might work. For this analysis, I am assuming that there is some series of differential equations that will describe the evolution of the system through time. I will also assume that we have no idea what those equations are.

The dynamics of the system may be represented by a series of vectors in a two- or higher-dimensional space, an example of which is depicted below.


We cannot perceive the vectors directly--all we can do is observe a trajectory of the system, and try to infer the pattern of vectors that gives rise to it. In the cylinder above, I have drawn a couple of trajectories, each representing the forward time evolution of the system from an initial condition (the red dots). Notice that although the two dots are close together, their trajectories rapidly diverge.

The system depicted above represents the phase space for a damped pendulum, but it could just as easily represent the phase space for the price of a particular gold stock, where one axis represents the share price, a second axis represents the market consensus forecast of the future price of gold, and the third axis represents the market consensus of future costs. The efficient-market hypothesis tells us that the information plotted on the other axes is already embedded in the price. If so, then we should be able to use the price alone to reconstruct the dynamics of the price-gold price-cost system. The method used to reconstruct the phase space is a process of unfolding this extra geometric information from a single time series.
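As an illustration, here is a minimal sketch of a damped pendulum's vector field (not the figure above; the damping coefficient and g/L value are arbitrary choices for display):

```python
# A sketch of the vector field for a damped pendulum:
#   theta'' = -c * theta' - (g/L) * sin(theta)
# The damping c and the ratio g/L are illustrative values only.
import numpy as np
import matplotlib.pyplot as plt

theta, omega = np.meshgrid(np.linspace(-2 * np.pi, 2 * np.pi, 25),
                           np.linspace(-3, 3, 25))
c, gl = 0.25, 1.0                          # damping, g/L
dtheta = omega                             # d(theta)/dt
domega = -c * omega - gl * np.sin(theta)   # d(omega)/dt

plt.quiver(theta, omega, dtheta, domega, angles="xy")
plt.xlabel("angle (theta)")
plt.ylabel("angular velocity (omega)")
plt.title("Vector field of a damped pendulum")
plt.show()
```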

I'll illustrate the approach using a well-known example--the Lorenz attractor. It is challenging to discuss three-dimensional objects in a 2-d medium. To overcome this somewhat, I've made a few screenshots of different projections of this attractor (you can play with it here - the page is in French, but scroll down and you can play with the butterfly). 


Different two-dimensional projections of the same three-dimensional object

Another nice place to play with this function is here. At this site, you can grab and rotate the figure as it is being plotted. You will probably want to modify the number of points to 2000 (seems to be the maximum).

Hopefully by this point, you can see why this function is sometimes described as being in the shape of a butterfly's wings.

We can build our own plots of the Lorenz function. 


This is one that took about two minutes in Excel. Although I've only plotted x vs y, note that z must still be calculated at each time step, as it feeds into all the subsequent calculations of x and y.
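The same two-minute exercise in Python might look like the sketch below: crude Euler steps through the Lorenz equations with the classic parameters (sigma = 10, rho = 28, beta = 8/3); the step size and initial condition are arbitrary.

```python
# Crude Euler integration of the Lorenz equations. Note that z is
# stepped even though it isn't plotted--it feeds back into the y
# update, which in turn feeds x.
import numpy as np
import matplotlib.pyplot as plt

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
dt, n = 0.01, 5000
xs, ys, zs = np.empty(n), np.empty(n), np.empty(n)
xs[0], ys[0], zs[0] = 1.0, 1.0, 1.0

for i in range(n - 1):
    xs[i + 1] = xs[i] + dt * sigma * (ys[i] - xs[i])
    ys[i + 1] = ys[i] + dt * (xs[i] * (rho - zs[i]) - ys[i])
    zs[i + 1] = zs[i] + dt * (xs[i] * ys[i] - beta * zs[i])

plt.plot(xs, ys, lw=0.3)
plt.xlabel("x")
plt.ylabel("y")
plt.show()
```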


Imagine that this function represents the state space of the price of a gold stock (on one axis), with the market's consensus estimates for the future price of gold and the future costs to the company on the other two axes. We are in the position of trying to reconstruct the dynamics depicted in the above diagrams. However, the only data we have is price. We don't know the future price of gold. We don't even know the market consensus for the future price of gold--we can only sample a limited number of blogs, and they all seem to say that gold will soon reach $50,000 per ounce (or $200 per ounce, if you prefer).

How do we reconstruct the essential dynamics of the system from a single time series? Suppose that instead of having all the values for x, y, and z, we only had the values for x. The methodology is described starting here. We can plot our time series against either an estimate of its time derivative (time derivative method), or against a lagged copy of itself (the time delay method). 

The time delay method is generally preferred because the errors tend to be smaller (although for financial time series, the errors are so small that perhaps they won't matter). The choice of a lag will influence the usefulness of the plot. If the lag is zero, then the plot will consist of a single diagonal line. If the lag is very small, the plot will only deviate slightly from a diagonal line.
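Both reconstructions take only a few lines. Here is a minimal sketch, continuing with the xs array and dt from the Lorenz sketch above; the lag of 17 samples is an arbitrary choice for display.

```python
# Reconstruction from x alone. Left: the time-derivative method
# (x vs a finite-difference estimate of dx/dt). Right: the time-delay
# method (x vs a lagged copy of itself). Try lag = 1, 17, and 100 to
# see the effects described in the text.
import numpy as np
import matplotlib.pyplot as plt

lag = 17                     # delay, in samples
dxdt = np.gradient(xs, dt)   # time-derivative method

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(xs, dxdt, lw=0.3)
ax1.set_xlabel("x(t)")
ax1.set_ylabel("dx/dt")
ax2.plot(xs[:-lag], xs[lag:], lw=0.3)
ax2.set_xlabel("x(t)")
ax2.set_ylabel(f"x(t + {lag} samples)")
plt.show()
```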


Increasing the lag gives us a more useful plot. 


This plot is very similar to the x vs y plot shown above and captures the essential dynamics of the 3-dimensional graphs shown higher up. We would state that this reconstructed state space is topologically equivalent to the x vs y plot above. 

This exercise is the equivalent of taking the price data alone and reconstructing the phase space that shows the relationship between the price and the future expectation of the gold price (or perhaps the estimate of future costs). This is possible because all of the information in the expanded system is embedded within each individual time series. Consequently, the series of price observations contains the information needed to reconstruct the geometry of the system, even though we don't necessarily know what the additional axis (or axes) might be.

As an aside, if you try this using y (from the Lorenz function), you will get much the same result. However if you only use z, it doesn't seem to work. As an exercise, dear reader, see if you can tell me why that should be the case.

Increasing the lag further causes distortion.


If the lag is too large, the two coordinates become nearly independent and the graph loses its structure.



Neat! But not so easy to interpret. The choice of a lag is an important one, as it determines how well the geometry of the system is represented in your reconstruction. There are formal prescriptions for selecting a lag, with Abarbanel strongly favouring the first minimum of the average mutual information over the first minimum of the autocorrelation function. However, finding the first minimum of the autocorrelation is a lot easier.
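For what it's worth, the easier prescription is only a few lines. A minimal sketch, again using the xs array from the Lorenz sketch above:

```python
# The first minimum of the autocorrelation function as a lag estimate.
import numpy as np

x = xs - xs.mean()
acf = np.correlate(x, x, mode="full")[len(x) - 1:] / (x.var() * len(x))

# Scan for the first lag at which the ACF turns back upward.
first_min = next(k for k in range(1, len(acf) - 1)
                 if acf[k] < acf[k - 1] and acf[k] <= acf[k + 1])
print("first minimum of the ACF at a lag of", first_min, "samples")
```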

Wednesday, April 2, 2014

From the small to the big: earthquakes, avalanches, and high-frequency trading

I've been talking about scale invariance a lot lately. I became interested in the topic quite a few years ago in the context of geological phenomena like earthquakes and avalanches. The Gutenberg-Richter law describing the size-frequency relationship for earthquakes was one of the first natural laws based on scale invariance, but interest in the topic really picked up with the Bak et al. paper in 1987 (pdf - may only be a temporary link).

The cause of this relationship is still foggy, as is the physical mechanism linking small and large earthquakes. The best proposed explanation is that a scale-invariant distribution of events allows the most efficient flow of energy (and information) through the system (though it isn't clear why that should be so).

So back in the early '90s, I was estimating recurrence intervals for certain hazardous events, and I started trying to work out a methodology for detecting scale invariance in the geologic record. Using the Gutenberg-Richter law, you can estimate the likelihood of a large earthquake in an area from the number of small earthquakes. There were interesting implications for areas where the recurrence interval of large earthquakes is longer than the local recorded history (as in much of Canada). At the time, the USGS produced seismic hazard maps showing significant earthquake risk in zones that mysteriously ended right at the Canadian border.
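The arithmetic is straightforward. Here is an illustrative sketch of the Gutenberg-Richter extrapolation, log10 N(>=M) = a - b*M; the catalogue counts below are invented for the example, not real data.

```python
# Illustrative Gutenberg-Richter extrapolation: fit a and b to the
# small quakes, then extrapolate the annual rate of a magnitude-7 event.
# The counts are hypothetical.
import numpy as np

mags = np.array([2.0, 3.0, 4.0, 5.0])        # magnitude thresholds
counts = np.array([800.0, 90.0, 11.0, 1.2])  # events/year with M >= threshold

slope, a = np.polyfit(mags, np.log10(counts), 1)
b = -slope                                    # G-R convention: the slope is -b
rate_m7 = 10 ** (a - b * 7.0)
print(f"b-value = {b:.2f}; expected rate of M>=7 = {rate_m7:.4f}/yr "
      f"(recurrence interval ~ {1 / rate_m7:.0f} years)")
```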

One of my classmates in my undergrad days (we're back in the 80s, now) studied the correlation between microquakes and fluid injection at oil extraction operations in southwestern Ontario. The oil companies were surprisingly cooperative until they understood the point of the research, after which they started to withhold data.

And here is the mystery. The principle of scale invariance in earthquakes suggests that increasing the number of small earthquakes should increase the number of large earthquakes, at least in the short term. Yet our understanding of earthquake dynamics tells us that lubricating the fault should allow stresses to be relieved through the small earthquakes, which should reduce the chance of a large quake in the long run. (This idea has been proposed at various times over the past fifty years, but for obvious reasons it has never been deliberately pursued.)

By the early 2000s, other geophysicists (notably Didier Sornette, but there were others) had moved a portion of their data-processing expertise into studying econometric time series. I made this move later, as I gradually came to appreciate the key problem with developing quantitative techniques on suspect data. First, the measurements themselves are inaccurate. More importantly, our estimate of the timing of each observation is just that--an estimate. Most quantitative methods assume that the observations are evenly spaced in time; failing that, they assume you at least know the timing of each observation. The consequences of timing errors are severe, and frequently underestimated. The point is that it is difficult to develop excellent quantitative methods when the data are terrible.

The big advantage of working with economic time series--pricing data in particular--is the elimination of observational error. When a transaction occurs, there is no doubt about either the price or the time--right down to the millisecond.

I started looking at market macrostructure--because (several years ago) nothing interesting ever happened on a scale of less than about an hour. Until just the past few years. Suddenly, strange, rich, unusual behaviours began to occur in individual stock prices, and even indices, on the millisecond scale. I didn't know what was causing it--but it sure was interesting.


Three seconds on the tilt-a-whirl.

This was the signature of the onset of HFT. I was initially interested in it for entirely different reasons than most of you. After Crutchfield's (1994) paper (pdf) on emergence, I had been pondering how to recognize a fundamental change in a complex system. Again, my interest was in the earth system as a whole, and in how to recognize whether or not new observations were pointing to a fundamental change in its mode of operation.

Given our understanding that the number of large avalanches is positively correlated with the number of small avalanches, it seems pretty clear that (as Nanex and Zerohedge have been saying) the damaged market microstructure is mirrored in the increasing number of flash crashes since Reg NMS. Unfortunately, our murky understanding of how the microstructure causes the macrostructural changes can be used by the regulatory authorities to avoid investigation. They can't see a smoking gun.

We would normally expect the micro-crashes to eventually relieve imbalances in the system, improving its long-run stability. (Perhaps this is how the SEC justifies the practice.) But unlike earthquakes and avalanches, these uncountably many small crashes are not reducing the imbalances. One reason is that the cause of the imbalances is separate from HFT--the dollars keep being shoveled to the top of the mountain as fast as, if not faster than, HFT brings them cascading down. Another reason is that the trades (mostly) get unwound--so the exchanges push most of the snow back to the mountaintop after the avalanche.

HFT certainly benefits unfairly from the system, but isn't responsible for it. If anything, it is a symptom of corruption--but the cause of the corruption is elsewhere.

Accordingly, my modest proposal for dealing with HFT is this--nothing. Don't bust trades--let them stand. I'd be curious to see the response of the various Ivy-League endowment funds and pension funds when they suffer brutal, near-instantaneous, multi-billion-dollar losses. At a guess, I would probably hear the screaming up here. How would real companies, producing real products, react to a sudden monkey-hammering of their stock price, especially if it triggered debt covenants? Maybe they would all exit the market en masse. It might even force a real change.