
University of Ljubljana, Faculty of Mathematics and Physics, Department of Physics
Borut Polajnar
Seminar
Philosophy of Financial Markets Behavior
Advisor: prof. dr. Rudolf Podgornik

Ljubljana, March 2008

Abstract

This seminar paper deals with the historical development of the way we view the hustle and bustle of events that constitute the system of a financial market. The story starts in the year 1900, when the then young mathematician Louis Bachelier used the mathematics of what is now known as Brownian motion to describe the movements of prices of assets traded on the Paris Bourse. His work, after being overlooked for more than half a century, was adopted by some of the prominent economists of the 20th century; of these, Harry M. Markowitz and William F. Sharpe and their contributions to the Modern theory of finance (MTF) will be dealt with specifically. The concepts developed became part of every financial adviser's or analyst's toolkit, both as a frame of thinking and as software support, but came under major scrutiny after some catastrophic market breaks occurring throughout the otherwise very prosperous 90's. Better late than never, would argue Benoit B. Mandelbrot, the man who already knew in the 60's the deficits of using independent Gaussian variables as the platform on which to build a good quantitative description of financial markets. Mandelbrot has certainly revolutionized the world of science with his work, the field of economics being one of the most subtle of his interests that set him on the path of his greatest discovery, that is fractal geometry.
1 Introduction - the dichotomy of the world we live in
1.1 Deterministic versus probabilistic view
The book Essai philosophique sur les probabilités by Pierre-Simon Laplace, although discussing probability issues, starts with the following words [1]:

We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.

Einstein, being famously succinct in his sayings, expressed a similar notion in his renowned words [2]: God does not play dice.

Both men, although not avoiding probability theory or its use as a tool for solving problems, still held determinism in higher esteem than the probabilistic view of nature. One could even say they believed that a probabilistic explanation is not understanding the problem, but merely describing it. Mandelbrot, on the other side, argues that "being able to model reality is a form of understanding" [3]. His firm belief is that the world of finance cannot be tamed by some deterministic rules that would describe its past and its future, both at the same time, through its present state1. Among the numerous factors that influence market behavior he pinpoints anticipation as one that could never fit under the deterministic umbrella of cause-and-effect [4]:

And there is the most confounding factor of all, anticipation. A stock price rises not because of good news from the company, but because the brightening outlook for the stock means investors anticipate it will rise further, and so they buy. Anticipation is a feature unique to economics. It is psychology, individual and mass, even harder to fathom than the paradoxes of quantum mechanics. Anticipation is the stuff of dreams and vapors.

Still, his belief in a form of 'probabilistic understanding' does not allow for just any kind of model that comes close to describing reality. He can be labeled a reductionist, in that he demands a probabilistic model be as parsimonious as possible. In his view, there is no knowledge in elaborate algorithmic structures that grow out of patching the noted discrepancies between models and reality. Rather, he demands generalization to the extent that all the characteristics of observed reality follow from a self-contained, albeit random, generator.

1 One can discuss determinism of financial markets only ex post facto, whilst ex ante the description probability theory offers proves to be the only manageable one.
1.2 ‘Mild’ versus ‘wild’ view
Historically speaking, one of the most important implementations of probabilistic description was in the handling of error terms in measurements. It was inferred that aggregated errors should preserve the distribution of each aggregating element. It was for this stability argument, and also for having a well-defined and quickly convergent mean and variance (and all other moments, as a matter of fact), that the Gaussian or Normal distribution was chosen to serve as the representative distribution of error terms. It was also noted that the Gaussian serves as a limit for many other distributions under aggregation, the fact known today as the Central limit theorem (CLT).

The implications of the CLT have a great influence on our life, in that it is commonly believed that every process will sooner or later result in an aggregation that is Gaussian distributed and therefore fully defined by two simple, self-explanatory parameters, the mean µ and the standard deviation σ. Such a belief has as a direct corollary that the world is perceived as being made of tiny pieces. After all, practically every element (95%) falls quite safely within 2σ of the mean. The Gaussian distribution does not leave much space for extraordinary exceptions, therefore all the constituent pieces fall together to form a greater picture, with not much harm done had any of the aggregating elements been taken out or added to the bunch.

But what if the creator of the world were not a coin flipper whose cumulative wins or losses sooner or later fit the familiar bell-shaped curve with well-defined averages, but rather a blindfolded archer, who chose the size of the building blocks of the world by the amount he missed the designated target on an infinite wall? For the fictitious example of such an archer standing 1 unit from the wall that he randomly shoots at, the distribution of the miss lengths from the center can be calculated as

dP/dx = (dP/dϕ)|dϕ/dx| = 1/[π(1 + tan²ϕ)] = 1/[π(1 + x²)], (1)

since x = tanϕ. The Cauchy probability density, defined by (1), has so-called 'fat tails', since dP/dx ∼ x⁻² as x → ∞. Following a power-law for large x, as opposed to exponential Gaussian decay, has as a direct corollary in this particular case an infinite variance. An aggregation of Cauchy variables2 could not and would not be perceived as a composition of timid grains, but of elements varying rapidly in magnitude, some of them having size comparable to the size of the aggregation as a whole. Looking at a composite structure made of 'Cauchy bricks' could therefore be described as revealing rough, discontinuous, 'wild' fluctuations, as opposed to the timid, continuous, 'mild' fluctuations in the case of 'Gaussian bricks'.
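The blindfolded archer is easy to simulate. The sketch below (with illustrative sample sizes of my own choosing) draws Cauchy misses as x = tanϕ for a uniform angle ϕ and shows that the sample median stays put while running means refuse to settle down:

```python
import math
import random

def archer_miss(rng):
    """One shot of the blindfolded archer: a uniform angle in (-pi/2, pi/2)
    maps to a miss distance x = tan(phi), i.e. a standard Cauchy draw."""
    phi = rng.uniform(-math.pi / 2, math.pi / 2)
    return math.tan(phi)

rng = random.Random(42)
samples = [archer_miss(rng) for _ in range(100_000)]

# The median is a well-behaved location estimate (theoretical value 0) ...
median = sorted(samples)[len(samples) // 2]

# ... but running means wander: a single huge miss can dominate the
# average, because the Cauchy mean and variance are undefined.
running_means, total = [], 0.0
for n, x in enumerate(samples, start=1):
    total += x
    if n % 20_000 == 0:
        running_means.append(total / n)

print(round(median, 3), [round(m, 2) for m in running_means])
```

With a Gaussian generator the running means would converge; here they jump whenever an extreme miss arrives, which is exactly the erratic behavior shown in figure 1.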
Figure 1: Sample averages (1/N) Σ_{i=1}^{N} X_i^q for q = 1 (lower graphs) and q = 2 (upper graphs) for a Cauchy-like probability density (Pr{X > x} ∼ x⁻¹ for large values of x), with varying sample size N, for three distinct sample groups. A kind of erratic behavior is easily perceived that is both sample-size and sample dependent. Averages are clearly not easily calculated when dealing with wild variability. Graphics reproduced from [3].
The mild and the wild are terms both chosen by Benoit Mandelbrot to describe what he calls states of randomness, with an intentional allusion to the states of matter. Mild reflects the solid state, having low energies and a well-defined structure and volume, whilst wild reflects the gaseous state, with high energies and non-existent structure. There is a third state to reflect the liquid state, which Mandelbrot calls 'slow'. Further division of states may be obtained by various more specialized criteria, but the subject is beyond the scope of this seminar. The list of all states of randomness can be found in [3] (p. 140-141), with a more detailed explanation in the accompanying chapter.

It was long believed that the principal way in which the world was built was mild. The famous economist Alfred Marshall asserted such a belief by saying: "natura non facit saltum" (the quotation can be found as an illustration of traditional views on nature both in [3] and [4]), which translated means nature does not make jumps. Nature is often assumed to proceed in a smooth, continuous fashion. However, as will be shown in subsequent chapters, the 'normality' of the Gaussian world comes under severe questioning where financial markets are concerned, since their behavior is not tame, but rough and wild. But let us first describe how the classical view of the financial world, which incorporates mild randomness as its key part, came to exist and how it functions in principle.

2 It should be noted that the Cauchy distribution, like the Gaussian, is a stable one. They are both special instances of the general class of L-stable distributions to be introduced in section 3.1.1.
2 The classical view
2.1 What is the nature of financial markets?
Louis Bachelier in his doctoral thesis had no doubt as to whether or not the description of the market should be a probabilistic one. After listing some of the probabilistic pros, he firmly stated [5]:

Although the market does not predict the movements, it does consider them as being more or less likely, and this probability can be evaluated mathematically.

For the successive movements of price Bachelier posited a Gaussian distribution, also assuming that such movements are independent. The choice was motivated by the fact that, apart from stability under aggregation, he demanded the mathematical expectation of the speculator to be zero. Such an assertion is a description of what is now called the Efficient market hypothesis. According to [6] it was not properly mathematically formulated until 1965, when Paul Samuelson stated through the equation

E(Z_{t+1} | Z_t, Z_{t−1}, ..., Z_0) = Z_t, (2)

that the expected value of the future price Z_{t+1}, knowing all the previous prices Z_t, Z_{t−1}, ..., Z_0, is the current price Z_t itself. The price variation process is therefore a martingale3, with the Gaussian stable process, or more commonly Brownian motion, being the simplest stable process of that form. Bachelier perceived the connection between price change moves and the fundamental solution of the heat transfer equation, the most important result of this parallel being the relation

σ ∼ √t. (3)

Therefore, as might come as some surprise to believers in the primacy of physics over all other sciences, he made the connection between Brownian motion and its continuous time limit in the form of a diffusion or heat transfer process 5 years before Einstein independently did the same in his famous paper on molecular motion. Last but not least, Bachelier can also be credited with being self-critical enough to notice that some jumps of price simply did not fit under the reach of the bell-shaped Gaussian curve. Still, he dismissed such jumps, treating them as 'contaminators' or 'outliers'.

3 More about martingales can be found, for instance, in [7].
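Relation (3) can be checked by a small Monte Carlo experiment: for a walk with independent Gaussian increments, quadrupling the horizon should roughly double the dispersion. A sketch with illustrative parameters:

```python
import random
import statistics

def terminal_std(steps, n_paths, rng):
    """Standard deviation of Z(t) - Z(0) after `steps` i.i.d. unit
    Gaussian increments, estimated over `n_paths` independent walks."""
    finals = []
    for _ in range(n_paths):
        z = 0.0
        for _ in range(steps):
            z += rng.gauss(0.0, 1.0)
        finals.append(z)
    return statistics.pstdev(finals)

rng = random.Random(7)
s100 = terminal_std(100, 2000, rng)   # expect roughly sqrt(100) = 10
s400 = terminal_std(400, 2000, rng)   # expect roughly sqrt(400) = 20

print(round(s100, 1), round(s400, 1), round(s400 / s100, 2))
```

The ratio of the two estimates lands near 2, the √4 predicted by (3).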
2.2 What is risk?
When trying to determine what efficient investing would be, Harry Markowitz came up with the idea that a rational investor's utility function U4 should only depend on two parameters, namely the expected return rate E(R) and the risk σ. Marking risk as σ is no coincidence, since Markowitz, like Bachelier before him, adopted the idea of Gaussian movement of prices, predicating standard deviation a good measure of risk. Such thinking led him to the formulation of an efficient portfolio as one having [8]:

• maximal expected return rate at a given risk level, or equivalently
• minimal level of risk at a given expected return rate.

With the assumption of mild Gaussian price movements, the expected return rate E(R_P) and risk σ_P of a portfolio P, composed of individual assets or sub-portfolios i, with weights w_i and mutual correlation factors ρ_ij, are [9]:

E(R_P) = Σ_i w_i E(R_i), (4)

σ_P² = Σ_i w_i² σ_i² + Σ_i Σ_{j≠i} w_i w_j σ_i σ_j ρ_ij. (5)
4 The utility function is a measure of an individual's preferences. A rational investor therefore strives to maximize his utility function.
One can easily see that, given assets with similar expected returns, it is preferable to combine ones that are uncorrelated or even anti-correlated in order to minimize risk, a concept well described by the single word diversification. In the general case of N available assets or sub-portfolios, the 2^N possible combinations fill up a portion of the return-risk space whose boundary is a hyperbola-like curve, the upper half of which is called the Efficient portfolio frontier or Markowitz frontier. On it lie the portfolios satisfying the stated condition of efficiency. Markowitz's theories, extended by the works of others, including Sharpe, whose contribution is discussed in the following section, are now collectively known as Modern portfolio theory (MPT).
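Equations (4) and (5) translate directly into code. The sketch below (with hypothetical two-asset numbers of my own) shows diversification at work: identical assets combined at correlation ρ = 1, 0 and −1 give portfolio risks of 0.2, about 0.14, and 0:

```python
import math

def portfolio_stats(weights, exp_returns, sigmas, corr):
    """Expected return (eq. 4) and risk (eq. 5) of a portfolio;
    corr is the full correlation matrix, with ones on the diagonal."""
    n = len(weights)
    e_rp = sum(w * r for w, r in zip(weights, exp_returns))
    var = sum(weights[i] * weights[j] * sigmas[i] * sigmas[j] * corr[i][j]
              for i in range(n) for j in range(n))
    return e_rp, math.sqrt(max(var, 0.0))

# Two hypothetical assets with identical expected return and risk.
w, er, s = [0.5, 0.5], [0.08, 0.08], [0.20, 0.20]

for rho in (1.0, 0.0, -1.0):
    e_rp, sigma_p = portfolio_stats(w, er, s, [[1.0, rho], [rho, 1.0]])
    print(rho, e_rp, round(sigma_p, 4))
```

The double sum with the diagonal included is exactly the Σ w_i²σ_i² term of eq. (5) plus the i ≠ j cross terms, since ρ_ii = 1.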
Figure 2: The risk-return space filled with sample portfolios. The ones lying on the upper half of the hyperbola-like curve that borders the space the sample portfolios fill are the so-called efficient portfolios. The tangency portfolio marked by the darker spot is more commonly known as the market portfolio and is characterized by the largest Sharpe measure of all efficient portfolios. The Sharpe measure is exactly the slope of the Capital allocation line (CAL). Further details are explained in section 2.3. Graphics reproduced from [9].
2.3 What is the value of an asset?
As Markowitz's doctoral student, William Sharpe was confronted with the task of simplifying the calculations of MPT. Apart from the initial demand to calculate N(N−1)/2 correlation factors, 2N iterations needed to be done to perform the full calculation for N elements in the form of assets or sub-portfolios. While pondering on it, Sharpe realized that among efficient portfolios, the one with the maximum reward-to-risk ratio could be singled out. In the context of MPT the reward-to-risk ratio is called the Sharpe measure and is defined as [10]:

S = (E(R_P) − R_f) / σ_P, (6)

where R_f is the risk-free rate, which can be emulated by the rate of short-term government-issued bonds. The portfolio with the maximal Sharpe measure came to be known as the market portfolio, since the market is, by the implications of the Efficient market hypothesis, the optimal trader that cannot be beaten at its craft. Such logic marked the dawn of stock-index funds that make it possible to invest in shares in the same proportions as the real market does. But while reducing the task of creating an efficient portfolio to simply letting the market itself do the math, Sharpe also devised a clever way to value an individual asset. The market being the almighty arbiter, the optimal risk premium E(R_i) − R_f of an asset should equal that of the market itself, multiplied by the so-called systematic or market risk factor of investment i on a given market m, usually labeled β_im. To summarize [11]:

E(R_i) − R_f = β_im(E(R_m) − R_f), (7)

where β_im in turn is defined as [11]:

β_im = Cov(R_i, R_m) / Var(R_m) = ρ_im σ_i / σ_m. (8)

After estimating E(R_i), the net present value (NPV)5 of an asset can readily be calculated, where future cash flows are determined by some sort of fundamental analysis. The obtained value should be the optimal price of the asset, the most appealing feature being the classification of the diverse asset values comprising the real market by a single variable β. Sharpe's asset valuing model came to be known as the Capital asset pricing model (CAPM), which is itself an integral part of MPT.
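A minimal numerical companion to equations (6) and (8) and to the NPV formula of footnote 5; the return series and discount figures are made up for illustration, not taken from the paper:

```python
def sharpe_measure(e_rp, rf, sigma_p):
    """Sharpe measure, eq. (6)."""
    return (e_rp - rf) / sigma_p

def beta(asset, market):
    """Systematic risk factor, eq. (8): Cov(Ri, Rm) / Var(Rm)."""
    n = len(asset)
    ma, mm = sum(asset) / n, sum(market) / n
    cov = sum((a - ma) * (b - mm) for a, b in zip(asset, market)) / (n - 1)
    var = sum((b - mm) ** 2 for b in market) / (n - 1)
    return cov / var

def npv(cash_flows, rate):
    """Net present value as in footnote 5: sum of CF_i / (1 + E(R))**t_i,
    with cash_flows given as (t_i, CF_i) pairs."""
    return sum(cf / (1.0 + rate) ** t for t, cf in cash_flows)

market = [0.01, 0.02, -0.01, 0.03]
asset = [2.0 * r for r in market]              # moves twice as hard as the market

print(round(beta(asset, market), 2))           # 2.0 by construction
print(round(sharpe_measure(0.08, 0.03, 0.20), 2))
print(round(npv([(1, 100.0), (2, 100.0), (3, 100.0)], 0.10), 2))
```

An asset that amplifies every market move by a factor of two has β = 2 and, by CAPM (7), should command twice the market's risk premium.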
2.4 The misfits of classical view
Considering price charts alone, Brownian motion passes as a respectable model (see figure 3). However, when price moves are considered, the white noise induced by Brownian motion clearly stands out as a very unrealistic option (see figure 4). Being direct heirs of Bachelier's model, Markowitz's and Sharpe's theories inherited its bad assumptions of mild Gaussian variation, especially critical being the presumed continuity of price change. Out of this assumption grew the very intuitive but flawed measure of risk in the form of the standard deviation of price changes σ, and the similar notion of variance against the market in the form of Sharpe's β. Both have the misfortune of being critically unstable in the case of wild variability (see figure 1). Further shaky assumptions are those connected to the rationality of investors, which, taken to the extreme, renders them all equal, where the real-world situation proves to be quite the opposite. People can be quite 'irrational', the 'friction' between the many behavioral groups they form being one of the main generators of wild variability.

5 The NPV formula emulates the notion of the 'time value of money'. NPV is calculated as the sum of future cash flows, each of them discounted by an interest factor of the form (1 + E(R_i))^{t_i} [12], where E(R_i) is the expected rate of return (or some form of interest) relevant at time i, and t_i is the time from the present to i in fraction-of-year units. In the case of a one-time present evaluation of E(R), the value of the asset can be determined as Σ_i CF_i/(1 + E(R))^{t_i}.
Figure 3: The so-called fever charts of price. It is hard to distinguish between the two model charts and the two real ones. From the top the charts are: IBM stock, Brownian motion, USD/DEM exchange rate and the multifractal model of price change. Graphics reproduced from [4].
3 New deal
3.1 What is the nature of financial markets?
Like Louis Bachelier before him, Benoit Mandelbrot is himself a true believer in the overwhelming power of the probabilistic approach. Yet he was the one to take a crucial step away from the Gaussian view of the world, becoming fascinated with power-law distributions instead. These are often referred to as scaling, because of their invariance under conditioning W, defined as6

P(u) = Pr{U > u}  −W→  P_W(u) = Pr{U > u | U > w} = P(u)/P(w). (9)

Now, for the power-law probability distribution P(u) = (u/ũ)^{−α}, the conditional probability distribution is, according to (9):

P_W(u) = (u/ũ)^{−α} / (w/ũ)^{−α} = (u/w)^{−α}, (10)

thus preserving the functional form, except for the change in scale.

Figure 4: The charts of relative price changes. Here the inadequacy of the Brownian motion model quickly becomes apparent. Its white noise signal lacks three main characteristics of the real charts: discontinuity, long-term dependence and clustering of volatility. The multifractal model, however, remains indistinguishable from the real charts. As in the previous figure, from the top the charts are: IBM stock, Brownian motion, USD/DEM exchange rate and the multifractal model of price change. Graphics reproduced from [4].

6 Result (9) can easily be understood. Consider the probability density p(x). Knowing U > w does not affect the relative probability of any of the possible values to occur, it only narrows the interval on which the probability is spread. To calculate the conditioned probability density p_W(x), one only multiplies (rescales) the unconditioned probability density p(x) with a constant A so that ∫_w^∞ A p(x) dx = 1. Thus p_W(x) = p(x)/P(w).
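Both the invariance (10) and the conditional moment formula (12) derived below can be checked by simulation. The sketch draws from a power-law via inverse-transform sampling and compares the conditioned sample mean with the prediction α/(α − q) · w^q; the parameters α = 3, w = 3, q = 1 are illustrative choices of mine:

```python
import random

alpha, w, q = 3.0, 3.0, 1.0
rng = random.Random(1)

# Inverse-transform sampling from P(u) = u**(-alpha) for u >= 1:
# if R is uniform on (0, 1), then R**(-1/alpha) has the desired tail.
draws = [rng.random() ** (-1.0 / alpha) for _ in range(1_000_000)]

# Condition on U > w and compare the sample moment with eq. (12).
survivors = [u for u in draws if u > w]
sample_moment = sum(u ** q for u in survivors) / len(survivors)
predicted = alpha / (alpha - q) * w ** q   # = 1.5 * 3 = 4.5 here

# Scaling form (10): Pr{U > 2w | U > w} should be 2**(-alpha) = 1/8.
tail_ratio = sum(1 for u in survivors if u > 2 * w) / len(survivors)

print(round(sample_moment, 2), predicted, round(tail_ratio, 3))
```

The conditioned sample behaves like a fresh power-law sample rescaled to start at w, which is precisely the scale-invariance the text describes.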
There is another typical infinite-range distribution that also preserves its functional form under conditioning, namely the exponential distribution P(u) = exp(−λ(u − u_0)):

P_W(u) = e^{−λ(u−u_0)} / e^{−λ(w−u_0)} = e^{−λ(u−w)}, (11)

but the change is in location rather than scale. To illustrate the point, one can observe that the moments of a power-law distribution become dependent on the conditioning W7:

E(U_W^q) = −∫_w^∞ u^q dP_W(u) = [α/(α − q)] w^q,  for q < α, (12)

whilst in the exponential case they clearly stay the same, since a translation cannot by itself change the area under the functional curve. The term scaling distribution is therefore used exclusively to refer to power-law distributions.

Of course, the fact that the exponential does not change under conditioning comes as no surprise to a physicist, to whom it is perfectly clear that radioactive decay at a certain moment in time has nothing to do with the length of life of a particular nucleus. But what about when it comes to people, and the social structures and systems we have built? It turns out that power-laws are everywhere. The first of them was empirically discovered by the Italian economist Vilfredo Pareto, who in 1909 observed that the wealth distribution had a power-law tail, with Pareto's estimate for α being around 3/2.8 Another striking thing was that the same α could be obtained for various countries and historical eras. Pareto himself was so astounded by his discovery that he claimed this intriguing fact to be the consequence of "something (some fundamental law) in the nature of men" [4].

Another man believing in the power of power-laws was George Kingsley Zipf.9 In his book Human Behavior and the Principle of Least Effort he accounted for almost every social phenomenon he could think of with a power-law. Perhaps one of the most fascinating is the scaling of word frequencies in a given text or speech10. Paying his attention to James Joyce's Ulysses specifically, he estimated the α exponent to be around 1. Unfortunately, the universality of that particular law is not as far reaching as in Pareto's case, since people as eloquent as James Joyce are not what one would call a 'representative sample'.

It should also be noted that in the title of Zipf's book the notion of 'the principle of least effort' appears. By using it, Zipf was referring to the fact that scaling distributions fit well the folklore and common-wisdom type of 'knowledge', such as that luck or wealth produces even more luck and wealth. Suppose now not only that the wealth distribution is scaling, but also that of the amount of wealth accumulated in one's lifetime, which is not so different a notion, after all. Then according to (12), for α = 2, having accumulated w of wealth, one is expected to accumulate at least as much again before his or her life's work is brought to an irreversible halt. It being a 'fundamental law' of human nature that the path of least resistance is the preferable one, Zipf assumed that our nature in some way induces the distributions that enable one to follow such a path. Once a certain amount of wealth is accumulated, not much more needs to be done in the world of scaling, since the wealth is expected to multiply 'by itself'. Zipf was also convinced that power-laws were something intrinsic to human nature and therefore the social sciences, but as it later turned out they are quite common in our physical world as well. In physics a whole new field dealing with critical phenomena emerged, in which scaling found its part to play.

7 Since P(u) = ∫_u^∞ p(x) dx, the probability density can be obtained by differentiating the probability distribution, that is p(u) = −dP(u)/du.
8 References on Pareto can be found in [3], [4] and [6].
9 References on Zipf can be found in [3] and [4].
10 Zipf granted every word a rank according to its frequency (the most frequent word got rank 1, the second most frequent rank 2, etc.) and noted that the frequency is a power-law function of the granted rank.
3.1.1 The meaning of cotton - Noah effect
So, what about the changes of price? Certainly the Gaussian does not fit, as Benoit Mandelbrot found out when exploring the case of historical cotton price moves in the 60's. Adding to the population of, say, sample monthly data one price datum after another, the sample volatility11 kept changing violently, with constant occurrence of 'polluting events' and 'outliers', rendering it impossible to conclude that price changes followed a 'simple' Brownian motion. However, exploring the distribution of the difference of logarithms of price12 L(t, T), defined as L(t, T) = log Z(t + T) − log Z(t), Z(t) being the spot price of cotton at time t, for different time-spans T, it turned out that the relation

log P^±(l) = −α log(±l) (13)

holds, where P^±(l) are Pr{L(t, T) > l} for positive values of l and Pr{L(t, T) < l} for negative values of l, respectively, and α ≈ 1.7. Relation (13) is a clear-cut expression of the scaling property underlying the price change process. The most appealing aspect of the matter is that the data show the coefficient α to be independent of the time t and the time-span T. Thus the scaling principle does not change with historical time, nor is it affected by aggregation, the only thing that changes being the scale of the price changes as such. The result is very pleasing in that it expresses a sort of universal principle behind the process, avoiding at the same time the need for the Gaussian assumption.

11 Volatility is a term used to describe the variability of price changes and is often used as a synonym for standard deviation.
12 The change in the logarithm of price is often dealt with instead of the absolute price change as such. The obvious motive is, of course, to render proportional price changes equal. Apart from that, the use of the logarithm preserves the additive nature of price change, whereas it would be turned into a multiplicative process, had relative price changes been used.
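The way such an α is read off in practice can be sketched as a log-log regression on the empirical tail, here on synthetic data with a built-in exponent of 1.7; the sample size and the tail cutoff k are my own illustrative choices:

```python
import math
import random

rng = random.Random(5)
alpha_true = 1.7

# Synthetic 'price changes' with a scaling tail Pr{L > l} ~ l**(-1.7),
# generated by inverse-transform sampling and sorted largest first.
data = sorted((rng.random() ** (-1.0 / alpha_true) for _ in range(50_000)),
              reverse=True)

# Regress the log empirical survival probability on the log value over
# the largest k observations; by relation (13) the slope is -alpha.
k = 1000
xs = [math.log(data[i]) for i in range(k)]
ys = [math.log((i + 1) / len(data)) for i in range(k)]
mx, my = sum(xs) / k, sum(ys) / k
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))

print(round(-slope, 2))   # should land near 1.7
```

This is essentially what figure 5 shows graphically for the cotton data: straight lines of common slope −α on doubly logarithmic paper.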
Figure 5: Graphical representation of the scaling property of the cotton price change process. Positive (1) and negative (2) price changes are dealt with separately. Apart from that, three cases are presented: a – daily price changes from 1900-1905, b – daily price changes from 1944-1958 and c – monthly price changes from 1880-1940. All lines exhibit scaling with α ≈ 1.7. Graphics reproduced from [3].
To classify the distribution he found, Mandelbrot turned his attention to the work of his professor from Paris, Paul Lévy, who solved the general problem of the characteristic function ϕ(q) of a stable probability density, finding that it should be of the form [6]:

ln ϕ(q) = iµq − γ|q|^α [1 − iβ (q/|q|) tan(πα/2)],  for α ≠ 1,
ln ϕ(q) = iµq − γ|q| [1 + iβ (q/|q|) (2/π) ln|q|],  for α = 1, (14)

where the four parameters are, alphabetically:

• α ∈ (0, 2] – kurtosis factor
• β ∈ [−1, 1] – skewness factor
• γ ∈ (0, ∞) – scale factor
• µ – location factor.
Figure 6: Samples of L-stable probability densities for varying α (left case) and β (right case). With decreasing kurtosis factor α the curves become more and more leptokurtic, that is, they gain sharper and narrower peaks and fat tails. A skewness factor β ≠ 0 has asymmetry of the probability distribution as its effect. Varying the scale factor γ, or c as it is labeled on the graphs, would stretch or compress the curve the way a changing σ affects the shape of a Gaussian curve. Varying the location factor µ would only shift the peak around. Graphics reproduced from [13].
Only three of the functions in the entire class have an analytical form, namely the Cauchy (α = 1 and β = 0), Gaussian (α = 2 and β = 0) and Lévy-Smirnov (α = 1/2 and β = 1) probability density functions. For large values of u, with the honorable exception of the Gaussian, it holds that

P(u) = ∫_u^∞ F^{−1}[ϕ(q)](x) dx ∼ u^{−α}, (15)

where F^{−1} denotes the inverse Fourier transform, which recovers the probability density from the characteristic function. The distribution Mandelbrot found in the case of cotton price changes therefore belongs to the class of L(évy)-stable distributions. The stochastic process itself is often referred to as the L(évy)-stable flight. It being a description of a process driven by extreme events, Mandelbrot calls L-stable variability the Noah effect, after the tale of the great flood from the Old Testament.
3.1.2 The meaning of Nile - Joseph effect
One now justifiably wonders what exactly the river Nile has to do with price changes and financial markets in general. Well, as fate would have it, the Nile came to be a source of influence for Benoit Mandelbrot, specifically through the work of Harold E. Hurst, who in his study of the Nile noticed that [4]:

Although many natural phenomena have a nearly normal frequency distribution this is only the case when their order of occurrence is ignored. When records of natural phenomena extend over long periods there are considerable variations both of means and standard deviations from one period to another. The tendency to occur in groups makes both the mean and the standard deviation computed from short period of years more variable than is the case in random distributions.

Hurst summed up his thinking and the facts provided by experimental data in the result that the range R of an optimal dam that would perfectly dampen the variability of the river Nile discharges over a period of N years is related to the average value of the yearly standard deviation σ_y by [4]:

log(R/σ_y) = K log(N/2), (16)

where he measured K to be approximately 0.7. Thus, even though shuffled yearly changes fit a perfect Gaussian, the persistence of the process causes the variability of the aggregated process to scale differently than in the case of perfectly uncorrelated changes, where K = 1/2 would have been measured.

Mandelbrot formalized Hurst's empirical laws by defining the process of Fractional Brownian motion, which is characterized by the properties [3]

E(B_H(t + T) − B_H(t)) = 0 (17)

and

E((B_H(t + T) − B_H(t))²) = T^{2H}, (18)

with constant H ∈ [0, 1], now known as the Hurst-Hölder exponent. For such a process the correlation C between the past and future averages, defined as (B(t) − B(t − T))/T and (B(t + T) − B(t))/T, respectively, is [3]

C = [(2T)^{2H} − T^{2H} − T^{2H}] / (2 T^{2H}) = 2^{2H−1} − 1. (19)

With the exception of the 'usual' Brownian motion with H = 1/2, for which it is zero, the correlation does not vanish for any value of the time-span T, thus implying long-term, even infinite, dependence. Exponents H ≠ 1/2 are found in many price series, one notable example being exchange rates for currencies.
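Equation (19) is easy to evaluate; a small sketch showing persistence for H > 1/2 and anti-persistence for H < 1/2:

```python
def increment_correlation(h):
    """Correlation between past and future averages of fractional
    Brownian motion, eq. (19): C = 2**(2H - 1) - 1."""
    return 2.0 ** (2.0 * h - 1.0) - 1.0

for h in (0.3, 0.5, 0.7):
    print(h, round(increment_correlation(h), 4))
```

For H = 0.7 the correlation is about 0.32 regardless of the time-span T, which is the infinite-range dependence referred to in the text; for H = 0.3 successive averages tend to cancel each other.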
Again a biblical example has given an alternative name to the long-term dependence driven variability. After Joseph, son of Jacob, who interpreted Pharaoh's dream of seven fat cows eaten by seven lean cows in terms of 7 prosperous years followed by 7 years of famine, it is called the Joseph effect.
3.1.3 Noah and Joseph joining hands
Figure 7: The distortion of price change through trading time. A distortion through a line at 45° to either of the axes defining the θ-t plane would have no consequence, whilst breaking this line into parts of various slopes has the effect of packing many units of trading time into a single unit of physical time, or conversely stretching a single unit of trading time over many units of physical time. The former case emulates a quick-running market with many big jumps over short periods of physical time, the latter its opposite in the form of a slow-running market with moderate change over lengthy periods of physical time. The multifractal model is capable of accounting for all three major discrepancies between the original Brownian motion and reality, namely discontinuity of price change, long-term dependence and clustering of volatility. Graphics reproduced from [4].
The Noah and Joseph effects are instances of fractal models of price change. The essence of fractality being repeatability on all scales, it is the scaling exponent α and the dependence exponent H that measure fractality for the Noah and Joseph effect, respectively, since both are scale-independent constants embodying the essence of a 'greater truth' about the market behavior, repeatedly seen on all scales of observation. Of course, the question arises whether both concepts can be combined to form an even better model of the way markets behave.

The answer is that such a combination can be made. And it is through the 'distortion' of physical time, with the intention to mold it into the concept of trading time, that this is achieved. Such a distortion is reasonable in the sense that it emulates the fact that markets can move 'faster' or 'slower'. There are times when a lot of information amasses in a mere hour of trading and there are times when nothing happens for nearly a week. Surely such intervals cannot be treated on an equal footing. The technique itself is the well-known concept of compounding13. The market time θ_α is called the directing function, whilst the price moving function B_H is called the compounding process14. The result is a multifractal model of price change, namely

B_{αH}(t) = B_H(θ_α(t)). (20)

The compound process B_{αH}(t) gives rise to countless new options of variability in modeling price changes. It is worth mentioning, as a sort of satisfying token of the internal consistency of the theory, that Brownian motion in properly chosen fractal time, B_{1/2}(θ_α(t)), reproduces exactly the L-stable flight with exponent α. But it is, of course, the general case of fractional Brownian motion in fractal time, giving rise to genuine multifractality, that is most interesting. Such a model is capable of describing to a very satisfying level most price records of various assets, including those with scaling exponent α > 2, since stability is guaranteed by the compounding process and therefore its restrictions need not be imposed on the directing function itself. For an illustrative example of how the characteristics of the compounding process can be altered through subordination to a directing function, see figure 7.
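The subordination idea can be sketched with a toy piecewise-linear directing function: inside a chosen 'fast' window each unit of physical time carries many units of trading time, so the subordinated Brownian motion jumps harder there. Everything below, window and speed included, is an illustrative assumption, not the multifractal directing function itself:

```python
import random

def trading_time(t, fast=(30, 60), speed=10.0):
    """Toy piecewise-linear directing function theta(t): trading time
    runs `speed` times faster than physical time inside the `fast` window."""
    a, b = fast
    theta = float(min(t, a))
    if t > a:
        theta += (min(t, b) - a) * speed
    if t > b:
        theta += t - b
    return theta

rng = random.Random(3)

# Brownian motion sampled in trading time: each physical step carries
# theta(t) - theta(t-1) units of trading time, so its Gaussian increment
# has standard deviation sqrt(theta(t) - theta(t-1)).
prices, level, last_theta = [0.0], 0.0, 0.0
for t in range(1, 101):
    theta = trading_time(t)
    level += rng.gauss(0.0, (theta - last_theta) ** 0.5)
    last_theta = theta
    prices.append(level)

# Moves cluster inside the fast window (steps 31..60).
moves = [abs(prices[i + 1] - prices[i]) for i in range(100)]
inside = sum(moves[30:60]) / 30
outside = (sum(moves[:30]) + sum(moves[60:])) / 70
print(inside > outside)
```

The price path shows calm stretches punctuated by a burst of large moves, a crude version of the volatility clustering that figure 7 illustrates.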
3.2 What is risk and what is an asset worth? - Conclusion
Though not explicitly acknowledged in section 2, the question of risk and the question of what a particular asset is worth are essentially one and the same, since valuing an asset comes down to evaluating its risk. Unfortunately, in the case of wild variability the risk yardstick is not as apparent and intuitive as the standard deviation is in the Gaussian case, since the variance is not defined in the general case of α < 2. And even when it is defined, its convergence is not tame: the estimate can vary greatly with the sample size or with the sample as such, and can therefore be misleading. The exponents α and H are, of course, themselves a measure of risk, telling us that markets are in general far riskier than in the mild Gaussian case of α = 2 and H = 1/2. Still, neatly packed theories and cookbook recipes like those of MPT are yet to be developed for the case of wild variability.

13 More about compounding can be found, for instance, in [7], where the technique is referred to as subordination.
14 The choice of indices reflects the underlying idea: extreme price jumps are induced by occasionally packing loads of trading time into a single unit of physical time, whilst long-term memory remains a property of the subordinated price-moving process.
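That the sample variance 'is not tamed' can be seen in a few lines. The sketch below, with arbitrarily chosen tail index and sample sizes, compares the running sample variance of a heavy-tailed return series (symmetrised Pareto with tail index 1.5, a stand-in for an L-stable variable with infinite true variance) against a Gaussian control:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
sizes = [100, 1_000, 10_000, 100_000]

# Heavy-tailed 'returns': symmetrised Pareto, tail index 1.5 (assumed).
# The population variance of this distribution is infinite.
wild = rng.pareto(1.5, n) * rng.choice([-1.0, 1.0], n)

# Gaussian control with unit variance.
mild = rng.normal(0.0, 1.0, n)

# Running sample variances: the Gaussian estimate settles near 1, while
# the heavy-tailed one keeps jumping whenever an extreme value arrives.
wild_var = [wild[:k].var() for k in sizes]
mild_var = [mild[:k].var() for k in sizes]
```

The Gaussian estimate converges quickly, whereas the heavy-tailed estimate depends strongly on which extreme observations happen to fall in the sample, which is exactly why it misleads as a risk yardstick.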
Figure 8: Analysing ruin problems is one possible approach to better assessing risk under wild variability. Here a simplified model of the ruin problem in the insurance business is presented: the linear trend in the growth of capital is due to collecting premiums, whilst the drops are due to paying out variable claims. The dangerous world of probabilistic wildness, resulting in real-life bankruptcies, immediately reveals itself in contrast to the steady growth of the Gaussian prediction. Graphics reproduced from [4].
References
[1] http://en.wikipedia.org/wiki/Laplace (3/2008)
[2] http://en.wikiquote.org/wiki/Albert_Einstein (3/2008)
[3] Benoit B. Mandelbrot, Fractals and Scaling in Finance: Discontinuity, Concentration, Risk. New York: Springer (1997)
[4] Benoit B. Mandelbrot and Richard L. Hudson, The (Mis)Behavior of Markets: A Fractal View of Risk, Ruin, and Reward. New York: Basic Books (2004)
[5] Mark Davis and Alison Etheridge, Louis Bachelier's Theory of Speculation: The Origins of Modern Finance. Princeton: Princeton University Press (2006)
[6] Rosario N. Mantegna and H. Eugene Stanley, An Introduction to Econophysics: Correlations and Complexity in Finance. Cambridge: Cambridge University Press (2000)
[7] William Feller, An Introduction to Probability Theory and Its Applications. New York: Wiley (1970)
[8] http://www.investorwords.com/1673/efficient_portfolio.html (3/2008)
[9] http://en.wikipedia.org/wiki/Modern_portfolio_theory (3/2008)
[10] http://en.wikipedia.org/wiki/Sharpe_ratio (3/2008)
[11] http://en.wikipedia.org/wiki/Capital_Asset_Pricing_Model (3/2008)
[12] http://en.wikipedia.org/wiki/Net_present_value (3/2008)
[13] http://en.wikipedia.org/wiki/Levy_skew_alpha-stable_distribution (3/2008)