Safely narrow down the apparent chaos

There is that thing about me: I like understanding. I represent my internal process of understanding as the interplay of three imaginary entities: the curious ape, the happy bulldog, and the austere monk. The curious ape is the part of me that instinctively reaches for anything new and interesting. The curious ape does a basic gauging of that new thing: ‘can it kill, or hopefully not always?’, ‘is it edible, or unfortunately not without risk?’ etc. When it does not always kill and can be eaten, the happy bulldog is released from its leash. It takes pleasure in rummaging around things, sniffing and digging in search of adjacent phenomena. Believe me, when my internal happy bulldog starts sniffing around and digging things out, they just pile up. Whenever I study a new topic, the folder I have assigned to it swells like a balloon, with articles, books, reports, websites etc. A moment comes when those piles of adjacent phenomena start needing some order, and this is when my internal austere monk steps into the game. His basic tool is Ockham’s razor, which cuts the obvious from the dubious, and thus, eventually, cuts the bullshit off.

In my last update in French, namely Le modèle d’un marché relativement conformiste, I returned to the business plan for the project EneFin, and the first thing my internal curious ape is gauging right now is the so-called absorption by the market. EneFin is supposed to be an innovative concept, and, as any innovation, it will need to kind of get into the market. It can do so as people in the market opt for shifting from being just potential users to being actual ones. In other words, the success of any business depends on a sequence of decisions taken by people who are supposed to become customers.

People are supposed to make decisions regarding my new products or technologies, and decisions have their patterns. I wrote more about this particular issue in an update on this blog entitled ‘And so I ventured myself into the realm of what people think they can do’, for example. Now, I am interested in the more marketing-oriented, aggregate outcome of those decisions. The commonly used theoretical tool here is the normal distribution (see for example Robertson): we assume that, as customers switch to purchasing that new thing, the population of users grows as a cumulative normal fraction (i.e. a fraction based on the normal distribution) of the general population.
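Just to make that assumption tangible, here is a minimal sketch of it in Python. The scipy dependency and the parameter values mu = 84 and sigma = 42 are my own illustrative choices at this point in the argument, not figures from any study:

```python
# Standard adoption assumption: the share of actual users in the total
# population grows as the cumulative normal distribution of time.
from scipy.stats import norm

mu = 84.0     # average time (in months) a potential customer needs to switch
sigma = 42.0  # dispersion of individual switching times around that average

for month in (12, 24, 36):
    share = norm.cdf(month, loc=mu, scale=sigma)
    print(f"month {month}: {share:.2%} of the population has switched")
```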

As I said, I like understanding. What I want is to really understand the logic behind simulating aggregate outcomes of customers’ decisions with the help of the normal distribution. Right, then let’s do some understanding. Below, I am introducing two graphical presentations of the normal distribution: the first is the ‘official’ one; the second, further below, is my own, uncombed and freshly woken-up interpretation.

[Figure: The normal distribution]

[Figure: Normal distribution interpreted]
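For reference, the equation that both pictures present, and that the next few paragraphs take apart piece by piece, is the density of the normal distribution:

$$f(x) \;=\; \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^{2}}{2\sigma^{2}}}$$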

So, the logic behind the equation starts biblically: in the beginning, there is chaos. Everyone can do anything. Said chaos occurs in a space based on the constant e = 2,71828, known as the base of the natural logarithm and reputed to be really handy for studying dynamic processes. This space is e^x. Any customer can take any decision in a space made by ‘e’ elevated to the power ‘x’, or the power of the moment. Yes, ‘x’ is a moment, i.e. the moment when we observe the distribution of customers’ decisions.

Chaos gets narrowed down by referring to µ, or the arithmetical average of all the moments studied. This is the expression (x – µ)², or the local variance, observable in the moment x. In order to have an arithmetical average, and have it the same in all the moments ‘x’, we need to close the frame, i.e. to define the set of x’s. Essentially, we are saying to that initial chaos: ‘Look, chaos, it is time to pull yourself together a bit, and so we peg down the set of moments you contain, we draw an average of all those moments, and that average is sort of the point where 50% of you, chaos, is being taken and recognized, and we position every moment x regarding its distance from the average moment µ’.

Thus, the initial chaos ‘e power x’ gets dressed a little, into ‘e power (x – µ)²’. Still, a dressed chaos is still chaos. Now, there is that old intuition, progressively unfolded by Isaac Newton, Gottfried Wilhelm Leibniz and Abraham de Moivre at the turn of the 17th and 18th centuries, then grounded by Carl Friedrich Gauss and Thomas Bayes: chaos is a metaphysical concept born out of insufficient understanding, ‘cause your average reality, babe, has patterns and structures in it.

The way that things structure themselves is most frequently a sort of mainstream fashion that most events stick to, accompanied by fringe phenomena that want to be remembered as the rebels of their time (right, space-time). The mainstream fashion is observable as an expected value. The big thing about maths is being able to discover by yourself that when you add up all the moments in the apparent chaos, and then divide the so-obtained sum by the number of moments added, you get a value, which we call the arithmetical average, and which doesn’t actually exist in that set of moments, yet sets the mainstream fashion for all the moments in that apparent chaos. Moments tend to stick around the average, whose habitual nickname is ‘µ’.

Once you have the expected value, you can slice your apparent chaos in two, respectively on the right and on the left of the expected value that doesn’t actually exist. In each of the two slices you can repeat the same operation: add up everything, then divide by the number of items in that everything, and get something expected that doesn’t exist. That second average can have two alternative properties as regards structuring. On the one hand, it can set another mainstream, sort of next door to the first one: moments on one side of the first average tend to cluster and pile up around that second average. Then it means that we have another expected value, and we should split our initial, apparent chaos into two separate chaoses, each with its own expected value inside, and study each of them separately. On the other hand, that second average can be sort of insignificant in its power of clustering moments: it is just the average (expected) distance from the first average (strictly speaking, the square root of the average squared distance), and we call it the standard deviation, habitually represented with the Greek sigma.
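A toy computation makes that distinction between the two averages concrete. The set of moments below is made up purely for the example:

```python
# The arithmetical average of a set of moments need not itself belong to the
# set, and the standard deviation is the root of the average squared distance
# from that average.
moments = [1, 2, 4, 7, 11]

mu = sum(moments) / len(moments)                        # 5.0, not a member of the set
variance = sum((x - mu) ** 2 for x in moments) / len(moments)
sigma = variance ** 0.5                                 # approx. 3.63

print(mu, sigma)
```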

We have the expected distance (i.e. the standard deviation) from the expected value in our apparent chaos, and it allows us to call our chaos in for further tidying up. We go and slice off some parts of that chaos which seem not to be really relevant regarding our mainstream. Firstly, we do it by dividing our exponent, the local variance (x – µ)², by twice the general variance, i.e. by 2σ². We can be even meaner and add a minus sign in front of that divided local variance, and it means that instead of expanding our constant e = 2,71828 into a larger space, we are actually folding it into a smaller space. Thus, we get a space much smaller than the initial ‘e power (x – µ)²’.

Now, we progressively chip some bits out of that smaller, folded space. We divide it by the standard deviation. I know, technically we multiply it by one divided by the standard deviation, but if you are, like, older than twelve, you can easily understand the equivalence here. Next, we multiply the so-obtained quotient by that funny constant: one divided by the square root of two times π. This constant equals 0,39894228 and, if my memory is correct, it was a big discovery on the part of Carl Friedrich Gauss: in any apparent chaos, you can safely narrow down the number of the realistically possible occurrences to like four tenths of that initial chaos.

After all that chipping we did to our initial, charmingly chaotic ‘e power x’ space, we get the normal space, i.e. the one contained under the curve of the normal distribution. This is what the whole theory of probability, and its rich pragmatic cousin, statistics, are about: narrowing down the range of uncertain, future occurrences to a space smaller than ‘anything can happen’. You can do it in many ways, i.e. we have many different statistical distributions. The normal one is like the top dog in that yard, but you can easily experiment with the steps described above and see by yourself what happens. You can kick that Gaussian constant 0,39894228 out of the equation, or you can make it stronger by taking away the square root and keeping just two times π in its denominator; you can divide the local variance (x – µ)² by just one time its cousin, the general variance, instead of twice, etc. I am persuaded that this is what Carl Friedrich Gauss did: he kept experimenting with equations until he came up with something practical.
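The whole folding-and-chipping sequence fits in a few lines of Python. This is only a sketch of the steps as narrated above; the values mu = 84 and sigma = 42 are arbitrary, picked just to make the functions concrete:

```python
# From raw chaos to the normal density, step by step.
import math

mu, sigma = 84.0, 42.0

def step1_chaos(x):
    # e to the power of the moment: anything can happen
    return math.e ** x

def step2_pegged(x):
    # peg every moment to its squared distance from the average
    return math.e ** ((x - mu) ** 2)

def step3_folded(x):
    # divide by twice the general variance, flip the sign: the space folds
    return math.e ** (-((x - mu) ** 2) / (2 * sigma ** 2))

def step4_normal(x):
    # divide by sigma, multiply by 1/sqrt(2*pi): the normal density
    return (1 / (sigma * math.sqrt(2 * math.pi))) * step3_folded(x)

# Experiments suggested above: drop the 1/sqrt(2*pi) constant in step4_normal,
# or divide by one variance instead of two in step3_folded, and the curve no
# longer sums up to 1 over all moments.
print(step3_folded(84.0), step4_normal(84.0))  # 1.0 at the peak; about 0.0095 once scaled
```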

And so am I, I mean I keep experimenting with equations so as to come up with something practical. I am applying all that elaborate philosophy of harnessed chaos to my EneFin thing and to predicting the number of my customers. As I am using the normal distribution as my basic, quantitative screwdriver, I start with assuming that however many customers I get, that ‘however many’ is always a fraction (percentage) of a total population. This is what statistical distributions are meant to yield: a probability, thus a fraction of reality, elegantly expressed as a percentage.

I take a planning horizon of three years, just as I do in the Business Planning Calculator, that analytical tool you can download from a subpage of https://discoversocialsciences.com. In order to make my curves smoother, I represent those three years as 36 months. This is my set of moments ‘x’, ranging from 1 to 36. The expected, average value that does not exist in that range of moments is the average time that a typical potential customer, out there, in the total population, needs to try and buy energy via EneFin. I have no clue, although I have an intuition. In the research on innovative activity in the realm of renewable energies, I have discovered something like a cycle. It is the time needed for the annual number of patent applications to double, with respect to a given technology (wind, photovoltaic etc.). See Time to come to the ad rem, for example, for more details. That cycle seems to be 7 years in Europe and in the United States, whilst it drops down to 3 years in China.

I stick to 7 years, as I am mostly interested, for the moment, in the European market. Seven years equals 7*12 = 84 months. I provisionally choose those 84 months as my average µ for using the normal distribution in my forecast. Now, the standard deviation. Once again, no clue, and an intuition. The intuition’s name is ‘coefficient of variability’, which I baptise ß for the moment. Variability is the coefficient that you get when you divide the standard deviation by the average value. Another proportion. The greater the ß, the more dispersed is my set of customers into different subsets: lifestyles, cities, neighbourhoods etc. Conversely, the smaller the ß, the more conformist is that population, with relatively more people sailing in the mainstream. I casually assume my variability to be found somewhere in 0,1 ≤ ß ≤ 2, with a step of 0,1. With µ = 84, that makes my Ω (another symbol for sigma, or standard deviation) fall into 0,1*84 ≤ Ω ≤ 2*84 <=> 8,4 ≤ Ω ≤ 168. At ß = 0,1 => Ω = 8,4 my customers are boringly similar to each other, whilst at ß = 2 => Ω = 168 they are like separate tribes.

In order to make my presentation simpler, I take three checkpoints in time, namely the end of each consecutive year out of the three. Denominated in months, that gives: the 12th month, the 24th month, and the 36th month. In Table 1, below, you can find the results: the percentage of the market I expect to absorb into EneFin, with the average time of behavioural change in my customers pegged at µ = 84, and at various degrees of disparity between individual behavioural changes.

Table 1 Simulation of absorption in the market, with the average time of behavioural change equal to µ = 84 months

Percentage of the market absorbed:

| Variability of the population (ß) | Standard deviation Ω, with µ = 84 | 12th month | 24th month | 36th month |
|---|---|---|---|---|
| 0,1 | 8,4 | 8,1944E-18 | 6,82798E-13 | 7,65322E-09 |
| 0,2 | 16,8 | 1,00458E-05 | 0,02% | 0,23% |
| 0,3 | 25,2 | 0,18% | 0,86% | 2,93% |
| 0,4 | 33,6 | 1,02% | 3,18% | 7,22% |
| 0,5 | 42 | 2,09% | 5,49% | 10,56% |
| 0,6 | 50,4 | 2,92% | 7,01% | 12,42% |
| 0,7 | 58,8 | 3,42% | 7,80% | 13,18% |
| 0,8 | 67,2 | 3,67% | 8,10% | 13,28% |
| 0,9 | 75,6 | 3,74% | 8,09% | 13,02% |
| 1 | 84 | 3,72% | 7,93% | 12,58% |
| 1,1 | 92,4 | 3,64% | 7,67% | 12,05% |
| 1,2 | 100,8 | 3,53% | 7,38% | 11,50% |
| 1,3 | 109,2 | 3,41% | 7,07% | 10,95% |
| 1,4 | 117,6 | 3,28% | 6,76% | 10,43% |
| 1,5 | 126 | 3,14% | 6,46% | 9,93% |
| 1,6 | 134,4 | 3,02% | 6,18% | 9,47% |
| 1,7 | 142,8 | 2,89% | 5,91% | 9,03% |
| 1,8 | 151,2 | 2,78% | 5,66% | 8,63% |
| 1,9 | 159,6 | 2,67% | 5,42% | 8,26% |
| 2 | 168 | 2,56% | 5,20% | 7,91% |
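For what it’s worth, the numbers in Table 1 can be reproduced with a short script. The one assumption I am making explicit here is that the absorbed percentage at month t is the normal density summed month by month from 1 to t, i.e. a discretised cumulative, with µ = 84 and Ω = ß*84; under that reading the script matches the table (e.g. 3,72% at the 12th month for ß = 1):

```python
# Reproducing Table 1: market absorption as a month-by-month sum of the
# normal density, with mu = 84 months and sigma = variability * mu.
import math

MU = 84.0

def normal_pdf(x, mu, sigma):
    return (1.0 / (sigma * math.sqrt(2 * math.pi))) * math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def absorbed(month, variability):
    sigma = variability * MU
    return sum(normal_pdf(m, MU, sigma) for m in range(1, month + 1))

for variability in (0.1, 0.5, 1.0, 2.0):
    print(variability, [f"{absorbed(t, variability):.2%}" for t in (12, 24, 36)])
```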

I think it is enough science for today. That sunlight will not enjoy itself. It needs me to enjoy it. I am consistently delivering good, almost new science to my readers, I love doing it, and I am working on crowdfunding this activity of mine. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful if you suggest two things that Patreon prompts me to ask you about. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?
