Safely narrow down the apparent chaos

There is that thing about me: I like understanding. I represent my internal process of understanding as the interplay of three imaginary entities: the curious ape, the happy bulldog, and the austere monk. The curious ape is the part of me who instinctively reaches for anything new and interesting. The curious ape does basic gauging of that new thing: ‘can kill or hopefully not always?’, ‘edible or unfortunately not without risk?’ etc. When it does not always kill and can be eaten, the happy bulldog is released from its leash. It takes pleasure in rummaging around things, sniffing and digging in search of adjacent phenomena. Believe me, when my internal happy bulldog starts sniffing around and digging things out, they just pile up. Whenever I study a new topic, the folder I have assigned to it swells like a balloon, with articles, books, reports, websites etc. A moment comes when those piles of adjacent phenomena start needing some order, and this is when my internal austere monk steps into the game. His basic tool is Ockham’s razor, which cuts the obvious from the dubious, and thus, eventually, cuts the bullshit off.

In my last update in French, namely in Le modèle d’un marché relativement conformiste, I returned to that business plan for the project EneFin, and the first thing my internal curious ape is gauging right now is the so-called absorption by the market. EneFin is supposed to be an innovative concept, and, as any innovation, it will need to kind of get into the market. It can do so as people in the market will opt for shifting from being just potential users to being the actual ones. In other words, the success of any business depends on a sequence of decisions taken by people who are supposed to be customers.

People are supposed to make decisions regarding my new products or technologies. Decisions have their patterns. I wrote more about this particular issue in an update on this blog, entitled ‘And so I ventured myself into the realm of what people think they can do’, for example. Now, I am interested in the more marketing-oriented, aggregate outcome of those decisions. The commonly used theoretical tool here is the normal distribution (see for example Robertson): we assume that, as customers switch to purchasing that new thing, the population of users grows as a cumulative normal fraction (i.e. a fraction based on the normal distribution) of the general population.

As I said, I like understanding. What I want is to really understand the logic behind simulating aggregate outcomes of customers’ decisions with the help of the normal distribution. Right, then let’s do some understanding. Below, I am introducing two graphical presentations of the normal distribution: the first is the ‘official’ one, the second, further below, is my own, uncombed and freshly woken up interpretation.

The normal distribution


Normal distribution interpreted


So, the logic behind the equation starts biblically: in the beginning, there is chaos. Everyone can do anything. Said chaos occurs in a space, based on the constant e = 2,71828, known as the base of the natural logarithm and reputed to be really handy for studying dynamic processes. This space is e^x. Any customer can take any decision in a space made by ‘e’ elevated to the power ‘x’, or the power of the moment. Yes, ‘x’ is a moment, i.e. the moment when we observe the distribution of customers’ decisions.

Chaos gets narrowed down by referring to µ, or the arithmetical average of all the moments studied. This is the expression (x – µ)², or the local variance, observable in the moment x. In order to have an arithmetical average, and have it the same in all the moments ‘x’, we need to close the frame, i.e. to define the set of x’s. Essentially, we are saying to that initial chaos: ‘Look, chaos, it is time to pull yourself together a bit, and so we peg down the set of moments you contain, we draw an average of all those moments, and that average is sort of the point where 50% of you, chaos, is being taken and recognized, and we position every moment x regarding its distance from the average moment µ’.

Thus, the initial chaos ‘e power x’ gets dressed a little, into ‘e power (x – µ)²’. Still, a dressed chaos is still chaos. Now, there is that old intuition, progressively unfolded by Isaac Newton, Gottfried Wilhelm Leibniz and Abraham de Moivre at the turn of the 17th and 18th centuries, then grounded by Carl Friedrich Gauss, and Thomas Bayes: chaos is a metaphysical concept born out of insufficient understanding, ‘cause your average reality, babe, has patterns and structures in it.

The way that things structure themselves is most frequently sort of a mainstream fashion that most events stick to, accompanied by fringe phenomena that want to be remembered as the rebels of their time (right, space-time). The mainstream fashion is observable as an expected value. The big thing about maths is being able to discover by yourself that when you add up all the moments in the apparent chaos, and then you divide the so-obtained sum by the number of moments added, you get a value, which we call the arithmetical average, and which actually doesn’t exist in that set of moments, but it sets the mainstream fashion for all the moments in that apparent chaos. Moments tend to stick around the average, whose habitual nickname is ‘µ’.

Once you have the expected value, you can slice your apparent chaos in two, sort of respectively on the right, and on the left of the expected value that doesn’t actually exist. In each of the two slices you can repeat the same operation: add up everything, then divide by the number of items in that everything, and get something expected that doesn’t exist. That second average can have two alternative properties as regards structuring. On the one hand, it can set another mainstream, sort of next door to that first mainstream: moments on one side of the first average tend to cluster and pile up around that second average. Then it means that we have another expected value, and we should split our initial, apparent chaos into two separate chaoses, each with its expected value inside, and study each of them separately. On the other hand, that second average can be sort of insignificant in its power of clustering moments: it is just the average (expected) distance from the first average, and we call it the standard deviation, habitually represented with the Greek sigma.

We have the expected distance (i.e. the standard deviation) from the expected value in our apparent chaos, and it allows us to call our chaos in for further tidying up. We go and slice off some parts of that chaos, which seem not to be really relevant regarding our mainstream. Firstly, we do it by dividing our initial exponent, the local variance (x – µ)², by twice the general variance, or two times sigma to the power of two. We can be even meaner and add a minus sign in front of that divided local variance, and it means that instead of expanding our constant e = 2,71828 into a larger space, we are actually folding it into a smaller space. Thus, we get a space much smaller than the initial ‘e power (x – µ)²’.

Now, we progressively chip some bits out of that smaller, folded space. We divide it by the standard deviation. I know, technically we multiply it by one divided by the standard deviation, but if you are like older than twelve, you can easily understand the equivalence here. Next, we multiply the so-obtained quotient by that funny constant: one divided by the square root of two times π. This constant is 0,39894228 and, if my memory serves, it was a big discovery on the part of Carl Friedrich Gauss: in any apparent chaos, you can safely narrow down the number of the realistically possible occurrences to like four tenths of that initial chaos.

After all that chipping we did to our initial, charmingly chaotic ‘e power x’ space, we get the normal space, or that contained under the curve of normal distribution. This is what the whole theory of probability, and its rich pragmatic cousin, statistics, are about: narrowing down the range of uncertain, future occurrences to a space smaller than ‘anything can happen’. You can do it in many ways, i.e. we have many different statistical distributions. The normal one is like the top dog in that yard, but you can easily experiment with the steps described above and see by yourself what happens. You can kick that Gaussian constant 0,39894228 out of the equation, or you can make it stronger by taking away the square root and just keeping two times π in its denominator; you can divide the local variance (x – µ)² by just one times its cousin, the general variance, instead of twice, etc. I am persuaded that this is what Carl Friedrich Gauss did: he kept experimenting with equations until he came up with something practical.
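Assembled in one place, those chipping steps give the familiar density function. A minimal sketch in Python, checked against the standard library (the values µ = 84 and Ω = 42 are borrowed from the EneFin example further in the text):

```python
import math
from statistics import NormalDist

def normal_pdf(x, mu, sigma):
    # step 1: fold the exponent into -(x - mu)^2 / (2 * sigma^2)
    folded = -((x - mu) ** 2) / (2 * sigma ** 2)
    # step 2: divide by sigma; step 3: multiply by 1/sqrt(2*pi), i.e. 0.39894228...
    return (1 / (sigma * math.sqrt(2 * math.pi))) * math.exp(folded)

print(normal_pdf(84, 84, 42))        # the peak of the density, 0.39894228... / 42
print(NormalDist(84, 42).pdf(84))    # the same value from the standard library
```

Kicking the Gaussian constant out, or changing the 2σ² in the denominator, amounts to editing one line each, which is a cheap way to run the experiments suggested above.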

And so am I, I mean I keep experimenting with equations so as to come up with something practical. I am applying all that elaborate philosophy of harnessed chaos to my EneFin thing and to predicting the number of my customers. As I am using the normal distribution as my basic, quantitative screwdriver, I start with assuming that however many customers I get, that number is always a fraction (percentage) of a total population. This is what statistical distributions are meant to yield: a probability, thus a fraction of reality, elegantly expressed as a percentage.

I take a planning horizon of three years, just as I do in the Business Planning Calculator, that analytical tool you can download from a subpage of this blog. In order to make my curves smoother, I represent those three years as 36 months. This is my set of moments ‘x’, ranging from 1 to 36. The expected, average value that does not exist in that range of moments is the average time that a typical potential customer, out there, in the total population, needs to try and buy energy via EneFin. I have no clue, although I have an intuition. In the research on innovative activity in the realm of renewable energies, I have discovered something like a cycle. It is the time needed for the annual number of patent applications to double, with respect to a given technology (wind, photovoltaic etc.). See ‘Time to come to the ad rem’, for example, for more details. That cycle seems to be 7 years in Europe and in the United States, whilst it drops down to 3 years in China.

I stick to 7 years, as I am mostly interested, for the moment, in the European market. Seven years equals 7*12 = 84 months. I provisionally choose those 84 months as my average µ for using the normal distribution in my forecast. Now, the standard deviation. Once again, no clue, and an intuition. The intuition’s name is ‘coefficient of variability’, which I baptise ß for the moment. Variability is the coefficient that you get when you divide the standard deviation by the average value. Another proportion. The greater the ß, the more dispersed is my set of customers into different subsets: lifestyles, cities, neighbourhoods etc. Conversely, the smaller the ß, the more conformist is that population, with relatively more people sailing in the mainstream. I casually assume my variability to be found somewhere in 0,1 ≤ ß ≤ 2, with a step of 0,1. With µ = 84, that makes my Ω (another symbol for sigma, or standard deviation) fall into 0,1*84 ≤ Ω ≤ 2*84 <=> 8,4 ≤ Ω ≤ 168. At ß = 0,1 => Ω = 8,4, my customers are boringly similar to each other, whilst at ß = 2 => Ω = 168, they are like separate tribes.

In order to make my presentation simpler, I take three checkpoints in time, namely the end of each consecutive year out of the three. Denominated in months, it gives: the 12th month, the 24th month, and the 36th month. In Table 1, below, you can find the results: the percentage of the market I expect to absorb into EneFin, with the average time of behavioural change in my customers pegged at µ = 84, and at various degrees of disparity between individual behavioural changes.

Table 1 Simulation of absorption in the market, with the average time of behavioural change equal to µ = 84 months

Percentage of the market absorbed
Variability of the population (ß) Standard deviation (Ω) with µ = 84 12th month 24th month 36th month
0,1 8,4 8,1944E-18 6,82798E-13 7,65322E-09
0,2 16,8 0,001% 0,02% 0,23%
0,3 25,2 0,18% 0,86% 2,93%
0,4 33,6 1,02% 3,18% 7,22%
0,5 42 2,09% 5,49% 10,56%
0,6 50,4 2,92% 7,01% 12,42%
0,7 58,8 3,42% 7,80% 13,18%
0,8 67,2 3,67% 8,10% 13,28%
0,9 75,6 3,74% 8,09% 13,02%
1 84 3,72% 7,93% 12,58%
1,1 92,4 3,64% 7,67% 12,05%
1,2 100,8 3,53% 7,38% 11,50%
1,3 109,2 3,41% 7,07% 10,95%
1,4 117,6 3,28% 6,76% 10,43%
1,5 126 3,14% 6,46% 9,93%
1,6 134,4 3,02% 6,18% 9,47%
1,7 142,8 2,89% 5,91% 9,03%
1,8 151,2 2,78% 5,66% 8,63%
1,9 159,6 2,67% 5,42% 8,26%
2 168 2,56% 5,20% 7,91%
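As a sanity check on Table 1: the figures match the assumption that each cell is the month-by-month sum of the normal density, from month 1 up to the checkpoint. A short Python sketch reproducing the ß = 1 row (`absorbed_fraction` is my own helper name, not anything from the text):

```python
from statistics import NormalDist

def absorbed_fraction(month, mu=84.0, beta=1.0):
    # sum the normal density over months 1..month, with sigma = beta * mu
    dist = NormalDist(mu, beta * mu)
    return sum(dist.pdf(m) for m in range(1, month + 1))

# the ß = 1 row of Table 1: roughly 3,72%, 7,93% and 12,58%
for month in (12, 24, 36):
    print(f"month {month}: {absorbed_fraction(month):.2%}")
```

Changing `beta` reproduces the other rows; at `beta=0.5` the 36th-month cell comes out around the 10,56% shown above.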

I think it is enough science for today. That sunlight will not enjoy itself. It needs me to enjoy it. I am consistently delivering good, almost new science to my readers, and love doing it, and I am working on crowdfunding this activity of mine. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful if you suggest two things that Patreon asks me to ask you about. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?

Support this blog


Those a’s and b’s to put inside (a + b) when doing (a + b) power (p+q)

My editorial

I am finishing compiling notes for that article on the role of monetary systems in the transition towards renewable energies, at least I hope I am. This is a bit of a strange frame of mind, when I hope I am. Could I be hoping I am not? Interesting question. Anyway, one of the ways I make sure I understand what I am writing about is to take a classic, whom I previously kind of attached to this particular piece of science I am trying to make, and I kind of filter my own thoughts and findings through that particular classic’s thoughts and findings. This time, Thomas Bayes is my classic. Didn’t have much to do with renewable energies, you would say? Weeeell, he was a philosopher and a mathematician, but he lived (and died) in the 18th century, when Europe was being powered by wind and water, thus, as a matter of fact, he had much to do with renewable energies. At the end of the 18th century, in my homeland – Southern Poland, which back in the day was Austrian Galicia – there was one watermill per 382 people, on average.

And so I am rereading the posthumous article, attributed to reverend Thomas Bayes, received by Mr John Canton, an editor of ‘Philosophical Transactions’ at the Royal Society. On the 23rd of December, 1763, John Canton read a letter, sent from Newington-Green, on the 10th of November, by Mr Richard Price. The letter was accompanied by an attachment, in the form of a dissertation on ‘the doctrine of chances’, allegedly found by Mr Price in the notes of a defunct friend, Thomas Bayes. The friend had been defunct for two years, at the time, which is quite intriguing in itself. Anyway, Mr Richard Price presented the dissertation as Thomas Bayes’ work, and this is how Bayesian statistics were born (Bayes, Price 1763[1]). Just as a reminder: in Thomas Bayes’ world, we are talking about having p successes and q failures in p + q trials, in the presence of one single success being probable at the rate ‘a’, and the probability of a single failure being ‘b’. The general way of thinking about it, in this specific universe, is that we take the sum of probabilities, like (a + b), and we give it some depth by elevating it to the power p + q. We create a space of probability through developing the Newtonian binomial (a + b)^(p+q).
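The expanded binomial (a + b)^(p+q) contains one term per possible mix of successes and failures, and those terms exhaust the whole space of probability. A quick Python check with the exact binomial coefficients, for a = b = 0,5 and seven trials:

```python
from math import comb

a, b = 0.5, 0.5    # assumed fifty-fifty odds of a single success / failure
n = 7              # p + q trials in total
terms = [comb(n, q) * a ** (n - q) * b ** q for q in range(n + 1)]
print(terms)       # one patch of probability per failure count q = 0..7
print(sum(terms))  # the patches fill the whole space: they sum to 1
```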

At this point it is useful to dig a little bit into the logic of the Newtonian binomial. When I do (a + b)^(p+q), Isaac Newton tells me to kind of climb a ladder towards q, one step at a time, and so I am climbing that ladder of failure. First, I consider full success, so my p successes are exactly equal to my n trials, and my failure count is q = 0. In this most optimistic case, the number of different ways I can have that full score of successes is equal to the coefficient (p^q/q!) = (p^0/0!) = 1/1 = 1. I have just one way of being successful in every trial I take, whatever the number of trials, and whatever the probability of a single success. The probability attached to that one-million-dollar shot is (p^q/q!)*a^p. See that second factor, the a^p? The more successes I want, the lower the probability of having them all. A probability is a fraction smaller than 1. When I elevate it to any integer, it gets smaller. If the probability of a single success is like fifty-fifty, thus a = 0,5, and I want 5 successes on 5 trials, and I want no failures at all, I can expect those five bull’s eyes with a probability of (5^0/0!)*0,5^5 = 0,5^5 = 0,03125. Now, if I want 7 successes on 7 trials, zero failures, my seven-on-seven-shots-in-the-middle probability is equal to (7^0/0!)*0,5^7 = 0,5^7 = 0,0078125. See? All I wanted was two more points scored, seven on seven instead of five on five, and this arrogant Newtonian-Bayesian approach sliced my odds by four times.

Now, I admit I can tolerate one failure over n trials, and the rest has to be just pure success, and so my q = 1. I repeat the same procedure: (p^1/1!)*a^p*b^1. With the data I have just invented, 4 successes on 5 trials, with 0,5 odds of having a single success, so with a = b = 0,5, I have (4^1/1!) = 4 ways of having that precise compound score. Those 4 ways give me, at the bottom line, a compound probability of (4^1/1!)*0,5^4*0,5^1 = 4*0,5^4*0,5^1 = 0,125. Let’s repeat, just to make it sink in. Seven trials, two failures, five successes, one success being as probable as one failure, namely a = b = 0,5. How many ways of having 5 successes and 2 failures do I have over 7 trials? I have (5^2/2!) = 12,5 of them ways. How can I possibly have 12,5 ways of doing something? This is precisely the corkscrewed mind of Thomas Bayes: I have between 12 and 13 ways of reaching that particular score. The ‘between’ has become a staple of the whole Bayesian theory.
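For what it is worth, the (p^q/q!) count is an approximation of the exact binomial coefficient, which is where the fractional ‘12,5 ways’ comes from. A small Python comparison, using the numbers from this paragraph (`approx_ways` is my own label for the operation):

```python
from math import comb, factorial

def approx_ways(p, q):
    # the (p^q / q!) count from the text; not an integer in general
    return p ** q / factorial(q)

# 5 successes and 2 failures over 7 trials, a = b = 0.5
print(approx_ways(5, 2))     # 12.5 'ways', i.e. between 12 and 13
print(comb(7, 2))            # the exact count would be 21 ways
# the 4-successes-on-5-trials case from the text
print(approx_ways(4, 1) * 0.5 ** 4 * 0.5 ** 1)   # 0.125
```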

Now, I return to my sheep, as the French say. My sheep are renewable (energies). Let’s say I have statistics telling me that in my home country, Poland, I have 12,52% of electricity being generated from renewable sources, A.D. 2014. If I think that generating a single kilowatt-hour the green way is a success, my probability of a single success is P(p=1) = a = 0,1252. The probability of a failure is P(q=1) = b = 1 – 0,1252 = 0,8748. How many kilowatt-hours do I generate? Maybe just enough for one person, which, once again averaged, was 2495,843402 kg of oil equivalent, or 29026,65877 kilowatt-hours per year per capita (I multiplied the kg of oil equivalent by 11,63 to get the kilowatt-hours). Here, Thomas Bayes reminds me gently: ‘Mr Wasniewski, I wrote about the probability of having just a few successes and a few failures over a-few-plus-a-few-equals-a-few total trials. More than 29 thousand of those kilowatt-hours or whatever it is you want, it is really hard to qualify under “a few”. Reduce.’ Good, so I reduce into megawatt-hours, and that gives me like n = 29.
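The unit reduction is plain arithmetic, and a few lines confirm the order of magnitude (11,63 kWh per kg of oil equivalent, as above):

```python
kgoe_per_capita = 2495.843402      # kg of oil equivalent per year per capita
kwh = kgoe_per_capita * 11.63      # 1 kg of oil equivalent = 11.63 kWh
mwh = kwh / 1000                   # reduce to megawatt-hours
print(kwh, round(mwh))             # about 29026.66 kWh, hence n = 29
```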

Now, according to Thomas Bayes’ logic, I create a space of probabilities by doing (0,1252 + 0,8748)^29. The biggest mistake I could make at this point would be to assume that 0,1252 + 0,8748 = 1, which is true, of course, but most impractical for creating spaces of probability. The right way of thinking about it is that I have two distinct occurrences, one marked 0,1252, the other marked 0,8748, and I project those occurrences into a space made of 29 dimensions. In this interesting world, where you have between six and eight ways of being late or being tall, I have like patches of probability. Each of those patches reflects my preferences. You want to have 5 megawatt-hours, out of those 29, generated from renewable sources, Mr Wasniewski? As you please, that will make you odds of (5^(29-5)/(29-5)!)*0,1252^5*0,8748^(29-5) = 1,19236E-13 of reaching this particular score. The problem, Mr Wasniewski, is that you have only 0,000000096 ways of reaching it, which is a bit impractical, as ways come. Could be impossible to do, as a matter of fact.

So, when I create my multiverse of probability the Thomas Bayes way, some patches of probability turn out to be just impracticable. If I have like only 0,000000096 ways of doing something, I have a locked box, with the key to the lock being locked inside the box. No point in bothering about it. When I settle for 10 megawatt-hours successfully generated from renewable sources, against 19 megawatt-hours coming from them fossil fuels, the situation changes. I have (10^(29-10)/(29-10)!) = 82,20635247, or rather between 82 and 83, although closer to 82, ways of achieving this particular result. The cumulative probability of 10 successes, which I can score in those 82,20635247 ways, is equal to (10^(29-10)/(29-10)!)*0,1252^10*0,8748^(29-10) = 0,0000013. Looks a bit like the probability of meeting an alien civilisation whilst standing on my head at 5 a.m. in Lisbon, but mind you, this is just one patch of probability, and I have more than 82 ways of hitting it. My (0,1252 + 0,8748)^29 multiverse contains 29! = 8,84176E+30 such patches of probability, some of them practicable, like 10 megawatt-hours out of 29, others not quite, like 5 megawatt-hours over 29. Although Thomas Bayes wanted to escape the de Moivre – Laplace world of great numbers, he didn’t truly manage to. As you can see, patches of probability on the sides of this multiverse, with very few successes or very few failures, seem blinking red, like the ‘Occupied’ sign on the door to restrooms. Only the kind of balanced ones, with successes and failures scoring close to fifty-fifty, yield more than one way of hitting them. Close to the mean, man, you’re safe and feasible, but as you go away from the mean, you can become less than one, kind of.
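Both patches above follow the same p^q/q! recipe, with q counted as 29 minus the wanted successes. A Python sketch of the two cases (I only assert the counts of ‘ways’, which are the figures the argument leans on):

```python
from math import factorial

def approx_ways(p, q):
    # the p^q / q! 'number of ways' used in the text (p successes, q failures)
    return p ** q / factorial(q)

n = 29
for successes in (5, 10):
    q = n - successes
    ways = approx_ways(successes, q)
    prob = ways * 0.1252 ** successes * 0.8748 ** q
    print(successes, ways, prob)
```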

Thus, if I want to use the original Bayesian method in my thinking about the transition towards renewable energies, it is better to consider those balanced cases, which I can express in the form of just a few successes and a few failures. As soon as tail events enter into my scope of research, when I am really honest about it, I have to settle for the classical approach based on the mean and expected values, the de Moivre – Laplace way. I can change my optic to use the Bayesian method more efficiently, though. I consider 5 local projects, in 5 different towns, and I want to assess the odds of at least 3 of them succeeding. I create my multiverse of probabilities as (0,1252 + 0,8748)^(3+2) = (0,1252 + 0,8748)^5, which has the advantage of containing just 5! = 120 distinct patches of probability. Kind of more affordable. Among those 120 patches of probability, my target, namely 3 successful local projects out of 5 initiated, amounts to (3^2/2!) = 4,5 ways of doing it (so between 4 and 5), and all those alternative ways yield a compound probability of (3^2/2!)*0,1252^3*0,8748^2 = 0,006758387. Definitely easier to wrap my mind around it.
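The five-towns case is compact enough to verify in full, with the same p^q/q! convention:

```python
from math import factorial

p, q = 3, 2                                # 3 successes wanted, 2 failures tolerated
ways = p ** q / factorial(q)               # between 4 and 5 ways
prob = ways * 0.1252 ** p * 0.8748 ** q    # compound probability of the target
print(ways, prob)
```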

I said, at the beginning of today’s update, that I am using Thomas Bayes’ theory as a filter for my findings, just to check my logic. Now, I see that the results of my quantitative tests, those presented in previous updates, should be transformed into simple probabilities, those a’s and b’s to put inside (a + b) when doing (a + b)^(p+q). My preferences as for successes and failures should be kept simple and realistic, preferably below 10.

[1] Bayes, Mr., and Mr. Price. “An Essay towards Solving a Problem in the Doctrine of Chances. By the Late Rev. Mr. Bayes, F.R.S., Communicated by Mr. Price, in a Letter to John Canton, A.M., F.R.S.” Philosophical Transactions (1683–1775) (1763): 370–418

A race across target states, or Bayes and Nakamoto together

My editorial

And so I continue prodding my idea of local, green energy systems, with different theories of probability. The three inside me – my curious ape, my austere monk, and my happy bulldog – are having a conversation with two wise men: reverend Thomas Bayes, and Satoshi Nakamoto. If you need to keep track of my last updates, you can refer to ‘Time puts order in the happening’ as well as to ‘Thomas Bayes, Satoshi Nakamoto et bigos’. And so I am at the lemmas formulated by Thomas Bayes, and at the basic analytical model proposed by Nakamoto. Lemma #1 by Thomas Bayes says: ‘The probability that the point o will fall between any two points in the line AB is the ratio of the distance between the two points to the whole line AB’. Although Thomas Bayes provides a very abundant geometric proof of this statement, I think it is one of those things you just grasp intuitively. My chances of ever being at the coast of the Pacific Ocean are greater than those of ever visiting one tiny, coastal village in Hawaii, just because the total coastline of the Pacific is much bigger an expanse than one tiny Hawaiian village. The bigger my target zone in relation to the whole universe of probability, the greater my probability of hitting the target. Now, in lemma #2, we read pretty much the same, just with some details added: ‘The ball W having been thrown, and the line os drawn, the probability of the event M in a single trial is the ratio of Ao to AB’.

I think a little reminder is due in relation to those two Bayesian lemmas. As for the details of Bayes’s logic, you can refer to Bayes, Price 1763[1]; I am just re-sketching the landscape now. The whole universe of probability, in Thomas Bayes’s method, is a flat rectangle ABCD, with corners being named clockwise, starting from A at the bottom right, as if that whole universe started around 4 o’clock. AB is kind of the width of anything that can happen. Although this universe is a rectangle, it is essentially unidimensional, and AB is that dimension. I throw two balls, W and O. I throw W first, and the point where it lands in the rectangle ABCD becomes a landmark. I draw a line through that point, perpendicular to AB, crossing AB at the point o, and CD at the point s. The line os becomes the Mississippi river of that rectangle: from now on, two sub-universes emerge. There is that sub-universe of M happening, or success, namely of the second ball, the O, landing between the lines os and AD (in the East). On the other hand, there are all those strange things that happen on the other side of the line os, and those things are generally non-M, and they are failures to happen. The probability of the second ball O hitting M, or landing between the lines os and AD, is equal to p, or p = P(M). The probability of the ball O landing west of the Mississippi, between the lines os and BC, is equal to q, and this is the probability of a single failure.
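Lemma #2 lends itself to a brute-force check: normalise AB to length 1, throw W once to fix the point o, then throw O many times and count the landings between os and AD. A minimal Monte Carlo sketch in Python (the trial count and the seed are arbitrary choices of mine):

```python
import random

random.seed(42)                    # arbitrary seed, for repeatability
AB = 1.0                           # normalise the width of the rectangle
o = random.random() * AB           # ball W fixes the point o; the lemma says P(M) = Ao / AB
trials = 100_000
hits = sum(random.random() * AB < o for _ in range(trials))   # ball O lands east of os
print(o, hits / trials)            # the empirical frequency converges on Ao / AB
```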

On the grounds of those two lemmas, Thomas Bayes states one of the most fundamental propositions of his whole theory, namely proposition #8: ‘If upon BA you erect a figure BghikmA, whose property is this, that (the base BA being divided into any two parts, as Ab and Bb and at the point of division b a perpendicular being erected and terminated by the figure in m; and y, x, r representing respectively the ratio of bm, Ab, and Bb to AB, and E being the coefficient of the term in which occurs a^p*b^q when the binomial [a + b]^(p + q) is expanded) y = E*x^p*r^q. I say that before the ball W is thrown, the probability the point o should fall between f and b, any two points named in the line AB, and that the event M should happen p times and fail q [times] in p + q = n trials, is the ratio of fghikmb, the part of the figure BghikmA intercepted between the perpendiculars fg, bm, raised upon the line AB, to CA the square upon AB’.

Right, I think that with all those lines, points, sections, and whatnot, you could do with some graphics. Just click on this link to the original image of the Bayesian rectangle and you will see it as I tried to recreate it from the original. I think I did it kind of rectangle-perfectly. Still, according to my teachers of art, at school, my butterflies could very well be my elephants, so be clement in your judgment. Anyway, this is the Bayesian world, ingeniously reducing the number of dimensions. How? Well, in a rectangular universe ABCD, anything that can happen is basically described by the powers AB^BC or BC^AB. Still, if I assume that things happen just kind of on one edge, the AB, and this happening is projected upon the opposite edge CD, with the remaining two edges, namely BC and DA, just standing aside and watching, I can reduce a square problem to a linear one. I think this is the whole power of geometry in mathematical thinking. Whilst it would be foolish to expect rectangular universes in our everyday life, it helps in dealing with dimensions.

Now, you can see the essence of the original Bayesian approach: imagine a universe of occurrences, give it some depth by adding dimensions, then give it some simplicity by taking some dimensions away from it, and map your occurrences in the thus-created expanse of things that can happen. Now, I jump to Satoshi Nakamoto and his universe. I will quote, to give an accurate account of the original logic: ‘The success event is the honest chain being extended by one block, increasing its lead by +1, and the failure event is the attacker’s chain being extended by one block, reducing the gap by -1. The probability of an attacker catching up from a given deficit is analogous to a Gambler’s Ruin problem. Suppose a gambler with unlimited credit starts at a deficit and plays potentially an infinite number of trials to try to reach breakeven. We can calculate the probability he ever reaches breakeven, or that an attacker ever catches up with the honest chain, as follows:

p = probability an honest node finds the next block

q = probability the attacker finds the next block

q_z = probability the attacker will ever catch up from z blocks behind

Now, I rephrase slightly the original Nakamoto’s writing, as the online utilities I am using on my mutually mirroring blogs – and – are not really at home with displaying equations. And so, if p ≤ q, then q_z = 1. If, on the other hand, p > q, my q_z = (q/p)^z. As I mentioned in one of my previous posts, I use the original Satoshi Nakamoto’s thinking in the a contrario way, where my idea of local green energy systems is Nakamoto’s attacker, and tries to catch up with the actual socio-economic reality from z blocks behind. For the moment, and basically for lack of a better idea, I assume that my blocks can be carved in time or in capital. I explain: catching up from z blocks behind might mean catching up in time, like from a temporal lag, or catching up across the expanse of the capital market. I take a local community, like a town, and I imagine its timeline over the 10 years to come. Each unit of time (day, week, month, year) is one block in the chain. Me, with my new idea, I am the attacker, and I am competing with other possible ideas for the development and/or conservation of that local community. Each idea, mine and the others, tries to catch on over those blocks of time. Nakamoto’s logic allows me to guess the right time frame, in the first place, and my relative chances in the competition. Is there any period of time, over which I can reasonably expect my idea to take over the whole community, sort of q_z = 1? This value z can also be my time advantage over other projects. If yes, this will be my maximal planning horizon. If not, I just simulate my q_z with different extensions of time (different values of z), and I try to figure out how my odds change as z changes.

If, instead of moving through time, I am moving across the capital market, my initial question changes: is there any amount of capital, like any amount z of capital chunks, which makes my qz = 1? If yes, what is it? If not, what schedule of fundraising should I adopt?

Mind you, this is a race: the greater my z, the lower my qz. The more time I have to cover in order to have my project launched, the lower my chances to ever catch up. This is a notable difference between the Bayesian framework and that of Satoshi Nakamoto. The former says: your chances to succeed grow as the size of your target zone grows in relation to everything that can possibly happen. The more flexible you are, the greater your chances of success. In Nakamoto’s framework, on the other hand, the word of wisdom is different: the greater your handicap relative to other projects, ideas, people and whatnot, in terms of time or resources to grab, the lower your chances of succeeding. The total wisdom coming from that is: if I want to design a business plan for those local, green energy systems, I have to imagine something flexible (a large zone of target states) and, at the same time, something endowed with pretty comfortable a pole position over my rivals. I guess that, at this point, you will say: good, you could have come to that right at the beginning. ‘Be flexible and gain some initial advantage’ is not really science. This is real life. Yes, but what I am trying to demonstrate is precisely the junction between the theory of probability and real life.

[1] Bayes, T., & Price, R. (1763). An essay towards solving a problem in the doctrine of chances. Philosophical Transactions (1683-1775), 370-418.

Time puts order in the happening

My editorial

I am developing on what I have done so far. The process, I believe, is called ‘living’, in general, but I am approaching just a tiny bit of it, namely my latest developments on making a local community run at 100% on green energy (see my latest updates “Conversations between the dead and the living (no candles)” and ‘Quelque chose de rationnellement prévisible’). I am working with the logic of Bayesian statistics, and more specifically with the patient zero of this intellectual stream, reverend Thomas Bayes in person (Bayes, Price 1763[1]). I have those four conditions, which, taken together, define my success:

Q(RE) = S(RE) = D(E) << 100% of energy from local green sources

P(RE) ≤ PP(E) << price of renewable energy within individual purchasing power

ROA ≥ ROA* << return on assets from local green installations superior or equal to a benchmark value

W/M(T1) > W/M(T0) << a local virtual currency based on green energy takes on the market, progressively

Now, as I study the original writing by Thomas Bayes, and as I read his geometrical reasoning, I think I should stretch a little the universe of my success. Stretching universes allows a better perspective. Thomas Bayes defines the probability of p successes and q failures in p + q = n trials as E*a^p*b^q, where a and b are the simple probabilities of, respectively, a single success and a single failure, and E is the coefficient of a^p*b^q when you expand the binomial (a + b)^(p+q). That coefficient is equal to E = (p+q)!/(p!*q!), by the way. Thank you, Isaac Newton. Thank you, Blaise Pascal. Anyway, if I define my success as just one success, so if I take p = 1, it makes no sense. That Bayesian expression tends to yield a probability of success equal to 100%, in such cases, which, whilst comforting in some way, sounds just stupid. A universe made of one hypothetical success, and nothing but failures for want of success, seems a bit rigid for the Bayesian approach.
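For the record, that binomial expression is easy to check numerically. A minimal sketch in Python, with a purely illustrative value of a (Python’s math.comb gives the coefficient E directly):

```python
from math import comb

def bayes_binomial(a: float, p: int, q: int) -> float:
    """Probability of p successes and q failures in n = p + q trials,
    in Bayes's notation: E * a**p * b**q, with b = 1 - a and
    E = (p+q)! / (p! * q!), i.e. the binomial coefficient."""
    b = 1.0 - a
    E = comb(p + q, p)
    return E * a**p * b**q

# Illustrative: probability of exactly one success in ten trials, a = 0.1.
print(round(bayes_binomial(a=0.1, p=1, q=9), 4))  # → 0.3874
```

Summing bayes_binomial over all possible p from 0 to n gives 1, which is a handy sanity check that the coefficient E is the right one.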

And so I am thinking about applying those four conditions to individuals, and not necessarily to whole communities. I mean, my success would be one person fulfilling all those conditions. Let’s have a look. Conditions 1 and 2, no problem. One person can do Q(RE) = S(RE) = D(E), or consume as much energy as they need and all that in green. One person can also easily P(RE) ≤ PP(E) or pay for that green energy no more than their purchasing power allows. With condition 4, it becomes tricky. I mean, I can imagine that one single person uses more and more of the Wasun, or that local cryptocurrency, and that more and more gets bigger and bigger when compared to the plain credit in established currency that the same person is using. Still, individual people hold really disparate monetary balances: just compare yourself to Justin Bieber and you will see the gap. In monetary balances of significantly different a size, structure can differ a lot, too. Thus, whilst I can imagine an individual person doing W/M(T1) > W/M(T0), that would take a lot of averaging. As for condition 3, or ROA ≥ ROA*, I think that it just wouldn’t work at the individual level. Of course, I could do all that sort of gymnastics like ‘what if the local energy system is a cooperative, what if every person in the local community has some shares in it, what if their return on those shares impacted significantly their overall return on assets etc.’ Honestly, I am not feeling the blues, in this case. I just don’t trust too many whatifs at once. ROA is ROA, it is an accounting measure, I like it solid and transparent, without creative accounting.

Thus, as I consider stretching my universe, some dimensions look more stretchable than others. Happens all the time, nothing to inform the government about, and yet educative. The way I formulate my conditions of success impacts the way I can measure the odds of achieving it. Some conditions are more flexible than others, and those conditions are more prone to fancy mathematical thinking. Those stiff ones, i.e. not very stretchable, are something the economists don’t really like. They are called ‘real options’ or ‘discrete variables’ and they just look clumsy in a model. Anyway, I am certainly going to return to that stretching of my universe, subsequently, but now I want to take a dive into the Bayesian logic. In order to get anywhere, once immersed, I need to expand that binomial: (a + b)^(p+q). Raising anything to a power is like meddling with the number of dimensions the thing stretches along. Myself, for example, raised to power 0.75, or ¾, means that first, I gave myself a three-dimensional extension, which I usually pleasantly experience, and then I tried to express this three-dimensional existence with a four-dimensional denominator, with time added to the game. As a result, after having elevated myself to power 0.75, I end up with plenty of time I don’t know what to do with. Somehow familiar, but I don’t like it. Dimensions I don’t know what to do with look like pure waste to me. On the whole, I prefer elevating myself to integers. At least, I stay in control.

This, in turn, suggests a geometrical representation, which I indeed can find with Thomas Bayes. In Section II of his essay, Thomas Bayes starts by writing the basic postulates: ‘Postulate 1. I suppose the square table or plane ABCD to be so levelled that if either of the balls O or W be thrown upon it, there shall be the same probability that it rests upon any one equal part of the plane or another, and that it must necessarily rest somewhere upon it. Postulate 2. I suppose that the ball W will be first thrown, and through the point where it rests a line ‘os’ shall be drawn parallel to AD, and meeting CD and AB in s and o; and that afterwards the ball O will be thrown p + q = n times, and that its resting between AD and os after a single throw be called the happening of the event M in a single trial’. OK, so that’s the original universe by reverend Bayes. Interesting. A universe is defined, with a finite number of dimensions. Anyway, as I am an economist, I will subsequently reduce any number of dimensions to just two, as reverend Bayes did. As my little example of elevating myself to power 0.75 showed, there is no point in having more dimensions than you can handle. Two is fine.

In that k-dimensional universe, two events happen, in a sequence. The first one is the peg event: it sets a reference point, and a reference tangent. That tangent divides the initial universe into two parts, sort of on the right of the Milky Way as opposed to all those buggers on the left of it. Then, the second event happens, and this one is me in action: I take n trials with p successes and q failures. Good. As I am quickly thinking about it, it always gives me one extra dimension over the k dimensions in my universe. That extra dimension is order rather than size. In the original notation by Thomas Bayes, he has two dimensions in his square, and then time happens, and two events happen in that time. Time puts order in the happening of the two events. Hence, that extra dimension should be sort of discrete, with well-defined steps and no available states in between. I have two states of my k-dimensional universe: state sort of 1 with just the peg event in it, and sort of state 2, with my performance added inside. State 1 narrows down the scope of happening in state 2, and I want to know the odds of state 2 happening within that scope.
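That two-state structure can be simulated directly: the peg event (ball W) sets a threshold, then my performance (ball O) is thrown n times against it. A minimal Monte Carlo sketch, one-dimensional for simplicity, with illustrative numbers of trials and experiments:

```python
import random

def bayes_table(n_trials: int, n_experiments: int, seed: int = 42) -> float:
    """Simulate Bayes's table: ball W sets a peg x (state 1), then ball O
    is thrown n_trials times (state 2); a throw landing below x counts as
    the event M happening. Returns the average number of successes over
    repeated experiments."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_experiments):
        x = rng.random()                                         # peg event: ball W
        total += sum(rng.random() < x for _ in range(n_trials))  # trials: ball O
    return total / n_experiments

# With the peg uniform on [0, 1], the expected number of successes
# in n trials is n/2, so this should print something close to 5:
print(bayes_table(n_trials=10, n_experiments=100_000))
```

The point of the simulation is precisely the ordering: state 1 (the peg) is drawn once and narrows down the scope within which state 2 (the n throws) happens.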

Now, I am thinking about ball identity. I mean, what could play the part of that first, intrepid ball W, which throws itself head first to set the first state of my universe? From the first condition, I take the individual demand for energy: D(E). The second condition yields individual purchasing power regarding energy PP(E), the third one suggests the benchmark value regarding the return on assets ROA*. I have a bit of a problem with the fourth condition, but after some simplification I think that I can take time, just as reverend Bayes did. My W ball will be the state of things at the moment T0, regarding the monetary system, or W/M(T0). Good, so my universe can get some order through four moves, in which I set four peg values, taken from the four conditions. The extra dimension in my universe is precisely the process of setting those benchmarks.

[1] Bayes, T., & Price, R. (1763). An essay towards solving a problem in the doctrine of chances. Philosophical Transactions (1683-1775), 370-418.

Conversations between the dead and the living (no candles)

My today’s editorial

I have been away from blogging for two days. I have been finishing that article about technological change seen from an evolutionary perspective, and I hope I have finished, at least as a raw manuscript. If you are interested, you can download it from Research Gate or from my own website with Word Press. Now, as the paper is provisionally finished, I feel like having an intellectual stroll, possibly in the recent past. I am tempted to use those evolutionary patterns of thinking on something I had been quite busy with a few months ago, namely on financial tools, including virtual currencies, as a means to develop new technologies. I had been particularly interested in the application of virtual currencies to the development of local power systems based on renewable energies, but in fact, I can apply the same frame of thinking to any technology, green energy or else. Besides, as I was testing various empirical models to represent evolutionary change in technologies, monetary variables frequently poked their heads through some hole, usually as correlates to residuals.

So, I return to money. For those of my readers who would like to refresh their memory or simply get the drift of that past writing of mine, you can refer, for example, to ‘Exactly the money we assume’ or to ‘Some insights into Ethereum whilst insulating against bullshit’, as well as to other posts I placed around that time. Now, I want to move on and meddle a bit with Bayesian statistics, and more exactly with the source method presented in the posthumous article by reverend Thomas Bayes (Bayes, Price 1763[1]), which, by the way, you can get from the JSTOR library via this link. I want to both wrap my mind around Thomas Bayes’s way of thinking, and refresh my own thinking about monetary systems. I have that strange preference for organizing conversations between the dead and the living (no candles), so I feel like putting reverend Bayes in conversation with Satoshi Nakamoto, the semi-mythical founding father of the Bitcoin movement, whose article, which you can download via this link from my Word Press website, contains some mathematical analysis based on the Poisson probability.

My initial question, the one I had been wrestling with this Spring, was the following: how can a local community develop a local system of green energy, and a local virtual currency, and how can these two help the development or the transformation of said local community? Why do I bother, posthumously, reverend Thomas Bayes with this question? Well, because this is what he stated as the purpose of his article. In the general formulation of the problem, he wrote: ‘Given the number of times in which an unknown event has happened and failed: Required the chance that the probability of its happening in a single trial lies somewhere between any two degrees of probability that can be named’. The tricky part in this statement is the ‘unknown’ part. When we studied probabilities at high school (yes, some of us didn’t take a nap during those classes!), one of the first things we were taught to do was to define exactly the event that we want to assess the probability of happening. You remember? Red balls vs. black balls, in a closed box? Rings a bell? Well, Thomas Bayes stated a different problem: how to tackle the probability that something unknown happens? Kind of a red ball cross-bred with a black ball, with a hint of mésalliance with a white cube, in family records. In the last, concluding paragraph of his essay, Thomas Bayes wrote: ‘But what recommends the solution in this Essay is that it is complete in those cases where information is most wanted, and where Mr De Moivre’s solution of the inverse problem can give little or no direction, I mean, in all cases where either p or q are of no considerable magnitude. In other cases, or when both p and q are very considerable, it is not difficult to perceive the truth of what has been here demonstrated, or that there is reason to believe in general that the chances for the happening of an event are to the chances for its failure in the same ratio with that of p to q. But we shall be greatly deceived if we judge in this manner when either p or q are small. And though in such cases the Data are not sufficient to discover the exact probability of an event, yet it is very agreeable to be able to find the limits between which it is reasonable to think it must lie, and also to be able to determine the precise degree of assent which is due to any conclusions or assertions relating to them’.

Before I go further: in the original notation by Thomas Bayes, p and q are the respective numbers of successes and failures, and not probabilities. Especially if you are a native French speaker, you might have learnt, at school, p and q as probabilities, so be on your guard. You’d better always be on your guard, mind you. You never know where your feet can lead you. So, I am bothering late reverend Bayes because he was investigating the probability of scoring a relatively small number of successes in a relatively small number of trials. If you try to launch a new technology, locally, how many trials can you have? I mean, if your investors are patient, they can allow some trial and error, but in reasonable amounts. You also never know for sure what the reasonable amount of trial and error means for a given investor. You have the unknown event, see? Just as Thomas Bayes stated his problem. So I take my local community, I make a perfect plan, with a plan B possibly up our local sleeve, I take some risks, and then someone from the outside world wants to assess the odds that I succeed. The logic by Thomas Bayes can be a path to follow.

Satoshi Nakamoto, in that foundational article about the idea of the Bitcoin, treated mostly the issues of security. Still, he indirectly gives an interesting insight concerning the introduction of new inventions in an essentially hostile environment. When he simulates a cyberattack on a financial system, he uses the general framework of Poisson probability to assess the odds that an intruder from outside can take over a network of mutually interacting nodes. I am thinking about inverting his thinking, i.e. about treating the introduction of a new technology, especially in a local community, as an intrusion from outside. I could treat Nakamoto’s ‘honest nodes’ as the conservatives in the process, resisting novelty, and the blocks successfully attacked by the intruder would be the early adopters. Satoshi Nakamoto used the Poisson distribution to simulate that process and here he meets reverend Bayes, I mean, metaphorically. The Poisson distribution is frequently called the ‘probability of rare events’, and uses the same general framework as the original Bayesian development: something takes place n times in total, in p cases that something is something we wish to happen (success), whilst in q cases it is utter s**t happening (failure), and we want to calculate the compound probability of having p successes and q failures in n trials. By the way, if you are interested in the original work by Simeon Denis Poisson, a creative Frenchman, who, technically being a mathematician, tried to be very nearly everything else, I am placing on my Word Press site two of his papers: the one published in 1827 and that of 1832 (presented for the first time in 1829).

And so I have that idea of developing a local power system, based on green energies, possibly backed with a local virtual currency, and I want to assess the odds of success. Both the Bayesian thinking and the Poissonian one are sensitive to how we define, respectively, success and failure, and to how much uncertainty we leave in this definition. In business, I can define my success in various metrics: size of the market covered with my sales, prices, capital accumulated, return on that capital etc. This is, precisely, the hurdle to jump when we pass from the practice of business to its theoretical appraisal: we need probabilities, and in order to have probabilities, we need some kind of event being defined, at least foggily. What’s a success, here? Let’s try the following: what I want is a local community entirely powered with locally generated, renewable energies, in a socially and financially sustainable manner.

‘Entirely powered’ means 100%. This one is simple. Then, I am entering the dark forest of assumptions. Let’s say that ‘socially sustainable’ means that every member of the local community should have that energy accessible within their purchasing power. ‘Financially sustainable’ is trickier: investors can be a lot fussier than ordinary folks, regarding what is a good deal and what isn’t. Still, I do not know, a priori, who those investors could possibly be, and so I take a metric, which leaves a lot of room for further interpretation, namely the rate of return on assets. I prefer the return on assets (ROA) to the rate of return on equity (ROE), because for the latter I would have to make some assumptions regarding the capital structure of the whole thing, and I want as weak a set of assumptions as possible. I assume that said rate of return on assets should be superior or equal to a benchmark value. By the way, weak assumptions in science are the exact opposite of weak assumptions in life. In life, weak assumptions mean I am probably wrong because I assumed too much. In science, weak assumptions are probably correct, because I assumed just a little, out of the whole expanse of what I could have assumed.

Right. Good. So what I have are the following variables: local demand for energy D(E), local energy supply from renewable sources S(RE), price of renewable energy P(RE), purchasing power regarding energy PP(E), and rate of return on assets (ROA). With these, I form my conditions. Condition #1: the local use of energy is a local equilibrium between the total demand for energy and the supply of energy from renewable sources: Q(RE) = S(RE) = D(E). Condition #2: the price of renewable energy is affordable, or: P(RE) ≤ PP(E). Condition #3: the rate of return on assets is greater than or equal to a benchmark value: ROA ≥ ROA*. That asterisk on the right side of that last condition is the usual symbol to show something we consider as a peg value. Right, I use the asterisk in other types of elaborate expressions, like s*** or f***. The asterisk is the hell of a useful symbol, as you can see.

Now, I add that idea of local, virtual currency based on green energies. Back in the day, I used to call it ‘Wasun’, a play on the words ‘water’ and ‘sun’. You can look up ‘Smart grids and my personal variance’ or ‘Les moulins de Wasun’ (in French) in order to catch a bit of my drift (again?). I want a local, virtual currency being a significant part of the local monetary system. I define ‘significant part’ as an amount likely to alter the supply of credit, in established currency, in the local market. I use that old trick of the supply of credit being equal to the supply of money, and so being possible to symbolize with M. I assign the symbol ‘W’ to the local supply of the Wasun. I take two moments in time: the ‘before’, represented as T0, with T1 standing for the ‘after’. And so I formulate condition #4: W/M(T1) > W/M(T0).
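Wrapping the four conditions into one compound event is straightforward to sketch in code; all the variable names and values below are illustrative placeholders, not data about any real community.

```python
def is_success(d_e, s_re, p_re, pp_e, roa, roa_star, w_m_t0, w_m_t1) -> bool:
    """Check the four conditions of success defined above:
    #1: local energy demand entirely covered by renewable supply,
    #2: renewable energy affordable within purchasing power,
    #3: return on assets at or above the benchmark,
    #4: the Wasun gains share in the local monetary system."""
    return (s_re == d_e             # condition #1: Q(RE) = S(RE) = D(E)
            and p_re <= pp_e        # condition #2: P(RE) <= PP(E)
            and roa >= roa_star     # condition #3: ROA >= ROA*
            and w_m_t1 > w_m_t0)    # condition #4: W/M(T1) > W/M(T0)

# Purely illustrative values:
print(is_success(d_e=100, s_re=100, p_re=0.12, pp_e=0.15,
                 roa=0.07, roa_star=0.05, w_m_t0=0.02, w_m_t1=0.05))  # → True
```

Once the success event is a predicate like this, any of the probabilistic framings discussed here (Bayes, Poisson, Nakamoto) can be plugged on top of it: the event is defined, which is exactly what the theory demands.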

Wrapping it up, any particular event falling into:

Q(RE) = S(RE) = D(E)

P(RE) ≤ PP(E)

ROA ≥ ROA*

W/M(T1) > W/M(T0)

… is a success. Anything outside those four brackets is a failure. Now, I can take three basic approaches in terms of probability. Thomas Bayes would assume a certain number n of trials, look for the probability of all the four conditions being met in one single trial, and then would ask me how many trials (p) I want to have successful, out of n. Simeon Denis Poisson would rather have taken an interval of time, and then would have tried to assess the probability of having all the four conditions met at least once in that interval of time. Satoshi Nakamoto would make up a different strategy still. He would assume that my project is just one of many going on in parallel in that little universe, and that other projects try to achieve their own conditions of success, similar to mine or different, as I try to do my thing. The next step would be to define whose success would be my failure, and then I would have to compute the probability of my success in the presence of those competing projects. Bloody complicated. I like it. I’m in.

[1] Bayes, T., & Price, R. (1763). An essay towards solving a problem in the doctrine of chances. Philosophical Transactions (1683-1775), 370-418.