The path of thinking, which has brought me to think what I am thinking now

My editorial

I am thinking about the path of research to take from where I am now. A good first step towards defining that path would be to know exactly where I am now, mind you. I feel like summarising a chunk of my work, approximately the last three weeks, maybe more. As I finished that article about technological change seen as an intelligent, energy-maximizing adaptation, I kind of went back to my idea of local communities being powered at 100% by renewable energies. I wanted to lay some scientific foundations for a business plan that a local community could use to go green at 100%. More or less intuitively, I don’t really know why exactly, I connected this quite practical idea to Bayesian statistics, and I went straight for the kill, so to say, by studying the foundational paper of this whole intellectual stream, the one from 1763 (Bayes, Price 1763[1]). I wanted to connect the idea of local communities based entirely on renewable energies to that of a local cryptocurrency (i.e. based on Blockchain technology), somehow attached to the local market of energy. As I made this connection, I kind of put back to back the original paper by Thomas Bayes with that by Satoshi Nakamoto, the equally mysterious intellectual father of Bitcoin. Empirically, I did some testing at the level of national data about the final consumption of energy, and about the primary output of electricity; I mean, about the share of renewable energy in these. What I have, out of that empirical testing, is quite a lot of linear models, where I multiple-regress the shares, or the amounts, of renewable energies on a range of socio-economic variables. Those multiple regressions brought some seemingly solid stuff.
The share of renewable energies in the primary output of electricity is closely correlated with the overall dynamics in the final consumption of energy: the faster the growth of that total market of energy, the greater the likelihood of shifting the production of electricity towards renewables. As far as dynamics are concerned, the years 2007 – 2008 seem to have marked some kind of threshold: until then, the global market in renewable energies had been growing at a slower pace than the total market of energy, whilst since then, those paces have switched, and the renewables have started to grow faster than the whole market. I am still wrapping my mind around that fact. The structure of economic input, understood in terms of the production function, matters as well. Labour-intensive societies seem to be more prone to going green in their energy base than capital-intensive ones. As I was testing those models, I intuitively used the density of population as a control variable. You know, that variable which is not quite inside the model, but kind of sitting by and supervising. I tested my models in separate quantiles of density of population, and some interesting distinctions came out of it. As I tested the same model in consecutive sextiles of density of population, the model went through a cycle of change, with the most explanatory power, and the most robust correlations, occurring in the presence of the highest density of population.

I feel like asking myself why I have been doing what I have been doing. I know, for sure, that the ‘why?’ question is abyssal, and a more practical way of answering it consists in hammering it into a ‘how?’. What has been my process? Step 1: I finish an article, and I come to the conclusion that I can discuss technological change in the human civilisation as a process of absorbing as much energy as we can, and of adapting to maximise that absorption through an evolutionary pattern similar to sexual selection. Step 2: I blow some dust off my earlier idea of local communities based on renewable energies. What was the passage from Step 1 to Step 2? What had been crystallising in my brain at the time? Let’s advance step by step. If I think about local communities, I am thinking about a dispersed structure, kind of a network, made of separate and yet interconnected nodes. I was probably trying to translate those big, global paradigms, which I had identified before, into local phenomena, the kind you can experience whilst walking down the street, starting a new small business, or looking for a new job. My thinking about local communities going 100% green in their energy base could be an expression of an even deeper and less articulate thinking about how we, humans, in our social structures, maximize that absorption of energy I wrote about in my last article.

Good, now Step 3: I take on the root theory of Bayesian statistics. What made me take that turn? I remember I started to read that paper out of pure curiosity. I like reading the classics, very much because only by reading them do I discover how much bulls*** has been said about their ideas. What attracted my attention, I think, in the original theory by Thomas Bayes, was that vision of a semi-ordered universe, limited by previous events, and the attempt to assess the odds of having a predictable number of successes over quite a small number of trials, a number so small that it defies the logic of expected values in big numbers, à la De Moivre and Laplace. I was visibly thinking about people, in local communities, making their choices, taking a limited number of trials at achieving some outcome, and continuing or giving up, according to said outcomes. I think I was trying, at the time, to grasp the process of maximizing the absorption of energy as a sequence of individual and collective choices, achieved through trial and error, with that trial and error chaining into itself, i.e. creating a process marked by hysteresis.

Step 4: putting the model of the Bitcoin, by Satoshi Nakamoto, back to back with the original logic by Thomas Bayes. The logic used by Satoshi Nakamoto, back in the day, was that of a race, inside a network, between a crook trying to abuse the others, and a chained reaction on the part of ‘honest’ nodes. The questions asked were: how quick does a crook have to be in order to overcome the chained reaction of the network? How big, and how quick on the uptake, does the network have to be in order to fend the crook off? I was visibly thinking about rivalling processes, where rivalry sums up to overtaking and controlling some kind of consecutive nodes in a network. What kind of processes could I have had in mind? Well, the most obvious choice is the processes of absorbing energy: we strive to maximise our absorption of energy, we have the choice between renewable energies and the rest (fossils plus nuclear), and those choices are chained, and they are chained so as to unfold in time at various speeds. I think that when I put Thomas Bayes and Satoshi Nakamoto on the same school bench, the undertow of my thinking was something like: how do the choices we make influence the further choices we make, and how does that chain of choices impact the speed at which the market of renewable energy develops, as compared to the market of other energy sources?

Step 5: empirical tests, those multiple regressions in a big database made of ‘country – year’ observations. Here, at least, I am pretty much at home with my own thinking: I know I habitually represent in my mind those big economic measures, like GDP per capita, or density of population, or the percentage of green energy in my electric socket, as the outcome of complex choices made by simple people, including myself. As I did that regressing, I probably, subconsciously, wanted to understand how some types of economic choices we make impact other types of choices, more specifically those connected to energy. I found some consistent patterns at this stage of research. Choices about the work we do, about professional activity, and about the wages we pay and receive, are significant to the choices about energy. The very basic choice to live in a given place, and so to cluster together with other humans, has a word or two to say as well. The choices we make about consuming energy, and more specifically the choice of consuming more energy than the year before, are very important for the switch towards the renewables. Now, I noticed that turning point, in 2007 – 2008. Following the same logic, 2007 – 2008 must have been the point in time where the aggregate outcomes of individual decisions concerning work, wages, settlement and the consumption of energy summed up into a change observable at the global scale. Those outcomes are likely to emerge, in fact, from a long chain of choices, where the Bayesian space of available options has been sequentially changing under the impact of past choices, and where a Bitcoin-like race of rivalling technologies took place.

Step 6: my recent review of literature about the history of technology showed me a dominant path of discussion, namely that of technological determinism, and, kind of on the margin of that, the so-called Moore’s law of exponentially growing complexity in one particular technology: electronics. What did I want to understand by reviewing that literature? I think I wanted some ready-made (well, maybe bespoke) patterns, to dress my empirical findings for posh occasions, such as a conference, an article, or a book. I found out, with surprise, that the same logic of ‘choice >> technology >> social change >> choice etc.’ has been followed by many other authors, and that it is, actually, the dominant way of thinking about the history of technology. Right, this is the path of thinking which has brought me to think what I am thinking now. Now, what questions do I want to answer, after this brief recapitulation? First of all, how to determine the Bayesian rectangle of occurrences, regarding the possible future of renewable energies, and what is that rectangle actually likely to be? Answering this question means doing something we, economists, are second to none at doing poorly: forecasting. Splendid. Secondly, how does that Bayesian rectangle of limited choice depend on the place a given population lives in, and how does that geographical disparity impact the general scenario for our civilisation as a whole? Thirdly, what kind of social change is likely to follow along?

[1] Bayes, T., & Price, R. (1763). An essay towards solving a problem in the doctrine of chances. Philosophical Transactions (1683–1775), 53, 370–418.

A race across target states, or Bayes and Nakamoto together

My editorial

And so I continue prodding my idea of local, green energy systems with different theories of probability. The three inside me – my curious ape, my austere monk, and my happy bulldog – are having a conversation with two wise men: reverend Thomas Bayes, and Satoshi Nakamoto. If you need to keep track of my latest updates, you can refer to ‘Time puts order in the happening’ as well as to ‘Thomas Bayes, Satoshi Nakamoto et bigos’. And so I am at the lemmas formulated by Thomas Bayes, and at the basic analytical model proposed by Nakamoto. Lemma #1 by Thomas Bayes says: ‘The probability that the point o will fall between any two points in the line AB is the ratio of the distance between the two points to the whole line AB’. Although Thomas Bayes provides a very abundant geometric proof of this statement, I think it is one of those things you just grasp intuitively. My chances of ever being at the coast of the Pacific Ocean are greater than those of ever visiting one tiny coastal village in Hawaii, just because the total coastline of the Pacific is a much bigger expanse than one tiny Hawaiian village. The bigger my target zone is in relation to the whole universe of probability, the greater my probability of hitting the target. Now, in lemma #2, we read pretty much the same, just with some details added: ‘The ball W having been thrown, and the line os drawn, the probability of the event M in a single trial is the ratio of Ao to AB’.

I think a little reminder is due in relation to those two Bayesian lemmas. As for the detailed Bayesian logic, you can refer to Bayes, Price 1763[1]; I am just re-sketching the landscape now. The whole universe of probability, in Thomas Bayes’s method, is a flat rectangle ABCD, with corners named clockwise, starting from A at the bottom right, as if that whole universe started around 4 o’clock. AB is kind of the width of anything that can happen. Although this universe is a rectangle, it is essentially unidimensional, and AB is that dimension. I throw two balls, W and O. I throw W first, and the point where it lands in the rectangle ABCD becomes a landmark. I draw a line through that point, perpendicular to AB, crossing AB at the point o, and CD at the point s. The line os becomes the Mississippi river of that rectangle: from now on, two sub-universes emerge. There is the sub-universe of M happening, or success, namely of the second ball, the O, landing between the lines os and AD (in the East). On the other hand, there are all those strange things that happen on the other side of the line os, and those things are generally non-M: they are failures to happen. The probability of the second ball O hitting M, or landing between the lines os and AD, is equal to p, or p = P(M). The probability of the ball O landing west of the Mississippi, between the lines os and BC, is equal to q, and this is the probability of a single failure.
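The whole construction is easy to check numerically. Here is a minimal sketch, in Python, of the billiard-table setup just described; the function name, its parameters, and the choice to normalise the width AB to the interval [0, 1] are my assumptions, not Bayes’s.

```python
import random

# A minimal sketch of Bayes's billiard-table setup (lemmas #1 and #2).
# The width AB is normalised to [0, 1]; only the horizontal coordinate
# matters, since the construction is essentially unidimensional.

def simulate_bayes_table(n_trials: int, n_throws: int, seed: int = 42) -> float:
    """Average frequency of the 'success' M: the second ball O landing
    between the line os (drawn through the first ball W) and the edge AD."""
    rng = random.Random(seed)
    total_successes = 0
    for _ in range(n_trials):
        o_line = rng.random()  # where the first ball W fixes the line os on AB
        # each throw of O counts as M when it lands on the AD side of os
        successes = sum(rng.random() < o_line for _ in range(n_throws))
        total_successes += successes
    return total_successes / (n_trials * n_throws)

# Averaged over many positions of W, the frequency of M tends to 1/2,
# because the line os is itself uniformly distributed along AB.
print(simulate_bayes_table(10_000, 20))
```

Within any single trial, of course, the frequency of M tends to the ratio Ao to AB, exactly as lemma #2 states.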

On the grounds of those two lemmas, Thomas Bayes states one of the most fundamental propositions of his whole theory, namely proposition #8: ‘If upon BA you erect a figure BghikmA, whose property is this, that (the base BA being divided into any two parts, as Ab and Bb and at the point of division b a perpendicular being erected and terminated by the figure in m; and y, x, r representing respectively the ratio of bm, Ab, and Bb to AB, and E being the coefficient of the term in which occurs a^p*b^q when the binomial (a + b)^(p + q) is expanded) y = E*x^p*r^q. I say that before the ball W is thrown, the probability the point o should fall between f and b, any two points named in the line AB, and that the event M should happen p times and fail q [times] in p + q = n trials, is the ratio of fghikmb, the part of the figure BghikmA intercepted between the perpendiculars fg, bm, raised upon the line AB, to CA the square upon AB’.
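In modern notation, the height y in proposition #8 is simply the binomial probability mass: E is the binomial coefficient C(p + q, p), x plays the role of the probability of success, and r = 1 – x that of failure. A short sketch (the function name is mine):

```python
from math import comb

# Proposition #8 in modern notation: the height y of Bayes's figure is the
# binomial term  y = E * x^p * r^q,  where
#   E = C(p + q, p), "the coefficient of the term in which occurs a^p*b^q
#       when the binomial (a + b)^(p + q) is expanded",
#   x = Ab/AB, playing the role of the probability of success,
#   r = Bb/AB = 1 - x, the probability of failure.

def bayes_figure_height(x: float, p: int, q: int) -> float:
    """Probability of exactly p successes and q failures in p + q trials,
    given that the line os cuts AB at the abscissa x."""
    E = comb(p + q, p)
    return E * x**p * (1 - x)**q

# Example: 3 successes and 2 failures, with os cutting AB at x = 0.6.
print(round(bayes_figure_height(0.6, 3, 2), 4))  # → 0.3456
```

Summing that height over all possible splits of n trials into p successes and q failures gives 1, as it should for a probability distribution.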

Right, I think that with all those lines, points, sections, and whatnot, you could do with some graphics. Just click on this link to the original image of the Bayesian rectangle and you will see it as I tried to recreate it from the original. I think I did it kind of rectangle-perfectly. Still, according to my teachers of art, at school, my butterflies could very well be my elephants, so be clement in your judgment. Anyway, this is the Bayesian world, ingeniously reducing the number of dimensions. How? Well, in a rectangular universe ABCD, anything that can happen is basically described by the powers AB^BC or BC^AB. Still, if I assume that things happen just kind of on one edge, the AB, with this happening projected upon the opposite edge CD, and the remaining two edges, namely BC and DA, just standing aside and watching, I can reduce a square problem to a linear one. I think this is the whole power of geometry in mathematical thinking. Whilst it would be foolish to expect rectangular universes in our everyday life, it helps in dealing with dimensions.

Now, you can see the essence of the original Bayesian approach: imagine a universe of occurrences, give it some depth by adding dimensions, then give it some simplicity by taking some dimensions away from it, and map your occurrences in the expanse of things that can happen thus created. Now, I jump to Satoshi Nakamoto and his universe. I will quote, to give an accurate account of the original logic: ‘The success event is the honest chain being extended by one block, increasing its lead by +1, and the failure event is the attacker’s chain being extended by one block, reducing the gap by -1. The probability of an attacker catching up from a given deficit is analogous to a Gambler’s Ruin problem. Suppose a gambler with unlimited credit starts at a deficit and plays potentially an infinite number of trials to try to reach breakeven. We can calculate the probability he ever reaches breakeven, or that an attacker ever catches up with the honest chain, as follows:

p = probability an honest node finds the next block

q = probability the attacker finds the next block

q_z = probability the attacker will ever catch up from z blocks behind

Now, I rephrase slightly the original Nakamoto’s writing, as the online utilities I am using on my mutually mirroring blogs – and – are not really at home with displaying equations. And so, if p ≤ q, then q_z = 1. If, on the other hand, p > q, my q_z = (q/p)^z. As I mentioned in one of my previous posts, I use the original Satoshi Nakamoto’s thinking in the a contrario way, where my idea of local green energy systems is Nakamoto’s attacker, and tries to catch up on the actual socio-economic reality from z blocks behind. For the moment, and basically for lack of a better idea, I assume that my blocks can be carved in time or in capital. I explain: catching up from z blocks behind might mean catching up in time, like from a temporal lag, or catching up across the expanse of the capital market. I take a local community, like a town, and I imagine its timeline over the 10 years to come. Each unit of time (day, week, month, year) is one block in the chain. Me, with my new idea, I am the attacker, and I am competing with other possible ideas for the development and/or conservation of that local community. Each idea, mine and the others, tries to catch up over those blocks of time. Nakamoto’s logic allows me to guess the right time frame, in the first place, and my relative chances in the competition. Is there any period of time over which I can reasonably expect my idea to take over the whole community, sort of q_z = 1? This value z can also be my time advantage over other projects. If yes, this will be my maximal planning horizon. If not, I just simulate my q_z with different extensions of time (different values of z), and I try to figure out how my odds change as z changes.

If, instead of moving through time, I am moving across the capital market, my initial question changes: is there any amount of capital, like any number z of capital chunks, which makes my q_z = 1? If yes, what is it? If no, what schedule of fundraising should I adopt?
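The arithmetic of that race is simple enough to sketch in a few lines of Python. The function below just encodes Nakamoto’s two cases; the values of p, q and z in the example are hypothetical, for illustration only.

```python
# A sketch of the Nakamoto catch-up probability, read a contrario:
# my project is the "attacker", z blocks (units of time, or chunks of
# capital) behind the incumbent state of affairs.

def catch_up_probability(p: float, q: float, z: int) -> float:
    """Probability q_z that a chain with per-block success probability q
    ever catches up from z blocks behind a chain with probability p.
    Nakamoto's result: q_z = 1 if p <= q, else (q/p)^z."""
    if p <= q:
        return 1.0
    return (q / p) ** z

# How do my odds change as the handicap z grows? (hypothetical p and q)
for z in (1, 5, 10):
    print(z, round(catch_up_probability(p=0.6, q=0.4, z=z), 4))
```

One design remark: since q_z decays geometrically in z when p > q, every extra block of handicap multiplies my odds by the same factor q/p, which is exactly why the race is so unforgiving.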

Mind you, this is a race: the greater my z, the lower my q_z. The more time I have to cover in order to have my project launched, the lower my chances of ever catching up. This is a notable difference between the Bayesian framework and that of Satoshi Nakamoto. The former says: your chances to succeed grow as the size of your target zone grows in relation to everything that can possibly happen. The more flexible you are, the greater your chances of success. On the other hand, in Nakamoto’s framework, the word of wisdom is different: the greater your handicap with respect to other projects, ideas, people and whatnot, in terms of time or resources to grab, the lower your chances of succeeding. The total wisdom coming from that is: if I want to design a business plan for those local, green energy systems, I have to imagine something flexible (a large zone of target states), and, at the same time, something endowed with a pretty comfortable pole position over my rivals. I guess that, at this point, you will say: good, you could have come to that right at the beginning. ‘Be flexible and gain some initial advantage’ is not really science. This is real life. Yes, but what I am trying to demonstrate is precisely the junction between the theory of probability and real life.

[1] Bayes, T., & Price, R. (1763). An essay towards solving a problem in the doctrine of chances. Philosophical Transactions (1683–1775), 53, 370–418.

Thomas Bayes, Satoshi Nakamoto et bigos

My editorial

I hesitate between continuing to explore the mathematical logic of Thomas Bayes (Bayes, Price 1763[1]), and that of Satoshi Nakamoto, the mysterious founder of Bitcoin. I tell myself it would be interesting to be properly Polish, this time. In Poland, we have a dish called « bigos »: a bit like French choucroute, but with a stronger predilection for mixing diverse ingredients into a base of cooked cabbage. Cooked cabbage has such a strong smell that whatever you add to it serves to mitigate and refine it. My cabbage is the idea of local energy systems based on renewable energies, and probability theory is the water it cooks in. I think it is interesting to mix, in that base, Thomas Bayes and Satoshi Nakamoto, « bigos » style.

With Thomas Bayes, then, I enter an essentially spatial, geometric universe, where everything that can possibly happen is defined as a rectangle ABCD, and where two balls, thrown one after the other, simulate the events whose occurrence interests me most. Once the first ball, which Thomas Bayes calls « W », has been thrown onto the rectangle, it stops at a definite point. A straight line, parallel to AD, is drawn through that point. It cuts the sides CD and AB at points named, respectively, « s » and « o ». And so my universe shrinks to a smaller rectangle, comprised between the side AD of the big rectangle and the line os. As for my second ball, called « O » in Bayes’s original notation, I throw it several times, or « n » times. If the ball O lands inside that small rectangle, between the side AD and the line os, it is a success, which Thomas Bayes calls M. The number of times I achieve that success M is symbolised by « p », and the number of failures (no M, sorry) carries the symbol q.

With Satoshi Nakamoto, I plunge into a universe of financial transactions carried out Blockchain-style, that is, as consecutive endorsement guaranteed by a chain of ledgers in a network. According to Satoshi Nakamoto’s initial definition: ‘We consider the scenario of an attacker trying to generate an alternate chain faster than the honest chain. Even if this is accomplished, it does not throw the system open to arbitrary changes, such as creating value out of thin air or taking money that never belonged to the attacker. Nodes are not going to accept an invalid transaction as payment, and honest nodes will never accept a block containing them. An attacker can only try to change one of his own transactions to take back money he recently spent’.

Intentionality is the first notable difference between these two universes of probability: that of Thomas Bayes and that of Satoshi Nakamoto. Bayesian logic treats the events under study as the result of pure chance, or of a process so complex and unknown that, from our point of view, it is chance. The logic of Bitcoin is a universe of intentional actions, where we speak of success or failure in accomplishing an objective. That is some interesting « bigos ». The second difference, more abstract and perhaps more subtle, is the way of defining the success of an action. With Thomas Bayes, success consists in finding oneself, when all is said and done, within a range of possible states, say between the boundary of my universe and a straight line that cuts it in two. With Nakamoto, the attacker can speak of success if and only if he accomplishes a very concrete objective, that is, if he manages to cancel his own payments and bring the money back into his own pocket.

If I use these two frames of reference to approach, scientifically, my idea of local energy systems, with my four conditions Q(E) = D(E) = S(RE); P(E) ≤ PP(E); ROA ≥ ROA*; W/M(T1) > W/M(T0), Bayesian logic tells me that the reference values in my business plan will be more or less exogenous to my efforts: they will be like the position of that first ball W. The demand for energy D(E), the individual purchasing power PP(E) with respect to that energy, the reference value ROA* for my rate of return on assets, as well as the initial proportion W/M(T0) between transactions W, paid in Wasun, the local virtual currency, and those carried out in official money M: all that will be given objectively, more or less. Once I have these benchmarks, I can either continue in the Bayesian logic – and study the probability of a whole range of situations that meet my general conditions – or follow Satoshi Nakamoto’s logic and try to describe possible successes and failures in very, very precise terms.
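For clarity, those four conditions can be written down as one predicate. The Python sketch below is mine; all variable names are placeholders for the quantities named in the text, and the numbers in the example are purely hypothetical.

```python
# A minimal sketch of the four conditions as one predicate.
# Placeholder names for the quantities in the text:
#   q_energy = Q(E), demand = D(E), supply_re = S(RE)
#   price = P(E), purchasing_power = PP(E)
#   roa, roa_star = ROA and its reference value ROA*
#   wm_t0, wm_t1 = the Wasun-to-official-money ratio W/M at T0 and T1

def conditions_met(q_energy, demand, supply_re,
                   price, purchasing_power,
                   roa, roa_star,
                   wm_t0, wm_t1,
                   tol=1e-9):
    return (abs(q_energy - demand) < tol        # Q(E) = D(E)
            and abs(demand - supply_re) < tol   # D(E) = S(RE)
            and price <= purchasing_power       # P(E) <= PP(E)
            and roa >= roa_star                 # ROA >= ROA*
            and wm_t1 > wm_t0)                  # W/M(T1) > W/M(T0)

# Hypothetical numbers, for illustration only:
print(conditions_met(100.0, 100.0, 100.0, 0.15, 0.20, 0.08, 0.06, 0.10, 0.12))
```

In the Bayesian reading, success is the whole set of parameter combinations for which this predicate returns True, a target zone rather than a single point.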

Thomas Bayes’s logic seems to rest, to a large extent, on lemma #1, which he formulates right after tracing that rectangular universe ABCD with two balls thrown into it: ‘The probability that the point o will fall between any two points in the line AB is the ratio of the distance between the two points to the whole line AB’. For those who are only moderately keen on maths: a lemma is a sort of adjacent theorem, instrumental to the main theorem. A lemma is thus a hypothesis proven, in passing as it were, within the framework of a larger proof. Thomas Bayes offers a very elaborate geometric proof of this lemma, although I personally think it is more interesting to demonstrate the meaning of the proposition in real life than to follow a rigorous geometric path. So here goes: you turn your back to a tree and you throw stones over your shoulder, without looking. There is a sort of universe behind you, made of all the possible places where your stones can land. Inside that universe, there is a sub-universe, made of the tree. Every time a stone hits the tree, the event counts as a success. Otherwise, it is a failure. Common sense says that the bigger that tree behind you, relative to your complete field of throw, the greater the chances of your stones hitting the tree. The operational logic behind this lemma is just as down-to-earth: the broader the limits of what I define as success, relative to the entire size of my universe of probability, the greater my chances of achieving that success. If a girl is looking for a tall man as a candidate for engagement, the probability of finding one between 175 centimetres and 2 metres 10 is greater than that of finding a future father of her children who measures exactly 189 centimetres.

Bayesian logic thus implies that I define my success as a range of possible situations. Satoshi Nakamoto, by contrast, follows a logic of temporal sequence. A situation has two possible outcomes: either the attacker succeeds in fraudulently pocketing his money back, or he fails. Nakamoto’s probability is based on the number of steps needed to achieve the outcome. The more nodes in the network the attacker has to dominate, relative to the total number of nodes, the harder it will be for him to reach his goal. The more honest nodes we have in the network, in proportion to its total size, the easier it is to preserve its financial integrity. Nakamoto speaks of a sequence because reaching each node and trying to dominate it is a separate step in the sequence of actions undertaken by the attacker. Note: it is the same basic logic as in Bayes, the logic of proportions, but represented as a chain of events rather than as a flat, static universe.

Coming back to my own business, I can approach my general concept in these two distinct ways. I can define my objective the way I have already shown – Q(E) = D(E) = S(RE); P(E) ≤ PP(E); ROA ≥ ROA*; W/M(T1) > W/M(T0) – or I can represent those conditions as sequences of actions and describe them in terms of the number of steps needed. How many customers do I have to acquire to achieve Q(E) = D(E) = S(RE)? How many nodes do I need to create in my Wasun network to achieve W/M(T1) > W/M(T0)? I can also mutate this (Nakamotian?) logic a tiny bit and replace the time dimension with a resource dimension: how much capital do I have to invest to reach my objectives, and so on?

[1] Bayes, T., & Price, R. (1763). An essay towards solving a problem in the doctrine of chances. Philosophical Transactions (1683–1775), 53, 370–418.

Conversations between the dead and the living (no candles)

Today’s editorial

I have been away from blogging for two days. I have been finishing that article about technological change seen from an evolutionary perspective, and I hope I have finished, at least as a raw manuscript. If you are interested, you can download it from Research Gate or from my own website with Word Press. Now, as the paper is provisionally finished, I feel like having an intellectual stroll, possibly into the recent past. I am tempted to apply those evolutionary patterns of thinking to something I had been quite busy with a few months ago, namely to financial tools, including virtual currencies, as a means to develop new technologies. I had been particularly interested in the application of virtual currencies to the development of local power systems based on renewable energies, but in fact, I can apply the same frame of thinking to any technology, green energy or else. Besides, as I was testing various empirical models to represent evolutionary change in technologies, monetary variables frequently poked their heads through some hole, usually as correlates to residuals.

So, I return to money. For those of my readers who would like to refresh their memory or simply get the drift of that past writing of mine, you can refer, for example, to ‘Exactly the money we assume’ or to ‘Some insights into Ethereum whilst insulating against bullshit’, as well as to other posts I placed around that time. Now, I want to move on and meddle a bit with Bayesian statistics, and more exactly with the source method presented in the posthumous article by reverend Thomas Bayes (Bayes, Price 1763[1]), which, by the way, you can get from the JSTOR library via this link. I want both to wrap my mind around Thomas Bayes’s way of thinking, and to refresh my own thinking about monetary systems. I have that strange preference for organizing conversations between the dead and the living (no candles), so I feel like putting reverend Bayes in conversation with Satoshi Nakamoto, the semi-mythical founding father of the Bitcoin movement, whose article, which you can download via this link from my Word Press website, contains some mathematical analysis based on the Poisson probability.

My initial question, the one I had been wrestling with this Spring, was the following: how can a local community develop a local system of green energy, and a local virtual currency, and how can these two help the development or the transformation of said local community? Why do I bother, posthumously, reverend Thomas Bayes with this question? Well, because this is what he stated as the purpose of his article. In the general formulation of the problem, he wrote: ‘Given the number of times in which an unknown event has happened and failed: Required the chance that the probability of its happening in a single trial lies somewhere between any two degrees of probability that can be named’. The tricky part in this statement is the ‘unknown’ part. When we studied probabilities at high school (yes, some of us didn’t take a nap during those classes!), one of the first things we were taught to do was to define exactly the event whose probability of happening we want to assess. You remember? Red balls vs. black balls, in a closed box? Rings a bell? Well, Thomas Bayes stated a different problem: how to tackle the probability that something unknown happens? Kind of a red ball cross-bred with a black ball, with a hint of mésalliance with a white cube, in the family records. In the last, concluding paragraph of his essay, Thomas Bayes wrote: ‘But what recommends the solution in this Essay is that it is complete in those cases where information is most wanted, and where Mr De Moivre’s solution of the inverse problem can give little or no direction, I mean, in all cases where either p or q are of no considerable magnitude. In other cases, or when both p and q are very considerable, it is not difficult to perceive the truth of what has been here demonstrated, or that there is reason to believe in general that the chances for the happening of an event are to the chances for its failure in the same ratio with that of p to q. 
But we shall be greatly deceived if we judge in this manner when either p or q are small. And though in such cases the Data are not sufficient to discover the exact probability of an event, yet it is very agreeable to be able to find the limits between which it is reasonable to think it must lie, and also to be able to determine the precise degree of assent which is due to any conclusions or assertions relating to them’.

Before I go further: in the original notation by Thomas Bayes, p and q are the respective numbers of successes and failures, and not probabilities. Especially if you are a native French speaker, you might have learnt, at school, p and q as probabilities, so be on your guard. You’d better always be on your guard, mind you. You never know where your feet can lead you. So, I am bothering the late reverend Bayes because he was investigating the probability of scoring a relatively small number of successes in a relatively small number of trials. If you try to launch a new technology, locally, how many trials can you have? I mean, if your investors are patient, they can allow some trial and error, but in reasonable amounts. You also never know for sure what a reasonable amount of trial and error means for a given investor. You have the unknown event, see? Just as Thomas Bayes stated his problem. So I take my local community, I make a perfect plan, with a plan B possibly up our local sleeve, I take some risks, and then someone from the outside world wants to assess the odds that I succeed. The logic by Thomas Bayes can be a path to follow.
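To make that concrete: Bayes’s solution amounts to integrating θ^p·(1 − θ)^q between two chosen bounds and normalizing over [0, 1], which gives the chance that the unknown probability of success lies between those bounds. A minimal numerical sketch in pure Python (the function name and the step count are my own assumptions, not Bayes’s notation):

```python
def bayes_interval_probability(p, q, a, b, steps=100_000):
    """Given p observed successes and q observed failures, return the chance
    that the unknown per-trial probability of success lies between a and b.
    Proportional to the integral of theta**p * (1 - theta)**q over [a, b],
    normalized by the same integral over [0, 1] (trapezoid rule)."""
    def integral(lo, hi):
        h = (hi - lo) / steps
        total = 0.0
        for i in range(steps + 1):
            theta = lo + i * h
            weight = 0.5 if i in (0, steps) else 1.0  # trapezoid end-point weights
            total += weight * theta**p * (1.0 - theta)**q
        return total * h
    return integral(a, b) / integral(0.0, 1.0)
```

With no data at all (p = 0, q = 0) the answer is just the width of the interval, i.e. complete ignorance; with, say, 3 successes and 1 failure, most of the probability mass sits above one half, which is exactly the kind of “limits between which it is reasonable to think it must lie” that Bayes was after.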

Satoshi Nakamoto, in that foundational article about the idea of the Bitcoin, treated mostly the issues of security. Still, he indirectly gives an interesting insight concerning the introduction of new inventions in an essentially hostile environment. When he simulates a cyberattack on a financial system, he uses the general framework of Poisson probability to assess the odds that an intruder from outside can take over a network of mutually interacting nodes. I am thinking about inverting his thinking, i.e. about treating the introduction of a new technology, especially in a local community, as an intrusion from outside. I could treat Nakamoto’s ‘honest nodes’ as the conservatives in the process, resisting novelty, and the blocks successfully attacked by the intruder would be the early adopters. Satoshi Nakamoto used the Poisson distribution to simulate that process, and here he meets reverend Bayes, I mean, metaphorically. The Poisson distribution is frequently called the ‘probability of rare events’, and it grows out of the same general framework as the original Bayesian development: something takes place n times in total, in p cases that something is something we wish to happen (success), whilst in q cases it is utter s**t happening (failure), and we want to calculate the compound probability of having p successes and q failures in n trials; the Poisson distribution is what that framework converges to when n gets large and successes stay rare. By the way, if you are interested in the original work by Simeon Denis Poisson, a creative Frenchman who, technically a mathematician, tried his hand at very nearly everything else, I am placing on my WordPress site two of his papers: the one published in 1827 and that of 1832 (presented for the first time in 1829).
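For reference, the calculation Nakamoto runs in that article can be sketched in a few lines: the attacker’s progress, while the honest chain adds z blocks, is modeled as a Poisson variable, combined with a gambler’s-ruin catch-up term. A sketch of that computation (variable names are mine; the formula follows the whitepaper’s section on attacker probability):

```python
import math

def attacker_success(q, z):
    """Probability that an attacker controlling a share q of the computing
    power ever catches up from z blocks behind the honest chain."""
    p = 1.0 - q                # honest nodes' share of computing power
    lam = z * (q / p)          # expected attacker progress (Poisson mean)
    prob = 1.0
    for k in range(z + 1):
        # Poisson probability that the attacker has mined exactly k blocks
        poisson = math.exp(-lam) * lam**k / math.factorial(k)
        # subtract the cases where the attacker still fails to catch up
        prob -= poisson * (1.0 - (q / p) ** (z - k))
    return prob
```

With q = 0.1 and z = 5 this comes out at roughly 0.0009, consistent with the table Nakamoto prints; in my inverted reading, q becomes the share of early adopters and z the head start of the conservative incumbents.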

And so I have that idea of developing a local power system, based on green energies, possibly backed with a local virtual currency, and I want to assess the odds of success. Both the Bayesian way of thinking and Poisson’s are sensitive to how we define, respectively, success and failure, and to how much uncertainty we leave in those definitions. In business, I can define my success in various metrics: the share of the market covered by my sales, prices, capital accumulated, return on that capital etc. This is, precisely, the hurdle to jump when we pass from the practice of business to its theoretical appraisal: we need probabilities, and in order to have probabilities, we need some kind of event being defined, at least foggily. What’s a success, here? Let’s try the following: what I want is a local community entirely powered with locally generated, renewable energies, in a socially and financially sustainable manner.

‘Entirely powered’ means 100%. This one is simple. Then, I am entering the dark forest of assumptions. Let’s say that ‘socially sustainable’ means that every member of the local community should have that energy accessible within their purchasing power. ‘Financially sustainable’ is trickier: investors can be a lot fussier than ordinary folks regarding what is a good deal and what isn’t. Still, I do not know, a priori, who those investors could possibly be, and so I take a metric which leaves a lot of room for further interpretation, namely the rate of return on assets. I prefer the return on assets (ROA) to the rate of return on equity (ROE), because for the latter I would have to make some assumptions regarding the capital structure of the whole thing, and I want as weak a set of assumptions as possible. I assume that said rate of return on assets should be greater than or equal to a benchmark value. By the way, weak assumptions in science are the exact opposite of weak assumptions in life. In life, weak assumptions mean I am probably wrong because I assumed too much. In science, weak assumptions are probably correct, because I assumed just a little, out of the whole expanse of what I could have assumed.

Right. Good. So what I have are the following variables: local demand for energy D(E), local energy supply from renewable sources S(RE), price of renewable energy P(RE), purchasing power regarding energy PP(E), and rate of return on assets (ROA). With these, I form my conditions. Condition #1: the local use of energy is a local equilibrium between the total demand for energy and the supply of energy from renewable sources: Q(RE) = S(RE) = D(E). Condition #2: the price of renewable energy is affordable, or: P(RE) ≤ PP(E). Condition #3: the rate of return on assets is greater than or equal to a benchmark value: ROA ≥ ROA*. That asterisk on the right side of that last condition is the usual way of marking a value we treat as a peg. Right, I use the asterisk in other types of elaborate expressions, like s*** or f***. The asterisk is the hell of a useful symbol, as you can see.

Now, I add that idea of a local, virtual currency based on green energies. Back in the day, I used to call it ‘Wasun’, a play on the words ‘water’ and ‘sun’. You can look up ‘Smart grids and my personal variance’ or ‘Les moulins de Wasun’ (in French) in order to catch a bit (again?) of my drift. I want the local, virtual currency to be a significant part of the local monetary system. I define ‘significant part’ as an amount likely to alter the supply of credit, in established currency, in the local market. I use that old trick of the supply of credit being equal to the supply of money, and so being possible to symbolize with M. I assign the symbol ‘W’ to the local supply of the Wasun. I take two moments in time: the ‘before’, represented as T0, with T1 standing for the ‘after’. I set condition #4: W/M(T1) > W/M(T0).
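Stitched together, the four conditions make one compound event, and it helps to see how narrow that event is. A toy predicate (all variable names and the equality tolerance are my own hypothetical choices, not part of the original conditions):

```python
def is_success(S_RE, D_E, P_RE, PP_E, ROA, ROA_star,
               W_M_t0, W_M_t1, tol=1e-9):
    """True only if all four conditions of 'success' hold at once."""
    cond_1 = abs(S_RE - D_E) <= tol   # Q(RE) = S(RE) = D(E): 100% renewable equilibrium
    cond_2 = P_RE <= PP_E             # renewable energy is affordable
    cond_3 = ROA >= ROA_star          # return on assets clears the benchmark
    cond_4 = W_M_t1 > W_M_t0          # the Wasun's share of the money supply grows
    return cond_1 and cond_2 and cond_3 and cond_4
```

One false clause anywhere, and the whole event flips to failure, which is precisely why the probability of the compound event is the interesting quantity.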

Wrapping it up, any particular event falling into:

Q(RE) = S(RE) = D(E)

P(RE) ≤ PP(E)

ROA ≥ ROA*

W/M(T1) > W/M(T0)

… is a success. Anything outside those four conditions is a failure. Now, I can take three basic approaches in terms of probability. Thomas Bayes would assume a certain number n of trials, look for the probability of all four conditions being met in one single trial, and then would ask me how many trials (p) I want to have successful, out of n. Simeon Denis Poisson would rather have taken an interval of time, and then would have tried to assess the probability of having all four conditions met at least once in that interval of time. Satoshi Nakamoto would make up yet a different strategy. He would assume that my project is just one of the many going on in parallel in that little universe, and that other projects try to achieve their own conditions of success, similar to mine or different, as I try to do my thing. The next step would then be to define whose success would be my failure, and then I would have to compute the probability of my success in the presence of those competing projects. Bloody complicated. I like it. I’m in.
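Assuming we had a per-trial probability π of that compound event (π itself would have to come out of the four conditions, which I do not pretend to know yet), the first two strategies reduce to textbook formulas. A sketch, where π, n and the event rate are all placeholders of my own:

```python
import math

def bayes_style_exactly(n, p_successes, pi):
    """Bayes-flavoured question, simplified to a binomial count:
    the chance of exactly p successes in n trials, per-trial probability pi."""
    return math.comb(n, p_successes) * pi**p_successes * (1 - pi)**(n - p_successes)

def poisson_at_least_once(rate, interval):
    """Poisson-flavoured question: the chance the event happens at least
    once over an interval, given an average rate of occurrence."""
    return 1.0 - math.exp(-rate * interval)
```

So, for instance, with π = 0.2 and n = 5 trials, exactly one success carries a probability of 5 × 0.2 × 0.8⁴ ≈ 0.41; Nakamoto’s competitive version has no such closed form here, which is exactly the ‘bloody complicated’ part.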

[1] Bayes, T., & Price, R. (1763). “An Essay towards Solving a Problem in the Doctrine of Chances. By the Late Rev. Mr. Bayes, F.R.S., Communicated by Mr. Price, in a Letter to John Canton, A.M.F.R.S.” Philosophical Transactions (1683–1775), 370–418.